
SOFTWARE QUALITY
1. Meets customer requirements in terms of functionality
2. Meets customer expectations in terms of performance, reusability, compatibility
3. Reasonable cost for customers to purchase
4. Reasonable time for the development organization to release

SOFTWARE DEVELOPMENT PROCESS


Requirements gathering

Analysis and planning

Design (Logical representation)

Coding (Physical representation)

Testing

Release and Maintenance

SOFTWARE DEVELOPMENT PROCESS MODELS


1. Waterfall Model (Requirements are clear)
2. Prototype Model (Requirements are ambiguous (Confusion))
3. Spiral Model (Requirements are enhancing)
4. Agile Model (Requirements are changing)

Waterfall Model: Requirements are clear and constant

Prototype Model: When requirements are ambiguous, the software organization develops a
sample model first and then builds the real software.

Spiral Model: When the requirements are enhancing e.g. eseva

Agile Model: When the customer requirements are suddenly changing. E.g. Mobile
Application


Note 1:
All software development models are derived from the waterfall model (linear
sequential model).

Note 2:
All of the above software development process models maintain a single stage of
testing, and that stage is conducted by the same development people.

Testing is of two types: SQA and SQC.

Software Quality Assurance (SQA)

Monitoring and measuring the strength of the development process is called SQA.

Software Quality Control (SQC)

The validation of the software product with respect to customer requirements and
expectations is called SQC.

FISH MODEL
(Multiple stages of development and testing)

Analysis (BRS, SRS) -> Design (HLD & LLD) -> Coding -> System Testing ->
Maintenance (Test Software Changes)

Real Time Terms of System Testing


BRS:
The business requirement specification defines the requirements of the customer to
be developed as new software. This document is also known as the customer requirement
specification (CRS) or user requirement specification (URS).

SRS:
The software requirement specification defines the functional requirements to be
developed and the system requirements to be used (it converts the non-technical
information in the BRS into technical information). E.g. bank deposit = addition
(functional requirement) + some languages (system requirements).

Review:


Responsible people estimate the completeness and correctness of a document
through reviews, e.g. walkthroughs (studying a document from first line to last line),
inspections (searching for a particular issue), and peer reviews (comparing the document
with other documents).

Design:
HLD:
The high-level design document defines the overall architecture of the system, from the
root module (e.g. login) to the leaf module (e.g. logout). This document is also known as
the architectural design or external design.

LLD:
The low-level design document defines the internal architecture of a corresponding
module or functionality. This document is also known as the internal design document.
E.g. a website like Yahoo.

Prototype:
A sample model of the software is called a prototype. It consists of interfaces
(screens) without functionality.

Coding:
Program:
A program is a set of executable statements: some statements take inputs, some
perform processing, and others display output.
A module (unit) is a combination of programs, and the software is a combination
of modules. A minimal sketch follows.
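For illustration only, a minimal Python sketch of such a program (the deposit example
and its prompts are hypothetical, not from the source):

    # Input, process, and output statements making up one small program.
    def deposit(balance, amount):
        # Process: add the deposit amount to the balance.
        return balance + amount

    def main():
        balance = float(input("Current balance: "))       # input statement
        amount = float(input("Deposit amount: "))         # input statement
        print("New balance:", deposit(balance, amount))   # process + output

    if __name__ == "__main__":
        main()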

White Box Testing Techniques:


These are program-based testing techniques, also
known as glass box (or) open box testing. The responsible people use these techniques to
verify the internal structure of the corresponding program.

System Testing:
Black Box Testing:
It is a system-level testing technique. The responsible people use
these techniques to validate external functionality.

Build:

The executable form (.exe) of a system is called a build, i.e. the finally integrated
set of all modules.

AUT: Application under test

V-MODEL
V stands for Verification and Validation. This model defines conceptual mapping in
between development stages and testing stages

BRS/CRS/URS ---------------- User Acceptance Testing
     (analysis reviews)
SRS ------------------------ System Testing
HLD ------------------------ Integration Testing (programmers)
     (design reviews)
LLD ------------------------ Unit Testing (programmers)

Coding (programmers)

In the above V-Model, the multiple stages of the development process are combined
with the multiple stages of the testing process. Following this model, most organizations
maintain a separate testing team only for the system testing stage, because that stage is
the bottleneck phase in the software development process. After system testing, the
organizations plan to release the software to the customer site.

1. The Reviews In Analysis:


Generally the software development process
starts with requirements gathering and analysis. In this phase, business analyst
category people develop the BRS and SRS documents. For completeness and
correctness of these documents, the same business analysts conduct a review
meeting. In this review they concentrate on the factors below.

Are they right requirements? (Are they correct?)


Are they complete requirements? (Is any missing?)

Are they achievable requirements?(Whether it is possible)

Are they reasonable requirements? (Time factor)

Are they testable requirements? (E.g. satellite applications)

2. Reviews In Design:
After completion of analysis and its reviews, designer
category people develop the HLD and LLD. To verify the completeness and
correctness of those documents, the same designers conduct a review meeting.
In this review they concentrate on the factors below.

Are they understandable designs? (Flow of diagram)

Are they designing right requirements? (With correct functions)

Are they designing complete requirements? (What the project needs)

Are they followable designs? (Understandable to the next level)

3. Unit Testing:
After completion of design and its reviews, programmers
concentrate on coding to physically construct a software build. In this phase,
programmers write programs and verify those programs using white box testing
techniques. These techniques are of four types:

a. Basis Path Testing
b. Control Structure Testing
c. Program Technique Testing
d. Mutation Testing

a. Basis Path Testing:


During this test, programmers verify whether a
program is running or not. In basis path testing, programmers follow the
procedure below to test a complete program (a counting sketch follows the diagram).
Step 1: Draw the flow diagram for that program
Step 2: Calculate the number of independent paths in that program

Flow graph: an IF node with True/False branches, each branch leading to a further
IF node with its own True/False branches.

(Cyclomatic complexity) = 4
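A small sketch (not from the source) of the counting rule behind Step 2, using
V(G) = number of binary decision points + 1:

    # Cyclomatic complexity as V(G) = D + 1 for D binary decision points.
    def cyclomatic_complexity(num_decisions):
        return num_decisions + 1

    # The flow graph above has three IF nodes, so V(G) = 3 + 1 = 4:
    # four independent paths must be exercised in basis path testing.
    assert cyclomatic_complexity(3) == 4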


b. Control Structure Testing:


During this test, programmers concentrate
on the correctness and completeness of the corresponding program's output. They check
every statement, including if conditions, for loops, memory allocations, etc.

c. Program Technique Testing:


During this test, programmers verify the
execution speed of the corresponding programs. In this testing, programmers take the
help of monitors and profilers. If the program's speed is not good, programmers
change the structure of that program without disturbing its functionality (see the
sketch below).
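A minimal sketch of such speed measurement using Python's standard timeit and
cProfile modules (the compute function is hypothetical):

    import cProfile
    import timeit

    def compute(n):
        # Hypothetical program under test: sum of squares.
        return sum(i * i for i in range(n))

    # Measure raw execution speed ...
    elapsed = timeit.timeit(lambda: compute(100_000), number=10)
    print("10 runs took %.3fs" % elapsed)

    # ... and profile where the time is spent before restructuring.
    cProfile.run("compute(100_000)")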

d. Mutation Testing:
Mutation means a change in a program. Programmers
perform deliberate changes in a program and repeat their tests. Through this test
repetition, programmers verify the completeness and correctness of their testing of
that program.

Diagram: the same tests are run on the original executable statements and on each
changed (mutated) copy. If the tests still pass on a changed copy, the testing is
incomplete; if they fail, the testing is complete.
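A minimal mutation-testing sketch under these assumptions (the add function and its
tests are hypothetical):

    def add(a, b):           # original program
        return a + b

    def add_mutant(a, b):    # deliberate change: + becomes -
        return a - b

    def tests_pass(fn):
        # The same tests run on the original and on the changed copy.
        return fn(2, 3) == 5 and fn(0, 0) == 0

    assert tests_pass(add)                 # original passes
    if tests_pass(add_mutant):
        print("Change not detected -> incomplete testing")
    else:
        print("Change detected -> complete testing")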

4. Integration Testing:
After completion of dependent programs' development and
unit testing, programmers connect them to form a complete software build. In this
integration, programmers verify the interfaces between every two programs or
modules. There are four approaches to integrating modules: top-down, bottom-up,
hybrid, and big bang.

a. Top-down Approach:
In this approach, programmers interconnect the
main module with its sub modules. In the place of an under-construction sub module,
programmers use a temporary program called a stub (or) called program; a sketch
follows the diagram.

Diagram: Main connects to Sub1 and Sub2; a stub stands in for the disconnected
(under-construction) sub module and diverts control back to the main module.
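A minimal sketch of a stub, assuming hypothetical module names:

    # Sub1 is still under construction, so a stub stands in for it.
    def sub1_stub(request):
        # Returns a canned value and diverts control back to the main module.
        return "stub-result"

    def main_module():
        result = sub1_stub("open account")   # call the stub instead of the real Sub1
        return "main handled: " + result

    print(main_module())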

b. Bottom-up Approach:
In this approach, programmers interconnect the
sub modules without using the under-construction main module. In the place of that
main module, programmers use a temporary program called a driver (or) calling
program; a sketch follows the diagram.

Diagram: a driver takes the place of Main and diverts control to the next stage,
i.e. to Sub1 and Sub2.
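A minimal sketch of a driver, assuming hypothetical module names:

    # The main module is under construction; a driver exercises the sub modules.
    def sub1(x):
        return x + 1

    def sub2(x):
        return x * 2

    def driver():
        # Temporary calling program standing in for the main module.
        assert sub2(sub1(3)) == 8
        print("Sub1 and Sub2 integrate correctly")

    driver()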

c. Hybrid-Approach:
This approach is a combined form of the top-down
and bottom-up approaches. It is also known as the sandwich approach.

Diagram: a driver stands in for Main above Sub1, while a stub stands in for an
under-construction module between Sub1 and Sub2/Sub3.


d. Big-Bang Approach:
In this approach, programmers interconnect all
programs only after the entire coding is complete.

5. System Testing:
After completion of integration testing, development people
release the software build to the separate testing team. This separate testing team
validates that software build w.r.t customer requirements. In this level of testing, the
separate testing team uses black box testing techniques. These techniques are
classified into 3 categories:

a. Usability Testing
b. Functional Testing
c. Non-Functional Testing

a. Usability Testing:
Generally the system test execution starts with
usability testing. During this test, test engineers validate the user-friendliness of
every screen in the application build. Usability testing is also known as
accessibility testing. It consists of two sub-techniques:

1. User Interface Testing


2. Manual Support Testing

1. User Interface Testing:


(This testing can start as early as possible, even before the build.)
During this test, test engineers apply the 3 factors below to every screen of
the application build:

Ease of use (understandable screens)
Look & feel (attractive screens)
Speed of interface, e.g. short navigations

2. Manual Support Testing:


During this test, test engineers
study the help documents of the application build to estimate their context
sensitivity. (This test is applied before releasing the software build.)

Conclusion:
Generally the technical writers of the company develop
the user manuals before the software is released to the customer site. For this reason,
manual support testing comes into the picture at the end of system testing.


Flow: receive software build from developers -> user interface testing ->
functional & non-functional testing -> manual support testing (the first and last
steps constitute usability testing).

b. Functional Testing:
It is a mandatory (compulsory) testing level in
system testing. During functional testing, test engineers concentrate on
"meeting customer requirements". Functional testing is classified into two
sub-techniques: functionality testing and sanitation testing.

1. Functionality Testing:
During this test, test engineers verify
whether the build's functionalities are working correctly or not. In this testing,
test engineers concentrate on the coverages below.

GUI coverage (or) behaviour coverage (changes in properties of objects in screens)

Error handling coverage (verify the prevention of wrong operations)

Input domain coverage (verify the size & type of every input object's values)

Manipulations coverage (correctness of outputs)

Backend coverage (the impact of front-end operations on back-end tables)

Order of functionalities coverage

2. Sanitation Testing:
It is also known as garbage testing. During this
test, test engineers find extra functionalities in the application build
w.r.t customer requirements.

Note:
A defect means not only a missing functionality or a mistake in
functionality, but also an extra functionality.

c. Non-Functional Testing:
During non-functional testing, the testing
team concentrates on the extra characteristics of the software build that satisfy
customer site people.

1. Compatibility Testing:
It is also known as portability testing.
During this testing, test engineers validate whether the application build runs on
the customer's expected platforms or not. Platform means operating system,
compilers, browsers, and other system software.

2. Configuration Testing:
It is also known as hardware compatibility
testing. During this test, test engineers run the application build with various
technologies of hardware devices to estimate hardware compatibility.

E.g. different technology printers, different topology networks.

3. Recovery Testing:
It is also known as reliability testing. During this
test, test engineers validate whether the application build changes from an
abnormal state back to a normal state or not.

Diagram: the build moves from the abnormal state back to the normal state using
back-up and recovery procedures.

4. Inter-System Testing:
It is also known as interoperability testing
(or) end-to-end testing. During this test, test engineers validate whether the
application build co-exists with other software applications to share common
resources.

Diagram (e.g. E-Seva): the WBA, EBA, TBA, and ITA servers and an existing news
component share a common resource, the local DB server.

5. Security Testing:
It is also known as penetration testing. During this
test, test engineers validate the three factors below:

a. Authorization Testing
b. Access Control (Authentication) Testing
c. Encryption / Decryption Testing

a. Authorization Testing:
In authorization testing, test engineers
validate whether the application build allows valid users and prevents invalid
users or not.

b. Access control Testing:


In access control testing, test engineers
validate the permissions of users to utilize the application build's services.

c. Encryption / Decryption Testing:


In encryption / decryption
testing, test engineers try to trace cipher text back to the original text.

Diagram: the client (sender) encrypts the request into cipher text; the server
(receiver) decrypts it back to the original text, and the response travels the
same way in reverse (encrypted at the server, decrypted at the client).

Note:
In the above security testing, the authorization and access control tests are reasonable
for test engineers to apply, but the encryption / decryption test is conducted by separate
people in a security team. A toy round-trip check is sketched below.
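A toy round-trip only, to illustrate the idea (real security testing uses proper
cryptography, not this XOR sketch):

    def xor_cipher(data, key):
        # Symmetric toy cipher: applying it twice restores the original.
        return bytes(b ^ key for b in data)

    original = b"transfer 500"
    cipher = xor_cipher(original, 0x5A)           # sender encrypts
    assert cipher != original                     # cipher text is unreadable
    assert xor_cipher(cipher, 0x5A) == original   # receiver decrypts correctly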

6. Data Volume Testing:


It is also known as storage testing (or) memory
testing. During this test, test engineers find the peak limit of data handled by the
application build.

E.g. an MS-Access database supports 2 GB as a maximum.

7. Load Testing:
Load means the number of concurrent users accessing
the application build. Executing the application build under the customer-expected
configuration and customer-expected load to estimate performance is called load
testing (a sketch follows).
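A minimal load sketch with Python threads, assuming a hypothetical handle_request
service; real load testing would use dedicated tools:

    import threading
    import time

    def handle_request(user_id):
        time.sleep(0.01)   # stand-in for real request processing

    def run_load(concurrent_users):
        start = time.perf_counter()
        threads = [threading.Thread(target=handle_request, args=(u,))
                   for u in range(concurrent_users)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return time.perf_counter() - start

    # Customer-expected load, e.g. 100 concurrent users.
    print("100 users served in %.2fs" % run_load(100))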

8. Stress Testing:
Executing the application build under the customer-expected
configuration and various load levels to estimate stability is called stress testing.

9. Installation Testing:
During this test, the testing team practices the software
installation. The team takes the software build and the remaining supporting
software needed to run the application build at the customer site.

10. Parallel Testing:
It is also known as comparative (or) competitive testing.
During this test, test engineers compare the software build with an old version of
the build (or) with other companies' products. This testing is applicable to software
products only.

6. User Acceptance Testing:
After completion of system testing and bug
resolving, project management concentrates on user acceptance testing to collect
feedback from real customers (or) model customers. There are two ways to conduct
user acceptance testing:

Alpha Testing: by real customers, in our development site, for software applications
Beta Testing: by model customers, at the model customers' site, for software products

After completion of user acceptance testing and the resulting
modifications, project management concentrates on software release and maintenance.

7. Maintenance (or) Support:
Project management defines the release team
with a few developers, a few test engineers, and a few hardware engineers. This release
team conducts port testing (or) deployment (or) release testing. During this test the
release team observes the factors below.

Complete Installation

Overall Functionality


Input Devices Handling

Output Devices Handling

Secondary Storage Devices Handling

Operating System Error Handling

Co-Existence with other Software Applications

After completion of the above observations, the release team
gives training to customer site people and then comes back to the organization.

During utilization of the software, customer site people
send change requests to the organization. The responsible team in the organization
handles these change requests to provide service to the customer site. This
responsible team is also known as the Change Control Board (C.C.B).

Change Request -> C.C.B

Enhancement:   impact analysis -> perform software changes -> test software changes
Missed Defect: impact analysis -> perform software changes -> test software changes
               -> improve test efficiency

No.  Testing Phase              Testing Techniques                  Responsibility
1    Testing in Analysis        Reviews                             Business Analysts
2    Testing in Design          Reviews & Prototypes                Designers (or) Architects
3    Unit Testing               White Box Testing Techniques        Programmers
4    Integration Testing        Top-down, Bottom-up, Hybrid,        Programmers
                                Big-bang approaches
5    System Testing             Black Box Testing Techniques        Test Engineers
6    User Acceptance Testing    Alpha, Beta Testing                 Real Customers (or)
                                                                    Model Customers
7    Port Testing               Complete installation, overall      Release Team
                                functionality, input & output
                                devices handling, secondary
                                storage devices handling, OS
                                error handling, co-existence
                                with other software applications
8    Test Software Changes      Regression Testing                  Change Control Board
     in Maintenance                                                 (C.C.B)


Planned Testing Vs AD-HOC Testing:


Generally every testing team plans
to conduct complete system testing w.r.t the requirements of the project. Sometimes the
testing team is not able to conduct complete testing due to risks or challenges,
e.g. time problems, lack of skills, lack of knowledge, lack of resources, lack of
documentation, etc.
Due to these risks and challenges, the testing team plans to follow some
informal testing methods.

1. Monkey Testing:
During this style of testing, testing people concentrate
on the main activities of the software due to lack of time for testing. This style of
testing is also known as chimpanzee testing (or) random testing.

2. Buddy Testing:
In this style of testing, a test engineer groups with a
developer to conduct testing during coding, due to lack of time for testing. A buddy
means a programmer and a tester working as a group.

3. Exploratory Testing:
Generally the testing team conducts system testing
depending on the functional and system requirements in the SRS. If the SRS does not
give complete information about the requirements, test engineers depend on past
experience, discussions about other similar projects, browsing, etc., to collect complete
information about the requirements. This style of testing is called exploratory
testing.

4. Pair Testing:
In this style of testing, junior test engineers group with
senior test engineers in the organization to share their knowledge on testing. This
style of testing is called pair testing.

5. Be-Bugging:
In this model, development people add known bugs into the
code before releasing it to the testing team. This type of defect seeding (or) feeding is
useful to estimate the efficiency of the testing people. It is also known as defect
seeding (or) defect feeding.

TESTING-TERMINOLOGY
1. Test Strategy:
It is a document and it defines the required testing approach to be
followed by testing people.

2. Test Plan:
It is a document and it provides work allocation in terms of schedule.

3. Test Case:
It defines a test condition to validate functionality in terms of
completeness and correctness.

4. Test Log:
It defines the result of a test case in terms of passed / failed, after
execution of the test case on our application build.

5. Error Defect & Bug:


A mistake in coding is called an error. An error found by a
test engineer during testing is called a defect (or) issue. A defect reviewed and
accepted by the development team for resolution is called a bug.

6. Re-Testing:
It is also known as data driven testing (or) iterative testing.
Generally, test engineers repeat a test on the same application build with multiple
input values. This type of repetition is called re-testing (see the sketch below).
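A small data-driven sketch with pytest's parametrize (the login function and its
values are hypothetical):

    import pytest

    def login(user_id, password):
        # Hypothetical function under test.
        return user_id == "testuser" and password == "secret"

    @pytest.mark.parametrize("user_id,password,expected", [
        ("testuser", "secret", True),    # valid / valid
        ("testuser", "wrong",  False),   # valid / invalid
        ("unknown",  "secret", False),   # invalid / valid
    ])
    def test_login(user_id, password, expected):
        # One test repeated with multiple input values.
        assert login(user_id, password) == expected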

7. Regression Testing:
The re-execution of selected test cases on a modified build, to
ensure that a bug fix works without any side effects, is called regression testing
(change testing).


SYSTEM TESTING PROCESS

Test Initiation -> Test Planning -> Test Design -> Test Execution ->
Test Reporting -> Test Closure

Software Development Process along with Software Testing Process

Requirements Gathering (B.R.S)

Analysis and project planning (SRS & Project Plan)

Development Team:                      System Testing Team:
Design & Reviews                       Test Initiation
Coding & Unit Testing                  Test Planning
Integration Testing                    Test Design
Initial Build              ->          Test Execution -> Test Reporting -> Test Closure

User Acceptance Testing

Sign Off (Release the Build to Customer)

1. System Test Initiation:


Generally, every organization's system testing
process starts with test initiation. In this phase the project manager (or) test manager
prepares the test strategy document. This document defines the required testing
approach through the components of the test strategy listed below.

Components in Test Strategy:


1. Scope and objective: The importance of testing and their milestones

2. Business Issues: Cost allocation between the development process and the testing
process.

3. Test Approach: The selected list of test factors (or) test issues to be applied by the
testing team on the corresponding software build. This selection depends on the
requirements of the software, the scope of those requirements, and the risks
involved in testing that project.

4. Roles & Responsibilities: The name of jobs in testing team and their
responsibilities.

5. Communication & Status Reporting: The required negotiation between every two
consecutive testing jobs in the testing team.

6. Test Automation & Testing Tools: The purpose of automation and available tools
in our organization

7. Defect Reporting & Tracking: The required negotiation between the testing team
and the development team to review and resolve defects during testing.

8. Testing Measurements & Metrics: The set of measurements and metrics the testing
team uses to estimate quality, capability, and status.

9. Risks and Assumptions: The expected list of problems and their solutions to
overcome.

10. Change and Configuration Management: Managing the development and testing
deliverables for future reference


11. Training Plan: The required no. of training sessions for testing team to understand
customer requirements (or) Business logic.

12. Test Deliverables: Names of test documents to be prepared by testing team during
testing.
E.g. Test plan, test cases, test log, defect reports, and summary reports.

Test Factors (or) Test Issues

1   Authorization              Validity of users
2   Access Control             Permission of users to use specific services / functionality
3   Audit Trail                The correctness of metadata
4   Continuity of Processing   Integration of programs
5   Data Integrity             Correctness of input data
6   Correctness                Correctness of output values & manipulations,
                               e.g. mail compose working correctly or not
7   Coupling                   Co-existence with other software to share common resources
8   Ease of Use                User-friendly screens
9   Ease of Operation          Installation, uninstallation, dumping, downloads,
                               uploads, etc.
10  Portability                Runs on different platforms
11  Performance                Speed of processing
12  Reliability                Recovery from abnormal situations
13  Service Levels             Order of functionalities or services to support
                               customer site people
14  Maintainability            Whether the software is serviceable to customer site
                               people over a long time or not
15  Methodology                Whether the testing team follows the pre-defined
                               approach properly or not


Test Factors V/s Testing Techniques:


A test factor indicates a testing issue or
topic. To test every topic in the project, the testing team follows a set of testing
techniques.

1   Authorization              Security Testing
2   Access Control             Security Testing
3   Audit Trail                Functional Testing
4   Continuity of Processing   Integration Testing
5   Data Integrity             Functionality Testing
6   Correctness                Functionality Testing
7   Coupling                   Inter-System Testing
8   Ease of Use                User Interface (or) Manual Support Testing
9   Ease of Operation          Installation Testing
10  Portability                Compatibility & Configuration Testing
11  Performance                Load, Stress & Data Volume Testing
12  Reliability                Recovery & Stress Testing
13  Service Levels             Regression (or) Software Change Testing (C.C.B)
14  Maintainability            Compliance Testing
15  Methodology                Compliance Testing

Compliance Testing:
Whether our project team is following our company
standards or not?

Case Study:

Test factors:                           15
- 4 (w.r.t project requirements)      = 11
+ 1 (w.r.t scope of requirements)     = 12
- 3 (w.r.t risks in testing)          =  9 (finalized factors / issues)


In the above example the project manager / test manager finalized 9 testing topics /
issues to be applied by the testing team on the project / software build.

2. Test Planning:
After completion of test strategy finalization, test lead
category people develop test plan documents. In this stage the test lead prepares a
system test plan and divides that plan into module test plans. Every test plan
defines "What to test?", "How to test?", "When to test?", and "Who will test?".
To develop these test plans, test lead category people follow the
approach below.

Inputs: project plan, development documents (SRS), test strategy
Steps: test team formation -> identify tactical risks -> prepare test plans ->
review test plans -> (output) test plans

System test plans are compulsory; module test plans are optional.

a. Testing Team Formation:


Generally the test-planning task starts with
testing team formation. In this stage, the test lead depends on the factors below:

 Project size, e.g. lines of code or functional points
 Available number of test engineers
 Test duration
 Test environment resources, e.g. testing tools

Case study:

C/S, website, ERP — 3 to 5 months of system testing
System s/w (embedded, mobile, ...) — 7 to 9 months of system testing
Mission-critical s/w (A.I., robots, satellites) — 12 to 15 months of system
testing

b. Identify Tactical Risks:


After completion of testing team formation, the test
lead concentrates on risk analysis and assumptions w.r.t the formed testing
team.
Risk-1: Lack of knowledge of test engineers on that project
Risk-2: Lack of time
Risk-3: Lack of documentation
Risk-4: Delay in
Risk-5: Lack of development process rigor
Risk-6: Lack of resources
Risk-7: Lack of communication

c. Prepare Test Plan:


After completion of testing team formation and risk
analysis, the test lead concentrates on test plan document preparation. In this
stage the test lead uses the IEEE-829 test plan document format (IEEE: Institute
of Electrical and Electronics Engineers).

Format:

1. Test Plan ID: The title of test plan documents for future reference

2. Introduction: About Project

3. Test Items: List of modules in our project

4. Features to be tested: List of modules or functions to be tested

5. Features not to be tested: List of modules which were already tested in
previous version testing

(Items 3 to 5 answer "What to test?")

6. Approach: List of testing techniques to be applied on the above selected
modules

7. Test Deliverables: Required testing documents to be prepared by test
engineers

8. Test Environment: Required hardware & software to conduct testing on the
above modules

9. Entry Criteria: Test engineers are able to start test execution after the
criteria below are met:

 Test cases developed and reviewed


 Test environment established
 Software build received from developers

10. Suspension Criteria: Sometimes test engineers temporarily stop test execution
due to:

 Test environment is not working
 Pending defects (quality gap, job gap) piling up at the development side

11. Exit Criteria: It defines test execution process exit point


 All requirements tested


 All major bugs resolved
 Final build is stable w.r.p.t customer requirements

12. Staff & Training Needs: The selected test engineer’s names and required
no. of training sessions for them

13. Responsibilities: The mapping between the names of test engineers and the
requirements in the project.

14. Schedule: Dates and time “When to Test?”

15. Risks & Assumptions: List of analyzed risks and their assumptions to
overcome

16. Approvals: Signatures of P.M or T.M & Test Lead

d. Review Test Plan:


After completion of test plan document preparation,
the test lead conducts a review meeting to estimate the completeness and
correctness of that document. The testing team members selected for the project
are also involved in this review meeting.

3. Test Design:
After completion of test planning, the corresponding selected test
engineers concentrate on test design, test execution & test reporting.

Generally the selected test engineers start the testing job with test design
in every project. In test design, every test engineer studies all requirements of
the project and prepares test cases for the selected requirements only, w.r.t the
test plan.

In test design, test engineers use three types of test case design
methods to prepare test cases for their responsible requirements.

1. Functional & System Specification Based Test Case Design


2. Use Cases Based Test Case Design
3. Application Build Based Test Case Design

Test Case:
Every test case defines a unique condition. Every test case is self-standing
and self-cleaning to improve understandability; in test design, test engineers
start every test case with the English word "verify" or "check". Every test case
is traceable to a requirement in the project.

1. Functional & System Specification Based Test Case Design:


Diagram: test cases are derived from the functional & system requirements in the
S.R.S (itself derived from the B.R.S) and are executed (run) against the .exe build
produced via the H.L.D & L.L.D and coding.

From the above diagram, test engineers prepare the maximum number of test cases
depending on the functional & system requirements in the S.R.S. In this type of test
case writing, test engineers follow the approach below.

Approach For Writing Test Cases:


1. Step 1: Collect functional and system specifications for responsible requirements
(modules)

2. Step 2: Select one specification from that list


2.1. Identify entry point (start)
2.2. Identify inputs required
2.3. Study normal flow
2.4. Identify outputs & outcomes
2.5. Identify exit point (End)
2.6. Identify alternative flow & exceptions (rules)

3. Step 3: Prepare test case titles or test scenarios

4. Step 4. Review the test case titles for completeness and correctness

5. Step 5: Prepare complete documents for every test case title

6. Step 6: Go to step 2 until all specifications are studied and test cases written

Functional Specification: 1
A login process allows a user id and password to
authorize users. The user id takes alphanumerics in lower case, from 4 to 16 characters
long. The password takes alphabets in lower case, from 4 to 8 characters long.


Prepare test case titles or scenarios:

Test Case 1: Verify user id value

BVA (boundary value analysis) (size):

Min     4 chars     pass
Max     16 chars    pass
Min-1   3 chars     fail
Min+1   5 chars     pass
Max-1   15 chars    pass
Max+1   17 chars    fail

ECP (equivalence class partitioning) (type):

Valid       Invalid
a-z, 0-9    A-Z, special characters, blank field

Test Case 2: Verify password value

BVA (boundary value analysis) (size):

Min     4 chars    pass
Max     8 chars    pass
Min-1   3 chars    fail
Min+1   5 chars    pass
Max-1   7 chars    pass
Max+1   9 chars    fail
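A small helper (not from the source) that derives the six BVA checks for any size
range, shown here for the password field (4 to 8):

    def bva_cases(min_len, max_len):
        # Each entry: label -> (length to try, expected to pass?)
        return {
            "min":   (min_len,     True),
            "max":   (max_len,     True),
            "min-1": (min_len - 1, False),
            "min+1": (min_len + 1, True),
            "max-1": (max_len - 1, True),
            "max+1": (max_len + 1, False),
        }

    for label, (length, ok) in bva_cases(4, 8).items():
        print("password %s: %d chars -> %s"
              % (label, length, "pass" if ok else "fail"))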

Test Case 3: Verify login operation

Decision Table:

User id    Password    Criteria
Valid      Valid       Pass
Valid      Invalid     Fail
Invalid    Valid       Fail
Value      Blank       Fail
Blank      Value       Fail
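A sketch of the decision table as executable checks (the login function and the
credentials are hypothetical):

    def login(user_id, password):
        valid_users = {"alice": "secret99"}
        return valid_users.get(user_id) == password

    decision_table = [
        ("alice",   "secret99", True),    # valid / valid   -> pass
        ("alice",   "wrongpw",  False),   # valid / invalid -> fail
        ("mallory", "secret99", False),   # invalid / valid -> fail
        ("alice",   "",         False),   # value / blank   -> fail
        ("",        "secret99", False),   # blank / value   -> fail
    ]

    for user_id, password, expected in decision_table:
        assert login(user_id, password) == expected
    print("All decision table rows behave as expected")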

Functional Specification – 2:
In an insurance application, users can apply for
different types of policies. When a user selects type A insurance, the system asks the
age of that user. The age value should be greater than 16 years and less than 80 years.

Prepare test case titles or scenarios:

Test Case 1: Verify the selection of type A insurance


Test Case 2: Verify focus on age when users select type A insurance
Test Case 3: Verify age value

BVA (boundary value analysis) (range):

Min     17    pass
Max     79    pass
Min-1   16    fail
Min+1   18    pass
Max-1   78    pass
Max+1   80    fail

ECP (equivalence class partitioning) (type):

Valid    Invalid
0-9      A-Z, a-z, special characters, blank field

Functional Specification – 3:
A door opens when a person comes in front of the
door, and the door closes when that person goes inside.

Prepare test case titles or scenarios:

Test Case 1: Verify door open operation

Decision Table:

Person     Door      Criteria
Present    Open      Pass
Present    Closed    Fail
Absent     Open      Fail
Absent     Closed    Pass

Test Case 2: Verify door close operation

Decision Table:

Person    Door      Criteria
Inside    Open      Fail
Inside    Closed    Pass

(For the person outside, the possibilities are already covered in the above test case.)

Test Case 3: Verify door operation when that person is standing in the middle of the door

Functional Specification – 4:
A computer shut down operation

Prepare test case titles or scenarios:

Test Case 1: Verify shutdown option using start menu

Test Case 2: Verify shutdown option using Alt + F4

Test Case 3: Verify shutdown operation

Test Case 4: Verify shutdown operation when a process is in running

Test Case 5: Verify shutdown operation using power off button

Functional Specification – 5:
In a shopping application, users purchase
different types of items. The purchase order allows the user to select an item number
and to enter a quantity up to 10; the purchase order returns the total amount along
with the price of one item.

Prepare test case titles or scenarios:

Test Case 1: Verify the selection of item number

Test Case 2: Verify quantity value


BVA (boundary value analysis) (range):

Min     1     pass
Max     10    pass
Min-1   0     fail
Min+1   2     pass
Max-1   9     pass
Max+1   11    fail

ECP (equivalence class partitioning) (type):

Valid    Invalid
0-9      A-Z, a-z, special characters, blank field

Test Case 3: Verify calculation such as total = price * Qty

Functional Specification 6:
Washing machine operation

Prepare test case titles or scenarios:

Test Case 1: Verify power supply


Test Case 2: Verify door open
Test Case 3: Verify water filling with detergent
Test Case 4: Verify clothes filling
Test Case 5: Verify door closing
Test Case 6: Verify door closing due to clothes overflow
Test Case 7: Verify washing settings
Test Case 8: Verify washing operation
Test Case 9: Verify washing operation with improper power supply (low voltage)
Test Case 10: Verify washing operation with clothes overload inside
Test Case 11: Verify washing operation with door open in middle of the process
Test Case 12: Verify washing operation with lack of water
Test Case 13: Verify washing machine with water leakage
Test Case 14: Verify washing operation with improper settings
Test Case 15: Verify washing operation with machinery problems

Functional Specification 7:
In an e-banking application, users connect to the
bank server through an Internet connection. In this application, users fill the fields
below to connect to the bank server.
Password: 6-digit number
Area code: 3-digit number, optional
Prefix: 3-digit number that does not start with 0 or 1
Suffix: 6-character alphanumeric
Commands: cheque deposit, money transfer, mini statement, bills pay.

Prepare test case titles or scenarios:

Test Case 1: Verify password value

BVA (boundary value analysis) (size):

Min = Max    6 chars    pass
Max - 1      5 chars    fail
Max + 1      7 chars    fail

ECP (equivalence class partitioning) (type):

Valid    Invalid
0-9      A-Z, a-z, special characters, blank field

Test Case 2: Verify area code value

BVA (boundary value analysis) (size):

Min = Max    3 chars    pass
Max - 1      2 chars    fail
Max + 1      4 chars    fail

ECP (equivalence class partitioning) (type):

Valid    Invalid
0-9      A-Z, a-z, special characters, blank field

Test Case 3: Verify prefix value

BVA (boundary value analysis) (range):

Min     200     pass
Max     999     pass
Min-1   199     fail
Min+1   201     pass
Max-1   998     pass
Max+1   1000    fail

ECP (equivalence class partitioning) (type):

Valid    Invalid
0-9      A-Z, a-z, special characters, blank field

Test Case 4: Verify suffix value

BVA (boundary value analysis) (size):

Min = Max    6 chars    pass
Max - 1      5 chars    fail
Max + 1      7 chars    fail

ECP (equivalence class partitioning) (type):

Valid            Invalid
A-Z, a-z, 0-9    special characters, blank field

Test Case 5: Verify connection to Bank Server

Field Values                          Criteria
All valid values                      Pass
Any one invalid value                 Fail
Any one blank (except area code)      Fail
All valid & area code blank           Pass

Functional Specification 8:
A computer restart operation

Functional Specification 9:
Money with drawl from ATM machine


Test Case 1: Verify card insertion
Test Case 2: Verify card insertion at a wrong angle or improperly
Test Case 3: Verify card insertion with an improper account
Test Case 4: Verify PIN number entry
Test Case 5: Verify operation when the wrong PIN number is entered 3 times
Test Case 6: Verify language selection
Test Case 7: Verify account type selection
Test Case 8: Verify operation when an invalid account type is selected w.r.t the
inserted card
Test Case 9: Verify withdrawal option selection
Test Case 10: Verify amount entry
Test Case 11: Verify withdrawal operation: correct amount, right receipt, and able to
take back the card
Test Case 12: Verify withdrawal operation with wrong denominations in the amount
Test Case 13: Verify withdrawal operation when our amount > possible balance
Test Case 14: Verify withdrawal operation with lack of amount in the ATM
Test Case 15: Verify withdrawal operation when our amount > day limit
Test Case 16: Verify withdrawal operation when the current transaction number > day
limit on the number of transactions
Test Case 17: Verify withdrawal operation when we have a network problem
Test Case 18: Verify cancel after insertion of the card
Test Case 19: Verify cancel after entry of the PIN number
Test Case 20: Verify cancel after selection of language
Test Case 21: Verify cancel after selection of account type
Test Case 22: Verify cancel after entry of the amount

Test Case Documentation Format:


After completion of test case title or
scenario selection, test engineers document each test case with complete
information. In this test case documentation, test engineers use the IEEE-829
format.

Format:

1. Test Case ID: Unique No. or name

2. Test Case Name: The title or scenario of corresponding test case.

3. Feature to be Tested: Corresponding module or function or service

4. Test Suite ID: The name of the test batch in which this test case is a member
(a dependent group of test cases)

5. Priority: The importance of the test case in terms of functionality

P0 – Basic functionality (functionality of the project)
P1 – General functionality (compatibility, reliability, performance, ...)
P2 – Cosmetic functionality (usability of the project)

6. Test Environment: The required hardware’s & software’s to execute the test case
on our application build.

7. Test Effort: Expected time to execute the test case on the build (ISO standards).
E.g. 20 minutes is an average time manually; with a tool, about 5 minutes.

8. Test Duration: Approximate date & time.

9. Pre condition (or) Test set up: Necessary tasks to do before start the test case
execution

10. Test Procedure (or) Data Matrix:

Format for Test Procedure:

Step No. | Action | Input Required | Expected | Actual | Result | Defect
(columns up to "Expected" are filled during test design; the rest during test execution)

Format for Data Matrix:

Input Object | ECP (type): Valid / Invalid | BVA (size/range): Min / Max
11. Test Case Pass (or) Fail Criteria: When this case is passed & when this case is
failed.

Note:

1. The above 11-field test case format is not mandatory, because some fields' values
are common to most test cases and some fields' values are easy to remember or
derive.

2. Generally, test cases cover objects and operations (more than one object).
If a test case covers an object's input values, test engineers prepare a data
matrix.


3. If a test case covers an operation or function, test engineers prepare a test
procedure from base state to end state.

Functional Specification: 10
A login process allows a user id & password for authorized users. The user id takes
alphanumerics in lower case, from 4 to 16 characters long. The password takes
alphabets in lower case, from 4 to 8 characters long.

Prepare Test Case Document -1

1. Test Case ID: Tc_login_ourname_1 (All capital letters)


2. Test Case Name: Verify user id
3. Test Suite ID: Ts_login
4. Priority: P0
5. Precondition: User id object is taking values from key board
6. Data Matrix:

Input Object | ECP Valid | ECP Invalid                          | BVA Min | BVA Max
User id      | a-z, 0-9  | A-Z, special characters, blank field | 4       | 16

Prepare Test Case Document -2

1. Test Case ID: Tc_login_ourname_2 (All capital letters)


2. Test Case Name: Verify password
3. Test Suite ID: Ts_login
4. Priority: P0
5. Precondition: Password object is taking values from key board
6. Data Matrix:

Input Object | ECP Valid | ECP Invalid                               | BVA Min | BVA Max
Password     | a-z       | 0-9, A-Z, special characters, blank field | 4       | 8

Prepare Test Case Document -3


1. Test Case ID: Tc_login_ourname_3 (All capital letters)

2. Test Case Name: Verify login operation

3. Test Suite ID: Ts_login

4. Priority: P0

5. Precondition: Registered user id & password available in hand (tester)

6. Test Procedure:

Step No.  Action                 Input Required            Expected
1         Focus on login window  None                      User id object focused
2         Fill fields            "User id" & "Password"    "OK" button enabled
3         Click "OK"             Valid / Valid             Next message
                                 Valid / Invalid           Error message
                                 Invalid / Valid           Error message
                                 Valid / Blank             Error message
                                 Blank / Valid             Error message

2. Use Cases Based Test Case Design:


The other alternative method for test case
selection is "use case based" test case design. This method is preferable for
outsourced testing companies. Generally, most testing people prepare test cases
depending on the "functional & system specifications" in the corresponding project's
SRS. Sometimes the testing people prepare test cases depending on use cases as well.
"Use cases" are more elaborate and more understandable than functional and system
specifications.

Diagrams: in functional & system specification based test case design, test cases are
derived from the SRS (BRS -> SRS -> HLD & LLDs -> coding); in use case based test
case design, test cases are derived from use cases (BRS -> SRS -> use cases ->
HLD & LLDs -> coding).

From the above diagrams, the test team receives "use cases" from project
management to prepare test cases. Every use case describes a functionality with all
required information, and every use case follows a standard format, unlike a
theoretical functional specification.

Formats:
1. Use Case Name: The name of use case for future reference

2. Use Case Description: Summary of functionality

3. Actors: Names of actors which are participating in corresponding functionality

4. Related Use Cases: Names of related use cases, which have dependency with this
use case

5. Preconditions: List of necessary tasks to do before starting this functionality
testing in the project

6. Activity Flow Diagram: The graphical notation of corresponding functionality

7. Primary Scenarios: A step by step actions to perform corresponding functionality

8. Alternative Scenario’s: Alternative list of actions to perform same functionality

9. Post Conditions: It specifies the exit point of corresponding functionality.

10. U. I. Make up: Model screen or prototype

11. Special Requirements: List of rules to be followed, if possible.


Conclusion:
From the above use case format, project management provides
documentation of every functionality with complete details. Depending on those use
cases, test engineers prepare test cases using the IEEE-829 format.

3. Application Build Based Test Case Design:
Generally, test engineers
prepare test cases depending on "functional & system specifications" or "use cases".
After completing the selection of the maximum number of test cases, test engineers
prepare some test cases depending on the application build received from the
development team. These new test cases concentrate only on the usability of the
screens in the application build. These test cases cover:

1. Ease of use
2. Look & Feel
3. Speed in Interface
4. User manuals correctness (Help Documents)

Example Test Cases:


Test Case-1: Verify spelling in every screen.
Test Case-2: Verify contrast of each object in every screen
Test Case-3: Verify alignment of objects in every screen.
Test Case-4: Verify color commonness in all screens
Test Case-5: Verify font commonness in all screens
Test Case-6: Verify size commonness in all screens
Test Case-7: Verify functionality-grouped objects in screens
Test Case-8: Verify borders of functionality-grouped objects.
Test Case-9: Verify tool tips (e.g. Messages about icons in screens)
Test Case-10: Verify the place of multiple data objects in screens. (E.g. list boxes,
combo boxes, and table grids, active x controls, menus…)
Test Case-11: Verify scroll bar.
Test Case-12: Verify labels of objects in every screen as init-cap
Test Case-13: Verify keyboard accessing in your application build
Test Case-14: Verify abbreviations in all screens (E.g. short cuts)
Test Case-15: Verify information repetition in screens
Test Case-16: Verify help documents (Help menu contents)

Note:
Generally, test engineers prepare the maximum number of test cases depending on the
functional & system specifications in the SRS; the remaining test cases are prepared
using the application build, because the functional & system specifications do not
provide complete information about every small issue in the project.
Sometimes the testing people use "use cases" instead of the functional &
system specifications in the SRS.


Review Test Cases:


After completion of test case selection and documentation, the test
lead conducts a review meeting along with the test engineers. In this review the test
lead concentrates on the completeness and correctness of the test cases prepared by
the test engineers. In this coverage analysis the test lead uses two types of factors:

Requirement-based test case coverage
Testing-technique-based test case coverage

After completion of this review meeting, test engineers
concentrate on test execution.

4. Test Execution:
In test execution, test engineers concentrate on test case
execution and defect reporting and tracking. In this stage the testing team conducts
a small meeting with the development team for version control and establishment of
the test environment.

1. Version Control:
During test execution, development people assign a
unique version number to each software build after performing the required changes.
This version numbering system must be understandable to the testing people.

For build version control, the development people use version control
software, e.g. VSS (Visual SourceSafe).

2. Levels of Test Execution:

Development                      Testing
Initial build           ->       Level-0 (sanity / smoke)
Stable build            ->       Level-1 (comprehensive / real)
Bug fixing              <-       Defect reporting
Modified build          ->       Level-2 (regression)
(bug resolved)                   Level-3 (final regression / post-mortem)

3. Levels of Test Execution V/s Test Cases:

Level-0 (initial build):  selected test cases for basic functionality (sanity/smoke testing)
Level-1 (stable build):   all test cases, in order to detect defects (comprehensive testing)
Level-2 (modified build): selected test cases w.r.t modifications (regression testing)
Level-3 (master build):   selected test cases w.r.t bug density (final regression)

After that, the golden build (ready for UAT) is released to the customer.

1. Level-0 (Sanity / Smoke Testing): Generally, testing people start test
execution with level-0 testing. It is also known as sanity / smoke testing,
tester acceptance testing (TAT), build verification testing, or testability
testing.

In this testing level, test engineers concentrate on the 8 factors below by operating
the corresponding initial build:

 Understandable
 Operable
 Observable
 Controllable
 Consistent
 Simple
 Maintainable
 Automatable

Operation + Observation = Testing

Programmer: expects logic & develops functionality


Tester: expects customer requirements

2. Level -1 (Comprehensive Testing):


After receiving a stable build from the development
team, test engineers execute all test cases sequentially, either manually or with
automation.
In manual test execution, a test engineer compares the test case's specified
expected values with the build's actual values. In this test execution, test engineers
prepare the "test log" document. This document consists of 3 types of entries.

Passed: all expected values of the test case are equal to the actual values of the build.
Failed: any one expected value varies from any one actual value of the build.
Blocked: a dependent test case's execution is postponed to the next cycle (after the
modified build) due to wrong parent functionality.

Level-1 (Comprehensive test cycle)

Level -2 (Regression Testing):


During the above level-1 comprehensive testing, testing
people report mismatches between the test cases' expected values and the build's
actual values to the development team as "defect reports". After reviewing and
resolving a defect, the development people release a modified build to the testing
team. In this stage, the development person releases a "release note" as well. The
responsible test engineers study that release note to understand the modifications in
that modified build, and then concentrate on regression testing to ensure those
modifications.

Level-0 -> Level-1 -> check-in -> Level-2 (regression) -> check-out

From the above diagram, test engineers conduct regression testing
on the modified build w.r.t the modifications mentioned in the "release note".

Study the release note & consider the severity of the resolved bug:

High:    all P0, all P1, max P2 test cases
Medium:  all P0, max P1, some P2 test cases
Low:     some P0, some P1, some P2 test cases

(re-executed on the modified build)

Case 1: If the severity of the bug resolved by the development team is high, test
engineers re-execute all P0, all P1, and carefully selected maximum P2 test cases on
the modified build, w.r.t the modifications mentioned in the release note.

Case 2: If the severity of the resolved bug is medium, test engineers re-execute all
P0, carefully selected maximum P1, and some P2 test cases.

Case 3: If the severity of the resolved bug is low, test engineers re-execute
carefully selected some P0, P1, and P2 test cases.

Case 4: If the testing team received the modified build due to sudden changes in
customer requirements, test engineers re-execute all P0, all P1, and max P2 test
cases (a selection sketch follows).
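A sketch of the case-wise selection above (the suite contents and the "some/max"
cut-offs are hypothetical):

    def select_regression_cases(severity, suite):
        # suite maps priority ("P0"/"P1"/"P2") to lists of test case names.
        if severity == "high":
            return suite["P0"] + suite["P1"] + suite["P2"]          # all P0, all P1, max P2
        if severity == "medium":
            return suite["P0"] + suite["P1"] + suite["P2"][:1]      # all P0, max P1, some P2
        return suite["P0"][:1] + suite["P1"][:1] + suite["P2"][:1]  # some of each (low)

    suite = {"P0": ["tc1", "tc2"], "P1": ["tc3", "tc4"], "P2": ["tc5", "tc6"]}
    print(select_regression_cases("high", suite))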


5. Test Reporting:
During level-1 & level-2 test execution, test engineers
report mismatches between the test case expected values and the build actual values
as defect reports to the development team.
In this test reporting, the development people receive defect reports from the
testing team in a standard format. This format is followed by every test engineer
during test execution to report defects.

IEEE-829 Defect Report Format:

1. Defect ID: Unique no./name for future reference.

2. Description: Summary about defect.

3. Build Version: The version no. of current build, in this build test engineers
detected this defect.

4. Feature: The name of module / function, in that area test engineers found this
defect.

5. Test Case Name: The name of failed test case, in that case execution test
engineer found this defect.

6. Status: "New" when reporting the first time; "Re-open" when re-reporting.

7. Reproducible: "Yes" if the defect appears every time the test case is executed;
"No" if the defect appears rarely.

8. If Yes: Attach test procedure.

9. If No: Attach snapshot and strong reasons.

10. Severity: The seriousness of defect in terms of functionality.

High  Not able to continue testing without resolving this defect
(show stopper)

Medium  Able to continue the remaining testing, but mandatory to resolve

Low  Able to continue the remaining testing; may or may not be resolved
11. Priority: Importance of defect to resolve in terms of customer (High, Medium,
Low)

12. Detected by: Name of test engineer.

13. Assigned To: The name of responsible person at development side to receive
this defect report.

14. Suggested Fix (Optional): Reasons to accept and resolve this defect.

Resolution Type:
After receiving a defect report from the testing team, the
responsible development people conduct a review meeting and send a resolution
type to the responsible testing team.
There are 12 types of resolutions:

1. Enhancement: The reported defect is rejected, because this defect related to


future requirements of the customer.

2. Duplicate: The reported defect is rejected, because this defect raised due to
limitations of hardware devices.

3. Hardware Limitations: The reported defect is rejected, because this defect


raised due to limitations of hardware devices.

4. Software Limitations: The reported defect is rejected, because this defect is


raised due to limitations Software Technologies (Ms-Access).

5. Not Applicable: The reported defect is rejected, because this defect has no proper meaning.

6. Functions as Designed: The reported defect is rejected, because the coding is correct w.r.t. the design documents.

7. Need More Information: The reported defect is neither accepted nor rejected; the developers require more information about the defect to understand it.

8. Not Reproducible: The reported defect is neither accepted nor rejected; the developers require the correct procedure to reproduce that defect.

9. No Plan To Fix It: The reported defect is neither accepted nor rejected; the development people require some extra time.

10. Open: The reported defect is accepted & the development people are ready to resolve it through changes in the coding.


11. Deferred: The reported defect is accepted, but postponed to a future release due to low severity & low priority.

12. User Direction: The reported defect is accepted, but the developers provide some valid information about that defect to customer-site people through the application build screens.
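
Grouping the twelve resolution types by what they imply for the reported defect can be expressed as a small helper; this grouping simply restates the descriptions above and is not part of any standard:

REJECTED = {"Enhancement", "Duplicate", "Hardware Limitations",
            "Software Limitations", "Not Applicable", "Functions as Designed"}
PENDING  = {"Need More Information", "Not Reproducible", "No Plan To Fix It"}
ACCEPTED = {"Open", "Deferred", "User Direction"}

def defect_outcome(resolution_type):
    # Returns what the resolution type means for the reported defect.
    if resolution_type in REJECTED:
        return "rejected"
    if resolution_type in PENDING:
        return "neither accepted nor rejected"
    if resolution_type in ACCEPTED:
        return "accepted"
    raise ValueError("unknown resolution type: " + resolution_type)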

Defect Tracking Procedure

Large-Scale Organizations:
(arrows 1 to 5: defect reporting; arrows 6 to 10: resolution type)

Test Manager        Project Manager
Test Lead           Team Lead
Test Engineer       Programmer

The defect report travels from the test engineer up through the testing hierarchy to the development side, and the resolution type travels back along the reverse path.

Small / Medium Scale Organizations:
(arrows 1 to 4: defect reporting; arrows 5 to 8: resolution type)

              P.M
Test Lead           Team Lead
Test Engineer       Programmer


Bug Life Cycle / Defect Life Cycle

States: New, Open, Deferred, Reject, Re-open, Closed

Possible paths:
New  Open  Closed
New  Open  Re-open  Closed
New  Reject  Closed
New  Reject  Re-open  Closed
New  Deferred
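
The listed paths can be checked mechanically with a small state machine. A sketch (state names follow the diagram above):

# Allowed transitions in the defect life cycle above.
TRANSITIONS = {
    "New":      {"Open", "Reject", "Deferred"},
    "Open":     {"Re-open", "Closed"},
    "Reject":   {"Re-open", "Closed"},
    "Re-open":  {"Closed"},
    "Deferred": set(),   # postponed to a future release
    "Closed":   set(),   # terminal
}

def is_valid_path(path):
    # path: list of states, e.g. ["New", "Open", "Re-open", "Closed"]
    if not path or path[0] != "New":
        return False
    return all(nxt in TRANSITIONS[cur] for cur, nxt in zip(path, path[1:]))

assert is_valid_path(["New", "Open", "Closed"])
assert is_valid_path(["New", "Reject", "Re-open", "Closed"])
assert not is_valid_path(["New", "Closed"])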

Types of Defects
Generally, black box testing techniques find the below types of defects during system testing:

 User Interface Defects
 Boundary Related Defects
 Error Handling Defects
 Calculation Defects
 Race Condition Defects
 Load Condition Defects
 Hardware Related Defects
 ID-Control Bugs
 Version Control Bugs
 Source Bugs

Note 1:
Generally, test engineers decide the severity & priority of a defect during reporting, but the priority of a defect is modifiable by the development team.


Note 2:
Generally, development people postpone / defer low-severity & low-priority defects.

L-S: Low Severity
M-S: Medium Severity
H-S: High Severity
L-P: Low Priority
H-P: High Priority

1. User Interface Bugs / Defects: (L-S)
   E.g. 1. Spelling mistake (L-S & H-P)  seriousness is low but importance is high.
        2. Improper right alignment (L-S & L-P)
2. Boundary Related Bugs: (M-S)
   E.g. 1. An object is not taking a valid type of values as input (M-S & H-P)
        2. An object is taking an invalid type of values also (M-S & L-P)
3. Error Handling Bugs: (M-S)
   E.g. 1. Does not return an error message to prevent wrong operations on the build (M-S & H-P)
        2. Returns an error message, but it is complex to understand (M-S & L-P)
4. Calculation Bugs: (H-S)
   E.g. 1. Dependent output is wrong (application show stopper) (H-S & H-P)
        2. Final output is wrong (module show stopper) (H-S & L-P)
5. Race Condition Bugs: (H-S)
   E.g. 1. Deadlock or hang (application show stopper) (H-S & H-P)
        2. Does not run on other customer-expected platforms (H-S & L-P)
6. Load Condition Bugs: (H-S)
   E.g. 1. Does not allow multiple users (application show stopper) (H-S & H-P)
        2. Does not allow the customer-expected load (H-S & L-P)
7. Hardware Related Bugs: (H-S)
   E.g. 1. Does not activate a required hardware device (application show stopper) (H-S & H-P)
        2. Does not support all customer-expected hardware technologies (H-S & L-P)
8. ID-Control Bugs: (M-S)
   E.g. wrong logo, logo missing, copyright window missing, wrong version no., development and testing people's names missing.
9. Version Control Bugs: (M-S)
   E.g. invalid differences between the old build version and the current build version.
10. Source Bugs: (M-S)
   E.g. mistakes in help documents.

6. Test Closure:

After completion of all reasonable cycles of test execution, the test lead concentrates on test closure to estimate the completeness and correctness of test execution and bug resolution. In this review meeting the test lead considers some factors to review the testing team's responsibilities.

1. Coverage Analysis:

   1. Requirements or modules coverage (all modules covered)
   2. Testing techniques coverage (all required techniques covered on each module)

2. Defect Density:

Module    % of Defects
A         20 %
B         20 %
C         40 % (needs regression)
D         20 %
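
A quick sketch of how such a distribution could be computed from raw defect counts and used to flag modules for extra regression (the 30 % threshold is an assumed example value):

def defect_density(defects_per_module):
    # defects_per_module: raw counts, e.g. {"A": 20, "B": 20, "C": 40, "D": 20}
    total = sum(defects_per_module.values())
    return {m: 100.0 * n / total for m, n in defects_per_module.items()}

density = defect_density({"A": 20, "B": 20, "C": 40, "D": 20})
needs_regression = [m for m, pct in density.items() if pct > 30.0]
print(density)           # {'A': 20.0, 'B': 20.0, 'C': 40.0, 'D': 20.0}
print(needs_regression)  # ['C']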

3. Analysis of Deferred Defects:

Whether the deferred defects are postponable or not.
After completion of the above closure review, the testing team concentrates on postmortem testing / final regression testing / pre-acceptance testing / level-3 testing, if required.

Final Regression / Post Mortem / Level-3 Testing:

Select high defect density modules  Effort estimation  Plan regression  Regression testing  Test reporting

7. User Acceptance Testing:

After completion of testing and its review, project management concentrates on user acceptance testing to collect feedback from real customers / model customers. There are two ways to conduct user acceptance testing: Alpha Testing and Beta Testing.

8. Sign Off:
After completion of user acceptance testing and the resulting modifications, project management defines a release team & C.C.B (Change Control Board). In both teams a few developers & test engineers are involved along with the project manager. In this sign-off stage the testing team submits all prepared testing documents to project management.

Final Test Summary Report:

 Test Strategy
 Test Plan
 Test Case Titles (Scenarios)
 Test Case Documents
 Test Log
 Defect Reports

The combination of all the above documents is known as the "Final Test Summary Report".

Requirements Traceability Matrix (RTM):

It is also a document. This document is created and updated by the test lead. It defines a complete picture of the testing process from test planning to test closure.

Requirement ID    Test Case ID    Passed / Failed    Defect ID    Closed / Deferred
Requirement 1     TC1             Passed             -            -
(Modules /        TC2             Passed             -            -
Functionality)    TC3             Passed             -            -
                  TC4             Failed             D1           Closed
...               ...             ...                ...          ...

The above matrix is also known as the requirement validation matrix (R.T.M / R.V.M).
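
One possible in-code representation of such a matrix, useful for answering questions like "which failed test cases still have unresolved defects?" (field names are illustrative only):

# Hypothetical RTM rows mirroring the table above.
rtm = [
    {"req": "Requirement 1", "tc": "TC1", "result": "Passed", "defect": None, "state": None},
    {"req": "Requirement 1", "tc": "TC2", "result": "Passed", "defect": None, "state": None},
    {"req": "Requirement 1", "tc": "TC3", "result": "Passed", "defect": None, "state": None},
    {"req": "Requirement 1", "tc": "TC4", "result": "Failed", "defect": "D1", "state": "Closed"},
]

def unresolved(rows):
    # Failed test cases whose linked defects are not yet closed.
    return [r for r in rows if r["result"] == "Failed" and r["state"] != "Closed"]

print(unresolved(rtm))  # [] - the one failure (D1) is already closed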

Testing Measurements and Metrics:

A measurement is a basic unit and a metric is a compound unit. In system testing, project management and test management use 3 types of measurements and metrics.


1. Quality Assessment Measurements (QAM):

These measurements are used by the project manager or test manager during testing (monthly once).
a. Stability:
[Graph: defect arrival rate over time; y-axis: no. of defects, x-axis: time]

Duration (Testing)    No. of Defects
First 20 %            80 %
Remaining 80 %        20 %
(the 20-80 / 80-20 rule)

b. Sufficiency:
 Requirements coverage
 Testing techniques coverage
Based on these, the PM decides whether the testing time is sufficient or not.

c. Defect Severity Distribution:

Organization trend limit check. (Sometimes the defect severity distribution is maintained depending on the organization's trend.)
2. Test Management Measurements:
These measurements are used by test lead category people during testing (weekly once).
a. Test Status:
 No. of test cases executed and their outputs
 No. of test cases in execution
 No. of test cases yet to execute.
b. Delays in Delivery:
 Defect arrival rate (from the testing team)
 Defect resolution rate (from the development team, with strong reasons for accept / reject)
 Defect ageing (the time gap between D.A.R & D.R.R)


c. Test Efficiency: (salary increments / hikes depend on this)
 No. of defects detected / person-day
 No. of test cases prepared / person-day
 No. of test cases executed / person-day
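
A rough sketch of computing two of these numbers, defect ageing and per-person-day efficiency (all inputs below are made-up examples):

from datetime import date

def defect_ageing(arrival, resolution):
    # Time gap in days between defect arrival (D.A.R) and resolution (D.R.R).
    return (resolution - arrival).days

def per_person_day(count, persons, days):
    # E.g. no. of defects detected / person-day, as used for test efficiency.
    return count / (persons * days)

print(defect_ageing(date(2024, 1, 1), date(2024, 1, 4)))   # 3 days
print(per_person_day(count=45, persons=3, days=5))         # 3.0 per person-day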

3. Process Capability Measurements:

These measurements are used by the P.M & T.M to improve the testing team's effort.

a. Test Effectiveness:
 Requirements coverage
 Testing techniques coverage

b. Defect Escapes:
 Type-phase analysis (what type of defect, in what phase)
 Defect Removal Efficiency (DRE) = A / (A + B)

where
A  no. of defects detected by the testing team
B  no. of bugs faced by customer-site people after release

Explain the testing process of your company?

In my company the testing process starts with "Test Initiation". In this stage the Project Manager prepares the test methodology for the corresponding project. He decides the reasonable tests depending upon the requirements and releases the test strategy document to the Test Lead. To prepare the "Test Plan", my Test Lead studies the BRS, SRS, design docs, development plan and test strategy. He goes to the HR Manager to talk about team formation. After completion of testing team formation and risk analysis, he prepares the complete "Test Plan". The Test Lead decides the schedule of the different tests, i.e. what to test? when to test? how to test? who will test? After completion of the test plan and its review he takes approval from the Project Manager and provides training to the selected test engineers.

After completion of the required training, people like me concentrate on test case outlines. Based on the outlines we prepare in-depth test case documents. After receiving the build from the developers we perform sanity testing, comprehensive testing and regression testing. At the sanity testing level, we concentrate on the below 8 factors by operating the corresponding initial build:

 Understandable
 Operatable
 Observable
 Controllable
 Consistency
 Simplicity
 Maintainable
 Automatable

At the comprehensive testing level, after receiving a stable build from the development team, we execute all test cases sequentially, either manually or in automation. In manual test execution, we compare the test case's specified expected values with the build's actual values. During this test execution we prepare a "test log" document. This document consists of 3 types of entries.

Passed: All expected values of the test case are equal to all actual values of that build.
Failed: Any one expected value varies from the corresponding actual value of that build.
Blocked: Dependent test cases' execution is postponed to the next cycle (after the modified build) due to wrong parent functionality.
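
A minimal sketch of how one such comparison could be logged (function and field names are assumed for illustration):

def log_result(test_case, actuals, blocked=False):
    # test_case: {"name": str, "expected": dict}; actuals: observed values.
    if blocked:
        # Parent functionality failed, so execution moves to the next cycle.
        return {"test_case": test_case["name"], "entry": "Blocked"}
    # Passed only when every expected value equals the corresponding actual one.
    passed = all(actuals.get(k) == v for k, v in test_case["expected"].items())
    return {"test_case": test_case["name"], "entry": "Passed" if passed else "Failed"}

tc = {"name": "TC_Deposit_01", "expected": {"balance": 1500, "message": "Success"}}
print(log_result(tc, {"balance": 1500, "message": "Success"}))  # Passed
print(log_result(tc, {"balance": 1400, "message": "Success"}))  # Failed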

We report the defects in Excel format. After modifications, the development people release the modified build, on which we conduct regression testing.

At the regression testing level, we make sure that all the fixed bugs are correctly resolved and that there are no side effects, i.e. the old functionality is not affected by the new changes. After completion of all reasonable tests and the closure of defects, the management concentrates on user acceptance testing. At this level they collect feedback from real or model customers. After completion of UAT my test lead prepares the final test summary report. This report is submitted to the customer by the TL or PM.
