
Software Testing Material

Software Testing: Testing is a process of executing a program with the intent of finding errors.

Software Engineering: Software Engineering is the establishment and use of sound engineering principles in order to obtain, economically, software that is reliable and works efficiently on real machines.

Software engineering is based on Computer Science, Management Science, Economics, Communication Skills and an Engineering approach.

What should be done during testing?

Confirming that the product:
• has been developed according to specifications
• works perfectly
• satisfies customer requirements

Why should we do testing?


• Error free superior product
• Quality Assurance to the client
• Competitive advantage
• Cut down costs

How to test?
Testing can be done in the following ways:
• Manually
• Automation (By using tools like WinRunner, LoadRunner, TestDirector …)
• Combination of Manual and Automation.

Software Project: A problem solved by a group of people through a process is called a project.

The phases Information Gathering – Requirements Analysis – Design – Coding – Testing – Maintenance together are called a software project.

Software Project: Problem → Process → Product

Software Development Phases:

Information Gathering: It encompasses requirements gathering at the strategic business level.

Planning: To provide a framework that enables the management to make reasonable estimates of


• Resources
• Cost
• Schedules
• Size

Requirements Analysis: Data, Functional and Behavioral requirements are identified.

• Data Modeling: Defines data objects, attributes, and relationships.
• Functional Modeling: Indicates how data are transformed in the system.
• Behavioral Modeling: Depicts the impact of events.

Design: Design is the engineering representation of product that is to be built.

• Data Design: Transforms the information domain model into the data structures that
will be required to implement the software.
• Architectural design: Relationship between major structural elements of the software.
Represents the structure of data and program components that are required to build a
computer based system.
• Interface design: Creates an effective communication medium between a human and
a computer.
• Component level Design: Transforms structural elements of the software architecture
into a procedural description of software components.

Coding: Translation of the design into source code (machine-readable form).

Testing: Testing is a process of executing a program with the intent of finding errors.

• Unit Testing: It concentrates on each unit (module, component…) of the software as implemented in source code.
• Integration Testing: Putting the modules together and constructing the software architecture.
• System and Functional Testing: The product is validated together with other system elements and tested as a whole.
• User Acceptance Testing: Testing by the user to collect feedback.

Maintenance: Change associated with error correction, adaptation and enhancements.

• Correction: Changes software to correct defects.
• Adaptation: Modification to the software to accommodate changes in its external environment.
• Enhancement: Extends the software beyond its original functional requirements.
• Prevention: Changes software so that it can be more easily corrected, adapted and enhanced.

Business Requirements Specification (BRS): Consists of definitions of customer requirements. Also called CRS/URS (Customer Requirements Specification / User Requirements Specification).


Software Requirements Specification (S/wRS): Consists of the functional requirements to be developed and the system requirements (s/w & h/w) needed to use the application.

Review: A verification method to estimate completeness and correctness of documents.

High Level Design Document (HLDD): Consists of the overall hierarchy of the system in
terms of modules.

Low Level Design Document (LLDD): Consists of every sub module in terms of structural logic (ERD) and backend logic (DFD).

Prototype: A sample model of an application without functionality (screens only) is called a prototype.

White Box Testing: A coding-level testing technique to verify completeness and correctness of the programs with respect to the design. Also called Glass Box Testing or Clear Box Testing.

Black Box Testing: An executable-level (.exe) testing technique to validate the functionality of an application with respect to customer requirements. During this test the engineer validates internal processing depending on the external interface.

Grey Box Testing: Combination of white box and black box testing.

Build: A .exe form of an integrated module set is called a build.

Verification: Are we building the system right?

Validation: Are we building the right system?

Software Quality Assurance (SQA): SQA concentrates on monitoring and measuring the strength of the development process.
Ex: LCT (Life Cycle Testing)

Quality:
• Meet customer requirements
• Meet customer expectations (cost to use, speed in process or performance, security)
• Possible cost
• Time to market

For developing quality software we need LCD and LCT.

LCD (Life Cycle Development): Development in multiple stages, where every stage is verified for completeness.

V model:

Build: When coding-level testing is over and the modules are completely integration tested, the result is called a build (.exe). A build is produced after integration testing.


Test Management: Testers maintain some documents related to every project. They refer to these documents for future modifications.

Fig: V-Model — development stages vs testing stages:
• Information Gathering & Analysis — Assessment of Development Plan, Prepare Test Plan, Requirements Phase Testing
• Design and Coding — Design Phase Testing, Program Phase Testing (WBT)
• Install Build — Functional & System Testing, User Acceptance Testing, Test Environment Process
• Maintenance — Port Testing, Test Software Changes, Test Efficiency

Port Testing: This is to test the installation process.

Change Request: The request made by the customer to modify the software.

Defect Removal Efficiency:

DRE = a / (a + b)

a = Total no. of defects found by testers during testing.
b = Total no. of defects found by the customer during maintenance.

DRE is also called DD (Defect Deficiency).
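As a quick worked illustration (the numbers are hypothetical): if testers find 90 defects during testing and the customer finds 10 more during maintenance, DRE = 90 / (90 + 10) = 0.9, i.e. 90%. A minimal sketch in Python:

    def defect_removal_efficiency(found_in_testing, found_by_customer):
        """DRE = a / (a + b): a = defects found by testers during testing,
        b = defects found by the customer during maintenance."""
        return found_in_testing / (found_in_testing + found_by_customer)

    # Hypothetical numbers: 90 defects caught in testing, 10 escaped to the customer.
    print(defect_removal_efficiency(90, 10))  # 0.9 -> 90%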

BBT, UAT and the test management process are the stages where independent testers or the testing team will be involved.

Refinement form of V-Model: From a cost and time point of view, the V-model is not applicable to small and medium scale companies. These organizations maintain a refinement form of the V-model.


BRS/URS/CRS ↔ User Acceptance Testing
S/wRS ↔ Functional & System Testing
HLDD ↔ Integration Testing
LLDD ↔ Unit Testing
Code

Fig: Refinement Form of V-Model

Development starts with information gathering. After the requirements gathering, the BRS/CRS/URS is prepared. This is done by the Business Analyst.

During requirements analysis all the requirements are analyzed. At the end of this phase the S/wRS is prepared. It consists of the functional requirements (customer requirements) plus the system requirements (h/w + s/w). It is prepared by the System Analyst.

During the design phase two types of designs are done. HLDD and LLDD. Tech Leads will
be involved.

During the coding phase programs are developed by programmers.

During unit testing, they conduct program level testing with the help of WBT techniques.

During integration testing, testers and programmers (or test programmers) integrate the modules and test them with respect to the HLDD.

During system and functional testing, the actual testers are involved and conduct tests based on the S/wRS.

During the UAT customer site people are also involved, and they perform tests based on the
BRS.

As the above model shows, small and medium scale organizations also conduct life cycle testing, but they maintain a separate team only for functional and system testing.

Reviews during Analysis:

The Quality Analyst decides on 5 topics. After completion of information gathering and analysis, a review meeting is conducted to decide the following 5 factors.


1. Are they complete?
2. Are they correct? Or are they the right requirements?
3. Are they achievable?
4. Are they reasonable? ( with respect to cost & time)
5. Are they testable?

Reviews during Design:

After the completion of analysis of customer requirements and their reviews, technical support people (Tech Leads) concentrate on the logical design of the system. At this stage they develop the HLDD and LLDD.

After completing the above design documents, they (Tech Leads) review the documents for correctness and completeness. In this review they can apply the factors below.

• Is the design good? (understandable or easy to refer to)
• Are they complete? (all the customer requirements are satisfied or not)
• Are they correct? Are they the right requirements? (the design flow is correct or not)
• Are they followable? (the design logic is correct or not)
• Do they handle error handling? (the design should specify the negative flow as well as the positive flow)

Fig: Example design flow — a user logs in; a valid user proceeds to the Inbox (using the stored user information), while an invalid user is rejected (error handling).

Unit Testing:
After the completion of design and design reviews, programmers concentrate on coding. During this stage they conduct program-level testing with the help of WBT techniques. WBT is also known as glass box testing or clear box testing.

WBT is based on the code. Senior programmers conduct testing on the programs; WBT is applied at the module level.

There are two types of WBT techniques:

1. Execution Testing
 Basis path coverage (correctness of every statement execution)
 Loops coverage (correctness of loop termination)
 Program technique coverage (fewer memory cycles and CPU cycles during execution)

2. Operations Testing: Whether the software runs under the customer-expected environment platforms (system software such as OS, compilers, browsers, etc.)
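To illustrate execution testing at the unit level, here is a minimal sketch in Python (the function and tests are hypothetical, not from the material): the tests exercise both outcomes of a branch (basis path coverage) and the loop's termination boundary (loops coverage).

    import unittest

    def sum_positive(numbers):
        """Return the sum of the positive values in a list."""
        total = 0
        for n in numbers:      # loops coverage: zero, one and many iterations
            if n > 0:          # basis path coverage: both branch outcomes
                total += n
        return total

    class SumPositiveTest(unittest.TestCase):
        def test_empty_list_terminates_loop(self):
            self.assertEqual(sum_positive([]), 0)

        def test_both_branches(self):
            self.assertEqual(sum_positive([3, -1, 2]), 5)

    if __name__ == "__main__":
        unittest.main()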

Integration Testing: After the completion of unit testing, development people concentrate on integration testing, once unit testing of the dependent modules is complete. During this test, programmers verify the integration of modules with respect to the HLDD (which contains the hierarchy of modules).

There are two types of approaches to conduct Integration Testing:

• Top-down Approach
• Bottom-up approach.

Stub: A called program. It sends control back to the main module instead of to a sub module.
Driver: A calling program. It invokes a sub module instead of the main module.

Top-down: This approach starts testing from the root (main module).

Fig: Top-down — Main calls Sub Module1 and a stub (standing in for Sub Module2).

Bottom-up: This approach starts testing from the lower-level modules. Drivers are used to connect the sub modules (ex: for login, create a driver that supplies a default uid and pwd).

Fig: Bottom-up — a driver (standing in for Main) invokes Sub Module1, which calls Sub Module2.


Sandwich: This approach combines the top-down and bottom-up approaches of integration testing. Here the middle-level modules are tested using drivers and stubs.

Fig: Sandwich — a driver stands in for Main above Sub Module1, and a stub stands in below it in place of Sub Module2 / Sub Module3.
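A minimal sketch of a stub and a driver in Python (module and function names are hypothetical): the stub stands in for an unfinished sub module, and the driver stands in for the unfinished main module.

    # Stub: a called program standing in for the real authentication
    # sub module; it sends control straight back with a canned answer.
    def authenticate_stub(user_id, password):
        return True

    # Module under integration test: depends on the authentication sub module.
    def login(user_id, password, authenticate=authenticate_stub):
        return "Inbox" if authenticate(user_id, password) else "Invalid User"

    # Driver: a calling program standing in for the main module; it invokes
    # the sub module under test with default test data (default uid and pwd).
    def login_driver():
        print(login("default_uid", "default_pwd"))  # prints "Inbox"

    if __name__ == "__main__":
        login_driver()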

System Testing:
• Conducted by a separate testing team
• Follows Black Box testing techniques
• Depends on the S/wRS
• Build-level testing to validate internal processing depending on external interface processing
• This phase is divided into 4 divisions
After the completion of coding and the corresponding tests (unit & integration), the development team releases a fully integrated module set as a build. After receiving a stable build from the development team, the separate testing team concentrates on functional and system testing with the help of BBT.

This testing is classified into 4 divisions.

• Usability Testing (ease of use or not; low priority in testing)
• Functional Testing (functionality is correct or not; medium priority in testing)
• Performance Testing (speed of processing; medium priority in testing)
• Security Testing (to break the security of the system; high priority in testing)

Usability and Functional testing are called core testing; Performance and Security testing techniques are called advanced testing.

Usability testing is static testing. Functional testing is dynamic testing.

From the testers point of view functional and usability tests are important.

Usability Testing: User friendliness of the application or build (WYSIWYG). Usability testing consists of the following subtests.


User Interface Testing

• Ease of use (understandable to end users to operate)
• Look & feel (pleasantness or attractiveness of screens)
• Speed in interface (fewer events to complete a task)

Manual Support Testing: In general, technical writers prepare user manuals after completion of all possible test execution and the resulting modifications. Nowadays, help documentation is released along with the main application.
Fig: System testing order — the development team releases the build; user interface testing comes first, then the remaining system testing techniques (functionality, performance and security tests), and finally manual support testing.

Help documentation is also called the user manual. Actually, user manuals are prepared after the completion of all other system test techniques and after resolving all the bugs.

Functional testing: During this stage of testing, the testing team concentrates on "meet customer requirements". It tests whether the functionality for which the system was developed is met or not.

For every project functionality testing is most important. Most of the testing tools, which are
available in the market are of this type.

The functional testing consists of the following subtests.

Fig: About 80% of system testing effort is functional testing, and about 80% of functional testing is functionality / requirements testing.

Functionality or Requirements Testing: During this subtest, the test engineer validates the correctness of every functionality in the application build, through the coverages below. If the team has less time for system testing, they will do functionality testing only.


Functionality or Requirements Testing has the following coverages:

• Behavioral Coverage (object properties checking)
• Input Domain Coverage (correctness of size and type of every i/p object)
• Error Handling Coverage (preventing negative navigation)
• Calculations Coverage (correctness of o/p values)
• Backend Coverage (data validation & data integrity of database tables)
• URLs Coverage (links execution in web pages)
• Service Levels (order of functionality or services)
• Successful Functionality (combination of all of the above)

All the above coverages are mandatory.

Input Domain Testing: During this test, the test engineer validates size and type of every
input object. In this coverage, test engineer prepares boundary values and equivalence classes
for every input object.

Ex: A login process allows user id and password. User id allows alpha numeric from 4-16
characters long. Password allows alphabet from 4-8 characters long.

Boundary Value Analysis:
Boundary values are used for testing the size and range of an object.

Equivalence Class Partitions:
Equivalence classes are used for testing the type of the object.
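A minimal sketch in Python for the login example above (the helper name and sample classes are illustrative assumptions):

    # BVA probes the edges of a size/range: min, min+1, max-1, max are valid;
    # the just-outside values min-1 and max+1 are expected to be rejected.
    def boundary_values(minimum, maximum):
        return {
            "valid": [minimum, minimum + 1, maximum - 1, maximum],
            "invalid": [minimum - 1, maximum + 1],
        }

    print(boundary_values(4, 16))  # user id lengths (alphanumeric, 4-16)
    print(boundary_values(4, 8))   # password lengths (alphabets, 4-8)

    # ECP tests the *type* of the object with one representative per class.
    user_id_classes = {
        "valid": ["abc123"],               # alphanumeric
        "invalid": ["abc#1!", "    ", ""], # special characters, spaces, empty
    }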

Recovery Testing: This test is also known as reliability testing. During this test, test engineers validate whether the application build can recover from abnormal situations or not.

Ex: power failure during processing, network disconnection, server down, database disconnected, etc.

Fig: Recovery — from an abnormal state, backup & recovery procedures bring the system back to the normal state.

Recovery testing is an extension of error handling testing.


Compatibility Testing: This test is also known as portability testing. During this test, the test engineer validates that the application continues to execute on the customer-expected platforms (OS, compilers, browsers, etc.). During compatibility testing two types of problems arise:

1. Forward compatibility
2. Backward compatibility

Forward compatibility: The application as developed is ready to run, but the project technology or environment (such as the OS) does not support it.

Backward compatibility: The application is not ready to run on the existing technology or environment.
Configuration Testing: This test is also known as hardware compatibility testing. During this test, the test engineer validates whether the application build supports different hardware technologies (devices) or not.

Inter Systems Testing: This test is also known as end-to-end testing. During this test, the test engineer validates whether the application build coexists with other existing software at the customer site to share resources (h/w or s/w).

Fig: Inter-systems example — at a local eSeva center, Water Bill Automation (WBAS), Electricity Bill Automation (EBAS), Telephone Bill Automation (TPBAS) and Income Tax Bill Automation (ITBAS) share a local database server; a newly added component shares the resource with a new remote server.


Fig: Banking Information System — Bank and Loans as components.

In the first example, one system is our application and the other is a sharable resource. In the second example, it is the same system but with different components.

System software level: Compatibility Testing
Hardware level: Configuration Testing
Application software level: Inter Systems Testing

Installation Testing: Testing the application's installation process in the customer-specified environment and conditions.

Fig: Installation — the build, plus the required software components to run the application, is installed into customer-site-like systems through a setup program.

The following conditions or tests are done in this installation process:

• Setup Program: Whether the setup starts or not?
• Easy Interface: During installation, whether it provides an easy interface or not?
• Occupied Disk Space: How much disk space does it occupy after the installation?


Sanitation Testing: This test is also known as garbage testing. During this test, the test engineer finds extra features in the application build with respect to the S/wRS. Most testers may not encounter this type of problem.

Ex: A login screen specified with only User Id, Password and a Login button also shows a 'Forgot Password' link — an extra, unspecified feature.

Parallel or Comparative Testing: During this test, the test engineer compares the application build with similar applications, or with old versions of the same application, to find competitiveness.

This comparative testing can be done from two views:

• Similar types of applications in the market.
• An upgraded version of the application against older versions.

Performance Testing: It is an advanced testing technique and expensive to apply. During this test, the testing team concentrates on the speed of processing.

Performance testing is classified into the subtests below:

1. Load Testing
2. Stress Testing
3. Data Volume Testing
4. Storage Testing

Load Testing:
This test is also known as scalability testing. During this test, the test engineer executes the application under the customer-expected configuration and load to estimate performance.

Load: the number of users trying to access the system at a time.

This test can be done in two ways:

1. Manual testing. 2. By using a tool such as LoadRunner.
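A minimal sketch in Python of the manual approach (the URL and user count are hypothetical); a tool such as LoadRunner automates this kind of measurement and reporting:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8080/login"  # hypothetical application under test

    def one_user(_):
        """One simulated user: time a single request."""
        start = time.time()
        urlopen(URL).read()
        return time.time() - start

    # Customer-expected load: 50 users accessing the system at a time.
    with ThreadPoolExecutor(max_workers=50) as pool:
        timings = list(pool.map(one_user, range(50)))
    print("average response time: %.3f s" % (sum(timings) / len(timings)))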

Stress Testing:
During this test, test engineer executes our application build under customer
expected configuration and peak load to estimate performance.

Data Volume Testing:
A tester conducts this test to find the maximum size of data that the application build can allow or maintain.


Storage Testing:
Execution of the application under huge amounts of resources, to estimate the storage limitations that the application must handle, is called storage testing.

Fig: Performance rises as resources are added, up to a point after which thrashing sets in.
Security Testing: It is also an advanced testing technique and complex to apply. Highly skilled persons with security domain knowledge are needed to conduct these tests.

This test is divided into three subtests.

Authorization: Verifies the user's identity to check whether he or she is an authorized user or not.

Access Control: Also called as Privileges testing. The rights given to a user to do a system
task.

Encryption / Decryption:
Encryption – converting actual data into a secret code which may not be understandable to others.
Decryption – converting the secret data back into actual data.

Fig: Client and server — at the source, data is encrypted before sending; at the destination, it is decrypted. Traffic flows both ways (client → server and server → client).
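A toy illustration in Python (a simple XOR scheme, purely to show the encrypt/decrypt round trip; real systems use vetted algorithms such as AES):

    KEY = 0x5A  # hypothetical shared secret byte

    def encrypt(plaintext: bytes) -> bytes:
        """Convert actual data into a secret code not understandable to others."""
        return bytes(b ^ KEY for b in plaintext)

    def decrypt(ciphertext: bytes) -> bytes:
        """Convert the secret data back into actual data (XOR is symmetric)."""
        return bytes(b ^ KEY for b in ciphertext)

    secret = encrypt(b"password123")
    print(secret)           # unreadable bytes travel between client and server
    print(decrypt(secret))  # b'password123'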

User Acceptance Testing: After completion of all possible system test execution, the organization concentrates on user acceptance testing to collect feedback. To conduct user acceptance tests there are two approaches: Alpha (α) testing and Beta (β) testing.

Note: Software development efforts are of two types: software applications (also called projects) and software products.

Software Application (Project): Requirements come from a client and the project is developed for that one company; it has a specific customer. For this, an alpha test is done.


Product: Requirements come from the market and the product is developed. This software may be used by more than one company and has no specific customer. For this, a β-version or trial version is released in the market for beta testing.

Alpha Testing                                   Beta Testing
For software applications with a specific       For software products.
customer.
By customer-site-like people.                   By real customers.
At the development site.                        In a customer-site-like environment.
Virtual environment.                            Real environment.
Collect feedback.                               Collect feedback.

Testing during Maintenance:

After the completion of UA testing, the organization concentrates on Release Team (RT) formation. This team conducts port testing at the customer site, to estimate the completeness and correctness of the application installation.

During this port testing the release team validates the below factors at the customer site:

• Compact installation (fully and correctly installed or not)
• On-screen displays
• Overall Functionality
• Input device handling
• Output device handling
• Secondary Storage Handling
• OS Error handling
• Co-existence with other Software

The above tests are done by the release team. After the completion of the above testing, the release team gives training and application support at the customer site for a period.

While customer-site people are using the application, they send Change Requests (CR) to the company. Based on its type, a CR is one of two kinds:
1. Enhancement
2. Missed Defect


Fig: Change Request handling —
• Enhancement: goes to the CCB; developers perform an impact analysis and the change, and testers test that software change.
• Missed Defect: impact analysis, perform the change, review the old test process capability to improve it, and test that software change.

Change Control Board: The team which handles customer requests for enhancement changes.

Testing Stages Vs Roles:

Reviews in Analysis – Business Analyst / Functional Lead.
Reviews in Design – Technical Support / Technical Lead.
Unit Testing – Senior Programmer.
Integration Testing – Developer / Test Engineer.
Functional & System Testing – Test Engineer.
User Acceptance Testing – Customer-site people with involvement of the testing team.
Port Testing – Release Team.
Testing during Maintenance / Test Software Changes – Change Control Board.


Testing Team:

Per the refinement form of the V-Model, small and medium scale companies maintain a separate testing team for some of the stages in LCT. In these teams the organisation maintains the roles below.

Quality Control: Defines the objectives of testing.
Quality Assurance: Defines the approach; done by the Test Manager.


Test Manager: Schedules that approach.
Test Lead: Maintains the testing team with respect to the test plan.
Test Engineer: Conducts testing to find defects.

Fig: Organization hierarchy — Quality Control; Quality Assurance; Project Manager ↔ Test Manager; Project Lead ↔ Test Lead; Programmers ↔ Test Engineer / QA Engineer.

In short: QC defines the objectives of testing, QA defines the approach, the Test Manager schedules and plans, the Test Lead applies the plan, and the Test Engineer follows it.

Testing Terminology:

Monkey / Chimpanzee Testing: Covering only the main activities of the application during testing is called monkey testing (used when time is short).

Guerrilla Testing: Covering a single functionality with multiple possibilities is called a guerrilla ride or guerrilla testing (no rules and regulations for testing an issue).

Exploratory Testing: Level-by-level coverage of activities in the application during testing is called exploratory testing (covering main activities first and other activities next).

Sanity Testing: This test is also known as the Tester Acceptance Test (TAT). It checks whether the build released by the development team is stable enough for complete testing or not.

Fig: Development team releases the build → Sanity Test / Tester Acceptance Test → Functional & System Testing.


Smoke Testing: An extra shakeup in sanity testing is called smoke testing: the testing team rejects a build, with reasons, back to the development team before starting testing.

Bebugging: The development team releases a build with known (seeded) bugs to test the testing team.

Bigbang Testing: A single stage of testing after completion of all module development is called bigbang testing. It is also known as informal testing.

Incremental Testing: A multi-stage testing process is called incremental testing. This is also known as formal testing.

Static Testing: Conducting a test without running the application is called static testing.
Ex: User Interface Testing

Dynamic Testing: Conducting a test by running the application is called dynamic testing.
Ex: Functional Testing, Load Testing, Compatibility Testing

Manual Vs Automation: When a tester conducts a test on the application without using any third-party testing tool, the process is called manual testing.

When a tester conducts a test with the help of a software testing tool, the process is called automation.

Typically 40%-60% of tests are automated, selected by impact and criticality.

Need for Automation:

When tools are not available, teams do manual testing only. If the company already has testing tools, they may follow automation.

To verify the need for automation they consider two factors:

Impact of the test: It indicates test repetition.

Ex: A screen takes No1 and No2, multiplies them and shows the Result — such a test is repeated for many data sets.


Criticality: e.g., load testing for 1000 users.

Criticality indicates that a test is complex to apply manually; impact indicates test repetition.

Retesting: Re-execution of the application to conduct the same test with multiple test data is called retesting.

Regression Testing: Re-execution of tests on a modified build, to ensure the bug fix works and to check for side effects, is called regression testing.

Any dependent modules may also suffer side effects.

Fig: On the modified build, the failed tests are re-executed along with the previously passed tests that the modification may have impacted.

Selection of Automation: Before a separate testing team starts project-level testing, the project manager, test manager or quality analyst decides the need for test automation for that project, depending on the factors below.

Type of external interface:
GUI – Automation.
CUI – Manual.

Size of external interface:
Large – Automation.
Small – Manual.

Expected no. of releases:
Several releases – Automation.
Few releases – Manual.

Maturity between expected releases:
More maturity – Manual.
Less maturity – Automation.

Tester efficiency:
Test engineers have knowledge of automation tools – Automation.
No knowledge of automation tools – Manual.

Support from senior management:


Management accepts – Automation.
Management rejects – Manual.

Fig: Test documents and who prepares them —

Company level:
• Testing Policy – C.E.O
• Test Strategy – Test Manager / QA / PM
• Test Methodology – Test Manager / QA / PM

Project level:
• Test Plan – Test Lead
• Test Cases, Test Procedure, Test Script, Test Log, Defect Report – Test Lead, Test Engineer
• Test Summary Report – Test Lead

Testing Policy: It is a company-level document developed by QC people. This document defines the testing objectives needed to develop quality software.

Address

Testing Definition: Verification & validation of s/w
Testing Process: Proper test planning before starting testing
Testing Standard: 1 defect per 250 LOC / 1 defect per 10 FP
Testing Measurements: QAM, TMM, PCM

CEO Sign

QAM: Quality Assessment Measurements
TMM: Test Management Measurements
PCM: Process Capability Measurements

Note: The test policy document indicates the trend of the organization.

Test Strategy:
1. Scope & Objective: Definition, need and purpose of testing in your organization.
2. Business Issues: Budget Controlling for testing
3. Test approach: defines the testing approach between development stages and testing
factors.
TRM: Test Responsibility Matrix or Test Matrix defines mapping between test factors
and development stages.
4. Test environment specifications: Required test documents developed by testing team
during testing.
5. Roles and Responsibilities: Defines names of jobs in testing team with required
responsibilities.
6. Communication & Status Reporting: Required negotiation between two consecutive
roles in testing.
7. Testing measurements and metrics: To estimate work completion in terms of Quality
Assessment, Test management process capability.
8. Test Automation: Possibilities to go test automation with respect to corresponding
project requirements and testing facilities / tools available (either complete
automation or selective automation)
9. Defect Tracking System: Required negotiation between the development and testing
team to fix defects and resolve.
10. Change and Configuration Management: required strategies to handle change requests
of customer site.
11. Risk Analysis and Mitigations: Analyzing of future common problems appears during
testing and possible solutions to recover.
12. Training plan: Need of training for testing to start/conduct/apply.

Test Factor: A test factor defines a testing issue. There are 15 common test factors in s/w testing.

Ex: QC defines quality; the PM/QA/TM maps it to test factors; the TL selects testing techniques; the TE writes test cases.

PM/QA/TM – Ease of use
TL – UI testing
TE – MS 6 rules

PM/QA/TM – Portable
TL – Compatibility Testing
TE – Run on different OSs


Test Factors:
1. Authorization: Validation of users to connect to application
Security Testing
Functionality / Requirements Testing
2. Access Control: Permission to valid user to use specific service
Security Testing
Functionality / Requirements Testing
3. Audit Trail: Maintains metadata about operations
Error Handling Testing
Functionality / Requirements Testing
4. Correctness: Meet customer requirements in terms of functionality
All black box Testing Techniques
5. Continuity in Processing: Inter process communication
Execution Testing
Operations Testing
6. Coupling: Co existence with other application in customer site
Inter Systems Testing
7. Ease of Use: User friendliness
User Interface Testing
Manual Support Testing
8. Ease of Operate: Ease in operations
Installation testing
9. File Integrity: Creation of internal files or backup files
Recovery Testing
Functionality / Requirements Testing
10. Reliability: Recover from abnormal situations or not. Backup files using or not
Recovery Testing
Stress Testing
11. Portable: Run on customer expected platforms
Compatibility Testing
Configuration Testing
12. Performance: Speed of processing
Load Testing
Stress Testing
Data Volume Testing
Storage Testing
13. Service Levels: Order of functionalities
Stress Testing
Functionality / Requirements Testing
14. Methodology: Follows standard methodology during testing
Compliance Testing
15. Maintainable: Whether the application is serviceable to customers over a long time or not
Compliance Testing (mapping between quality factors and testing)

Quality Gap: A conceptual gap between Quality Factors and Testing process is called as
Quality Gap.

Test Methodology: The test strategy defines the overall approach. To convert the overall approach into a project-level approach, the quality analyst / PM defines the test methodology.
Step 1: Collect the test strategy


Step 2: Determine the project type

Project Type  | Info. Gathering & Analysis | Design | Coding | System Testing | Maintenance
Traditional   | Y | Y | Y | Y | Y
Off-the-Shelf | X | X | X | Y | X
Maintenance   | X | X | X | X | Y

Step 3: Determine the application type: Depending on the application type and requirements, the QA decreases the number of columns in the TRM.
Step 4: Identify risks: Depending on tactical risks, the QA decreases the number of factors (rows) in the TRM.
Step 5: Determine the scope of the application: Depending on future requirements / enhancements, the QA tries to add back some of the deleted factors (rows in the TRM).
Step 6: Finalize the TRM for the current project.
Step 7: Prepare the test plan for work allocation.

Testing Process:

Test Initiation → Test Planning → Test Design → Test Execution → Test Closure
(Test execution includes regression testing, defect reporting and test reports.)

PET (Process Experts Tools and Technology): It is an advanced testing process developed by HCL, Chennai. This process is approved by the QA forum of India. It is a refinement form of the V-Model.


Fig: PET process flow —
• Information Gathering (BRS) → Analysis (S/wRS) → Design (HLDD & LLDD) → Coding → Unit Testing → Integration Testing → Initial Build.
• In parallel, the PM/QA performs Test Initiation and the Test Lead performs Test Planning; test engineers study the S/wRS & design documents and carry out Test Design.
• The initial build goes through Level-0 (Sanity / Smoke / TAT), then Test Automation and Test Batch creation.
• A batch is selected and executed (Level-1); on each modified build, regression is run (Level-2). Any mismatch is reported in an independent defect report and that batch is suspended until defect fixing and bug resolving; otherwise testing proceeds to Test Closure.
• Final Regression / Pre-Acceptance / Release / Post-Mortem / Level-3 testing → User Acceptance Test → Sign Off.

Test Planning: After completion of test initiation, the test plan author concentrates on the test plan:

What to test – Development Plan
How to test – S/wRS
When to test – Design Documents
Who tests – Team Formation

Fig: The development plan, S/wRS, design documents and team formation feed the planning steps — identify tactical risks, prepare the test plan, review the test plan. Outputs: Test Plan and TRM.

1. Team Formation
In general the test planning process starts with testing team formation, which depends on the factors below:

• Availability of testers
• Test duration
• Availability of test environment resources

The above three are dependent factors.

Test Duration:
Common market test durations for various types of projects:

C/S, Web, ERP projects – SAP, VB, Java – Small – 3-5 months
System software – C, C++ – Medium – 7-9 months
Machine critical – Prolog, LISP – Big – 12-15 months

System software projects: network, embedded, compilers…
Machine critical software: robotics, games, knowledge bases, satellite, air traffic.

2. Identify Tactical Risks

After completion of team formation, the test plan author concentrates on risk analysis and mitigations.

1) Lack of knowledge of the domain
2) Lack of budget
3) Lack of resources (h/w or tools)
4) Lack of test data (amount)
5) Delays in deliveries (server down)
6) Lack of development process rigor
7) Lack of communication (ego problems)

3. Prepare Test Plan


Format:
1) Test Plan id: Unique number or name
2) Introduction: About Project
3) Test items: Modules
4) Features to be tested: Responsible modules to test
5) Feature not to be tested: Which ones and why not?
6) Feature pass/fail criteria: When above feature is pass/fail?
7) Suspension criteria: Abnormal situations during above features testing.
8) Test environment specifications: Required docs to prepare during testing
9) Test environment: Required H/w and S/w
10) Testing tasks: what are the necessary tasks to do before starting testing
11) Approach: List of Testing Techniques to apply
12) Staff and training needs: Names of selected testing Team
13) Responsibilities: Work allocation to above selected members
14) Schedule: Dates and timings
15) Risks and mitigations : Common non technical problems
16) Approvals: Signatures of PM/QA and test plan author

4. Review Test Plan

After completing the test plan, the test plan author concentrates on reviewing that document for completeness and correctness. Selected testers are also involved in this review to give feedback. In this review meeting, the testing team conducts coverage analysis:

• S/wRS-based coverage (what to test)
• Risk-based coverage (from the risk analysis point of view)
• TRM-based coverage (whether this plan covers all the tests given in the TRM)

Test Design:
After completion of the test plan and the required training days, every selected test engineer concentrates on test design for his or her responsible modules. In this phase the test engineer prepares a list of testcases to conduct the defined testing on the responsible modules.
There are three basic methods for preparing testcases for core-level testing:

 Business logic based testcase design
 Input domain based testcase design
 User interface based testcase design

Business logic based testcase design:

In general, test engineers write the list of testcases based on the usecases / functional specifications in the S/wRS. A usecase in the S/wRS defines how a user uses a specific functionality in the application.


BRS

S/wRS
Usecases +
Functional TestCases
Specifications

HLDD

LLDD

Coding .Exe

To prepare testcases from usecases we can follow the approach below:

Step 1: Collect the usecases of the responsible modules.
Step 2: Select a usecase and its dependencies (dependent & determinant):
 Step 2-1: identify the entry condition
 Step 2-2: identify the input required
 Step 2-3: identify the exit condition
 Step 2-4: identify the output / outcome
 Step 2-5: study the normal flow
 Step 2-6: study the alternative flows and exceptions
Step 3: Prepare the list of testcases based on the above study.
Step 4: Review the testcases for completeness and correctness.

TestCase Format:

After completing testcase selection for the responsible modules, the test engineer prepares an IEEE-format document for every test condition.

TestCase Id: Unique number or name
TestCase Name: Name of the test condition
Feature to be tested: Module / Feature / Service
TestSuit Id: Parent batch id, in which this case participates as a member
Priority: Importance of the testcase
 P0 – Basic functionality
 P1 – General functionality (i/p domain, error handling…)
 P2 – Cosmetic testcases
 (Ex: P0 – OS; P1 – different OSs; P2 – look & feel)
Test Environment: Required h/w and s/w to execute the test case
Test Effort: (person per hour) time to execute this test case (e.g. 20 mins)
Test Duration: Date of execution
Test Setup: Necessary tasks to do before starting this case execution
Test Procedure: Step-by-step procedure to execute this testcase.


Step No. | Action | I/p Required | Expected Result | Defect ID | Comments

The first four columns are filled during test design; the Defect ID and Comments columns are filled during test execution.

TestCase Pass/Fail Criteria: When the testcase passes, and when it fails.
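Purely for illustration, a testcase in this format could be captured as a record like the sketch below (all field values are hypothetical):

    testcase = {
        "id": "TC_LOGIN_001",
        "name": "Valid user id and password",
        "feature": "Login",
        "testsuit_id": "TS_LOGIN",
        "priority": "P0",  # basic functionality
        "environment": "Windows + browser",
        "effort": "20 mins",
        "setup": "Test user exists in the database",
        "procedure": [
            # (step action, i/p required, expected result)
            ("Open login page", "-", "Login page displayed"),
            ("Enter user id and password", "valid uid / pwd", "Fields accept input"),
            ("Click Login", "-", "Inbox page displayed"),
        ],
    }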

Input Domain based TestCase Design:

To prepare functionality and error handling testcases, test engineers use the usecases or functional specifications in the S/wRS. To prepare input domain testcases, test engineers depend on the data model of the project (ERD & LLDD).

Step 1: Identify input attributes in terms of size, type and constraints (size – range; type – int, float; constraint – primary key).
Step 2: Identify the critical attributes in that list, which participate in data retrievals and manipulations.
Step 3: Identify the non-critical attributes, which are merely input/output.
Step 4: Prepare BVA & ECP for every attribute.

Input Attribute | ECP (Type): Valid | ECP (Type): Invalid | BVA (Size/Range): Minimum | BVA (Size/Range): Maximum

Fig: Data Matrix

User Interface based testcase design:

To conduct UI testing, the test engineer writes a list of testcases based on organization-level UI rules and global UI conventions. For preparing these UI testcases they do not study the S/wRS, LLDD, etc.
(Functionality testcase source: S/wRS. I/p domain testcase source: LLDD.)

Testcases (applicable to all projects):

Testcase 1: Spell check
Testcase 2: Graphics check (alignment, font, style, text, size — Microsoft 6 rules)
Testcase 3: Are error messages meaningful? (Error handling testing checks that the related message appears; here we test that the message is easy to understand.)
Testcase 4: Accuracy of data displayed (WYSIWYG) (amount, date of birth)
Testcase 5: Accuracy of data in the database as a result of user input.
(Testcase 4 is at screen level; testcase 5 at database level.)

Ex: A balance entered on the form as 66.666 is stored in the table (via the DSN) as 66.7.


Testcase 6: Accuracy of data in the database as a result of external factors?

Ex: A mail with a .gif attachment passes through a mail server that compresses the image; on import, the image is decompressed — the data must remain accurate.

Testcase 7: Are help messages meaningful? (The first 6 testcases belong to UI testing; testcase 7 is manual support testing.)

Review Testcases: After completion of testcase design with the required documentation [IEEE] for the responsible modules, the testing team along with the test lead concentrates on reviewing the testcases for completeness and correctness. In this review the testing team conducts coverage analysis:

1. Business requirements based coverage
2. UseCases based coverage
3. Data model based coverage
4. User interface based coverage
5. TRM based coverage

Fig: Requirements Validation / Traceability Matrix — each business requirement is mapped to its sources (usecases, data model…) and to the testcases that cover it.
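A minimal sketch in Python of such a matrix (requirement and testcase ids are hypothetical):

    # Traceability matrix: business requirement -> sources and covering testcases.
    rtm = {
        "BR-01 login":    {"sources": ["UC-01", "data model: users"],
                           "testcases": ["TC-001", "TC-002"]},
        "BR-02 transfer": {"sources": ["UC-02"], "testcases": ["TC-010"]},
        "BR-03 report":   {"sources": ["UC-03"], "testcases": []},
    }

    # Coverage analysis: flag any requirement with no covering testcase.
    for requirement, row in rtm.items():
        if not row["testcases"]:
            print("Not covered:", requirement)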


Test Execution:

Fig: The development site releases the initial build to the testing site. Flow: Level-0 (Sanity / Smoke / TAT) → stable build → test automation → Level-1 (comprehensive) → defect report → defect fixing → bug resolving (typically 8-9 cycles) → modified build → Level-2 (regression) → Level-3 (final regression).
Test Execution Levels Vs Test Cases:
Level 0 – P0
Level 1 – P0, P1 and P2 testcases as batches
Level 2 – Selected P0, P1 and P2 testcases with respect to modifications
Level 3 – Selected P0, P1 and P2 testcases on the build.

Test Harness = Test Environment + Test Bed

Build Version Control: A unique numbering system for builds (FTP or SMTP).

Fig: The build is copied from the server softbase to the test environment via FTP.
After defect reporting, the testing team may receive:

• a modified build, or
• modified programs.


To maintain the original builds and modified builds, the development team uses version control software.

Fig: From the server, the testing team receives either (1) a modified build or (2) modified programs that are embedded into the old build in the test environment.

Level 0 (Sanity / Smoke / TAT):

After receiving the initial build from the development team, the testing team installs it into the test environment. After installation, the testing team checks the basic functionality of the build to decide the completeness and correctness of test execution.

During this testing, the testing team observes the factors below on the initial build:

1. Understandable: The functionality is understandable to the test engineer.
2. Operable: The build works without runtime errors in the test environment.
3. Observable: The tester can follow process completion and continuation in the build.
4. Controllable: Processes can be started/stopped explicitly.
5. Consistent: Stable navigation.
6. Maintainable: No reinstallation needed.
7. Simplicity: Short navigation to complete a task.
8. Automatable: The interfaces support automation test script creation.

This level-0 testing is also called testability or octangle testing (because it is based on 8 factors).

Test Automation: After receiving a stable build from the development team, the testing team concentrates on test automation.

Test automation is of two types: complete and selective. In selective automation, all P0 and carefully selected P1 testcases are automated.


Level-1 (Comprehensive Testing):

After receiving a stable build from the development team and completing automation, the testing team starts executing its testcases as batches. A test batch is also known as a test suite or test set. In every batch, the base state of one testcase is the end state of the previous testcase. During batch execution, test engineers prepare a test log with three types of entries:

1. Passed: all expected values are equal to the actual values.
2. Failed: any expected value differs from the actual value.
3. Blocked: a testcase this one depends on has failed.

Fig: Testcase status flow — In Queue → In Progress → Passed / Failed / Partial Pass-Fail / Blocked / Skip → Closed.

Level-2 Regression Testing: This regression testing is actually part of Level-1 testing. During comprehensive test execution, the testing team reports mismatches to the development team as defects. After receiving a defect, the development team modifies the coding to resolve the accepted defects. When they release the modified build, the testing team concentrates on regression testing before conducting the remaining comprehensive testing.

Severity: The seriousness of the defect, defined by the tester through impact and criticality; it determines how much regression testing is needed. Organizations generally use three severity levels: High, Medium and Low.

High: Without resolving this mismatch the tester cannot continue the remaining testing (show stopper).
Medium: Testing can continue, but the defect must be resolved.
Low: May or may not be resolved.

Ex: High: database not connecting.
Medium: input domain wrong (accepting wrong values too).
Low: spelling mistake.

Ex: X, Y, Z are three dependent modules. If you find a bug in Z:
re-test Z and its dependent modules – High;
re-test the full Z module – Medium;
re-test part of the Z module – Low.


Fig: Regression scope by severity of the resolved bug (on the modified build, to ensure bug resolution):

High: all P0, all P1, selected P2
Medium: all P0, selected P1, some P2
Low: some P0, some P1, some P2
Possible ways to do Regression Testing:

Case 1: If the development team resolved a bug of high severity, the testing team re-executes all P0, all P1 and carefully selected P2 test cases with respect to that modification.

Case 2: If the development team resolved a bug of medium severity, the testing team re-executes all P0, selected P1 [80-90%] and some P2 test cases with respect to that modification.

Case 3: If the development team resolved a bug of low severity, the testing team re-executes some of the P0, P1 and P2 test cases with respect to that modification.

Case 4: If the development team performs modifications due to project requirement changes, the testing team re-executes all P0 and selected test cases.
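A minimal sketch in Python of this selection rule (the exact shares and names are illustrative assumptions):

    import random

    def select_regression_tests(severity, p0, p1, p2):
        """Pick testcases to re-execute based on the resolved bug's severity."""
        def some(tests, share):
            return random.sample(tests, int(len(tests) * share))
        if severity == "high":
            return p0 + p1 + some(p2, 0.5)              # all P0, all P1, selected P2
        if severity == "medium":
            return p0 + some(p1, 0.85) + some(p2, 0.2)  # all P0, 80-90% of P1
        return some(p0, 0.3) + some(p1, 0.3) + some(p2, 0.1)  # low severity

    p0 = ["TC-%d" % i for i in range(10)]
    print(select_regression_tests("high", p0, p1=["TC-P1-1"], p2=["TC-P2-1", "TC-P2-2"]))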

Severity: with respect to functionality. Priority: with respect to the customer.

All defects do not have the same severity; nor do all defects have the same priority.

Severity: seriousness of the defect. Priority: importance of the defect.

Severity: important from the project functionality point of view. Priority: important from the customer point of view.


Defect Reporting and Tracking:

During comprehensive test execution, test engineers report mismatches to the development team as defect reports in IEEE format.

1. Defect Id: A unique number or name.
2. Defect Description: Summary of the defect.
3. Build Version Id: Parent build version number.
4. Feature: Module / functionality.
5. Testcase Name and Description: Failed testcase name with description.
6. Reproducible: (Yes / No)
7. If yes, attach the test procedure.
8. If no, attach snapshots and strong reasons.
9. Severity: High / Medium / Low.
10. Priority
11. Status: New / Reopen (after 3 reopens, new programs are written)
12. Reported by: Name of the test engineer.
13. Reported on: Date of submission.
14. Suggested fix: optional.
15. Assigned to: Name of the PM.
16. Fixed by: PM or Team Lead.
17. Resolved by: Name of the developer.
18. Resolved on: Date of solving.
19. Resolution type:
20. Approved by: Signature of the PM.
Defect Age: The time gap between 'resolved on' and 'reported on'.
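For instance, a hypothetical sketch in Python:

    from datetime import date

    reported_on = date(2024, 3, 1)   # date of submission (hypothetical)
    resolved_on = date(2024, 3, 11)  # date of solving (hypothetical)

    print((resolved_on - reported_on).days, "days")  # defect age: 10 days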
Defect Submission:

Fig: Large scale organizations — the test engineer reports to the test lead, the test lead to the test manager, and the test manager to the QA; on the development side the defect flows from the project manager through the team lead to the developers, via transmittal reports.


Defect Submission:

Fig: Small scale organizations — the test engineer reports to the test lead, who submits the defect directly to the project manager; the project manager assigns it through the team lead to the developers, via transmittal reports.
Defect Status Cycle:

New → Fixed (Open, Reject, Deferred) → Closed → Reopen (if necessary)
Bug Life Cycle:


Detect Defect → Reproduce Defect → Report Defect → Fix Bug → Resolve Bug → Close Bug

Resolution Type:
The testing team sends the defect report to the development team; the development team returns a resolution type.

There are 12 resolution types:


1. Duplicate: Rejected due to defect like same as previous reported defect.
2. Enhancement: Rejected due to defect related to future requirement of the customer.
3. H/w Limitation: Raised due to limitations of hardware (Rejected)
4. S/w Limitation: Rejected due to limitation of s/w technology.
5. Functions as design: Rejected due to coding is correct with respect to design
documents.
6. Not Applicable: Rejected due to lack of correctness in defect.
7. No plan to fix it: Postponed part timely (Not accepted and rejected)
8. Need for More Information: Developers want more information to fix. (Not accepted
and rejected)
9. Not Reproducible: Developer want more information due to the problem is not
reproducible. (Not accepted and rejected)
10. User misunderstanding: (Both argues you r thinking wrong) (Extra negotiation
between tester and developer)
11. Fixed: Opened a bug to resolve (Accepted)
12. Fixed Indirectly: Differed to resolve (Accepted)
Types of Bugs:


UI bugs: (low severity)
Spelling mistake: high priority
Wrong alignment: low priority

Input domain bugs: (medium severity)
Object not taking expected values: high priority
Object taking unexpected values: low priority

Error handling bugs: (medium severity)
Error message not coming: high priority
Error message coming but not understandable: low priority

Calculation bugs: (high severity)
Intermediate results failure: high priority
Final outputs wrong: low priority

Service level bugs: (high severity)
Deadlock: high priority
Improper order of services: low priority

Load condition bugs: (high severity)
Memory leakage under load: high priority
Doesn't allow the customer-expected load: low priority

Hardware bugs: (high severity)
Printer not connecting: high priority
Invalid printout: low priority

Boundary related bugs: (medium severity)

Id control bugs: (medium severity) wrong version no., logo

Version control bugs: (medium severity) differences between two consecutive versions

Source bugs: (medium severity) mismatches in help documents

Test Closure:
After completion of all possible testcase execution and the associated defect reporting and tracking, the test lead conducts a test execution closure review along with the test engineers.

In this review the test lead relies on coverage analysis:

• BRS-based coverage
• UseCases-based coverage (modules)
• Data model based coverage (i/p and o/p)
• UI-based coverage (rules and regulations)
• TRM-based coverage (whether the PM-specified tests are covered or not)


Analysis of the deferred bugs:

Whether the deferred bugs are postponable or not.

The testing team tries to execute the high-priority test cases once again to confirm the correctness of the master build.

Final Regression Process:

Gather requirements → effort estimation (person/hr) → plan regression → execute regression → report regression (a repeating cycle).

User Acceptance Testing:

After completion of the test execution closure review and the final regression, the organization concentrates on UAT to collect feedback from the customer / customer-site-like people.
There are two approaches:
1. Alpha testing
2. Beta testing

SignOff:
After completion of UAT and the resulting modifications, the test lead creates the Test Summary Report (TSR). It is part of the s/w release note. This TSR consists of:

1. Test strategy / methodology (what tests)
2. System test plan (schedule)
3. Traceability matrix (mapping requirements to testcases)
4. Automated test scripts (TSL + GUI map entries)
5. Final bug summary report


Bug Id | Description | Found By | Status (Closed / Deferred) | Severity | Module / Functionality | Comments

Case Study (schedule for 5 months):

Deliverable | Responsibility | Completion Time
TestCase selection | Test Engineer | 20-30 days
TestCase review | Test Lead, Test Engineer | 4-5 days
RVM / RTM | Test Lead | 1 day
Sanity & test automation | Test Engineer | 20-30 days
Test execution as batches | Test Engineer | 40-60 days
Test reporting | Test Engineer & Test Lead | ongoing during test execution
Communication and status reporting | Everyone in the testing team | weekly, twice
Final regression testing & closure review | Test Engineer and Test Lead | 4-5 days
User acceptance testing | Customer-site people (with involvement of the testing team) | 5-10 days
Test summary report (sign off) | Test Lead | 1-2 days

Reference books:
Testing Computer Software – Cem Kaner
Effective Methods for Software Testing – William E. Perry
Software Testing Tools – Dr. K.V.K.K. Prasad
Nagaraju_testing@yahoo.com

Common interview questions:

What are you doing?
What type of testing process is going on in your company?
What type of test documentation is prepared by your organization?
What type of test documentation will you prepare?
What is your involvement in that?
What are the key components of your company's test plan?
What format do you use for test cases?
How does your PM select what types of tests are needed for your project?
When will you go for automation?
What is regression testing? When will you do it?
How do you report defects to the development team?
How do you know whether a defect was accepted or rejected?
What do you do when your defect is rejected?
How will you learn a project without documentation?
What is the difference between defect age and build interval period?
How will you test without documents?
What do you mean by green box testing?

Experience on WinRunner.
Exposure to TD…
WinRunner 8/10.
LoadRunner 7/10.


Auditing:
During testing and maintenance, the testing team conducts audit meetings to estimate status and required improvements. In this auditing process they can use three types of measurements and metrics.

Quality Measurement Metrics:
These measurements are used by the QA or PM to estimate the achievement of quality in the current project's testing [monthly once].

Product Stability:

Fig: No. of bugs vs duration — 20% of the testing finds 80% of the bugs; the remaining 80% of the testing finds the remaining 20% of the bugs.

Sufficiency:
• Requirements coverage
• Type-trigger analysis (mapping between covered requirements and applied tests)

Defect severity distribution:
• Organization trend limit check

Test Management Measurements:
These measurements are used by the test lead during test execution of the current project [weekly twice].

Test Status
• Executed tests
• In progress
• Yet to execute

Delays in Delivery
• Defect Arrival Rate
• Defect Resolution Rate
• Defect Aging
Test Effort
• Cost of finding a defect (Ex: 4 defects / person day)


Process Capability Measurements:
These measurements are used by the quality analyst and test management to improve the capability of the testing process for upcoming projects. (It depends on feedback from the maintenance of old projects.)

Test Efficiency
• Type-Trigger Analysis
• Requirements Coverage

Defect Escapes
• Type-Phase analysis.
(What type of defects my testing team missed in which phase of testing)

Test Effort
• Cost of finding a defect (Ex: 4 defects / person day)

This topic looks at Static Testing techniques. These techniques are referred to as "static"
because the software is not executed; rather the specifications, documentation and source
code that comprise the software are examined in varying degrees of detail.

There are two basic types of static testing. One of these is people-based and the other is tool-
based. People-based techniques are generally known as “reviews” but there are a variety of
different ways in which reviews can be performed. The tool-based techniques examine source
code and are known as "static analysis". Both of these basic types are described in separate
sections below.

What are Reviews?


“Reviews” is the generic name given to people-based static techniques. More or less any
activity that involves one or more people examining something could be called a review.
There are a variety of different ways in which reviews are carried out across different
organisations and in many cases within a single organisation. Some are very formal, some are
very informal, and many lie somewhere between the two. The chances are that you have been
involved in reviews of one form or another.
One person can perform a review of his or her own work or of someone else’s work.
However, it is generally recognised that reviews performed by only one person are not as
effective as reviews conducted by a group of people all examining the same document (or
whatever it is that is being reviewed).

Review techniques for individuals


Desk checking and proof reading are two techniques that can be used by individuals to
review a document such as a specification or a piece of source code. They are basically the
same processes: the reviewer double-checks the document or source code on their own. Data
stepping is a slightly different process for reviewing source code: the reviewer follows a set
of data values through the source code to ensure that the values are correct at each step of the
processing.

Review techniques for groups


The static techniques that involve groups of people are generically referred to as reviews.
Reviews can vary a lot from very informal to highly formal, as will be discussed in more
detail shortly. Two examples of types of review are walkthroughs and Inspection. A
walkthrough is a form of review that is typically used to educate a group of people about a
technical document. Typically the author "walks" the group through the ideas to explain them
and so that the attendees understand the content. Inspection is the most formal of all the
formal review techniques. Its main focus during the process is to find faults, and it is the most
effective review technique in finding them (although the other types of review also find some
faults). Inspection is discussed in more detail below.
Reviews and the test process

Benefits of reviews
There are many benefits from reviews in general. They can improve software development
productivity and reduce development timescales. They can also reduce testing time and cost.
They can lead to lifetime cost reductions throughout the maintenance of a system over its
useful life. All this is achieved (where it is achieved) by finding and fixing faults in the
products of development phases before they are used in subsequent phases. In other words,
reviews find faults in specifications and other documents (including source code) which can
then be fixed before those specifications are used in the next phase of development.
Reviews generally reduce fault levels and lead to increased quality. This can also result in
improved customer relations.

Reviews are cost-effective


There are a number of published figures to substantiate the cost-effectiveness of reviews.
Freedman and Weinberg quote a ten times reduction in faults that come into testing with a
50% to 80% reduction in testing cost. Yourdon in his book on Structured Walkthroughs
found that faults were reduced by a factor of ten. Gilb and Graham give a number of
documented benefits for software Inspection, including 25% reduction in schedules, a 28
times reduction in maintenance cost, and finding 80% of defects in a single pass (with a
mature Inspection process) and 95% in multiple passes.

What can be Inspected?


Anything written down can be Inspected. Many people have the impression that Inspection
applies mainly to code (probably because Fagan's original article was on "Design and code
inspection"). However, although Inspection can be performed on code, it gives more value if
it is performed on more "upstream" documents in the software development process. It can be
applied to contracts, budgets, and even marketing material, as well as to policies, strategies,
business plans, user manuals, procedures and training material. Inspection also applies to all
types of system development documentation, such as requirements, feasibility studies and
designs. It is also very appropriate to apply to all types of test documentation such as test
plans, test designs and test cases. In fact even with Fagan's original method, it was found to
be very effective applied to testware.
What can be reviewed?
Anything that can be Inspected can also be reviewed, but reviews can apply to more things
than just those ideas that are written down. Reviews can be done on visions, strategic plans
and "big picture" ideas. Project progress can be reviewed to assess whether work is
proceeding according to the plans. A review is also the place where major decisions may be
made, for example about whether or not to develop a given feature.
Reviews and Inspections are complementary. Inspection excludes discussion and solution
optimising, but these activities are often very important. Any type of review that tries to
combine more than one objective tends not to work as well as those with a single focus. It
works better to use Inspection to find faults and to use reviews to discuss, come to a
consensus and make decisions.
What to review / Inspect?
Looking at the ‘V’ life cycle diagram that was discussed in Session 2, reviews and
Inspections apply to everything on the left-hand side of the V-model. Note that the reviews
apply not only to the products of development but also to the test documentation that is
produced early in the life cycle. We have found that reviewing the business needs alongside
the Acceptance Tests works really well. It clarifies issues that might otherwise have been
overlooked. This is yet another way to find faults as early as possible in the life cycle so that
they can be removed at the least cost.
Costs of reviews
You cannot gain the benefits of reviews without investing in doing them, and this does have a
cost. As a rough guide, something between 5% and 15% of project effort would typically be
spent on reviews. If Inspections are being introduced into an organisation, then 15% is a
recommended guideline. Once the Inspection process is mature, this may go down to around
5%. Note that 10% is half a day a week.
Remember that the cost of reviews always needs to be balanced against the cost of not doing
them, and finding the faults (which are already there) much later when it will be much more
expensive to fix them.
The costs of reviews are mainly in people's time, i.e. it is an effort cost, but the cost varies
depending on the type of review. The leader or moderator of the review may need to spend
time in planning the review (this would not be done for an informal review, but is required
for Inspection). The studying of the documents to be reviewed by each participant on their
own is normally the main cost (although in practice this may not be done as thoroughly as it
should). If a meeting is held, the cost is the length of the meeting times the number of people
present. The fixing of any faults found or the resolution of issues found may or may not be
followed up by the leader. In the more formal review techniques, metrics or statistics are
recorded and analysed to ensure the continued effectiveness and efficiency of the review
process. Process improvement should also be a part of any review process, so that lessons
learned in a review can be folded back into development and testing processes. (Inspection
formally includes process improvement; most other forms of review do not.)

Types of review
We have now established that reviews are an important part of software testing. Testers
should be involved in reviewing the development documents that tests are based on, and
should also review their own test documentation.
In this section, we will look at different types of reviews, and the activities that are done to a
greater or lesser extent in all of them. We will also look at the Inspection process in a bit
more detail, as it is the most effective of all review types.

Characteristics of different review types


Informal review
As its name implies, this is very much an ad hoc process. Normally it simply consists of
someone giving their document to someone else and asking them to look it over. A document
may be distributed to a number of people, and the author of the document would hope to
receive back some helpful comments. It is a very cheap form of review because there is no
monitoring of metrics, no meeting and no follow-up. It is generally perceived to be useful,
and compared to not doing any reviews at all, it is. However, it is probably the least effective
form of review (although no one can prove that since no measurements are ever done!)
Technical review or Peer review
A technical review may have varying degrees of formality. This type of review does focus on
technical issues and technical documents. A peer review would exclude managers from the
review. The success of this type of review typically depends on the individuals involved -
they can be very effective and useful, but sometimes they are very wasteful (especially if the
meetings are not well disciplined), and can be rather subjective. Often this level of review
will have some documentation, even if just a list of issues raised. Sometimes metrics will be
kept. This type of review can find important faults, but can also be used to resolve difficult
technical problems, for example deciding on the best way to implement a design.
Decision-making review
This type of review is closely related to the previous one (in fact the syllabus does not
distinguish them). In this type of review, which may be technical or managerial, the focus is
on discussing the issues, coming to a consensus and making decisions, for example about
whether a given feature should be included in the next release or not.
Walkthrough
A walkthrough is typically led by the author of a document, for the purpose of educating the
participants about the content so that everyone understands the same thing. A walkthrough
may include "dry runs" of business scenarios to show how the system would handle certain
specific situations. For technical documents, it is often a peer group technique.
Inspection
An Inspection is the most formal of the formal review techniques. There are strict entry and
exit criteria to the Inspection process, it is led by a trained Leader or moderator (not the
author), there are defined roles for searching for faults based on defined rules and checklists.
Metrics are a required part of the process.
Characteristics of reviews in general
Objectives and goals
The objectives and goals of reviews in general normally include the verification and
validation of documents against specifications and standards.
Some types of review also have an objective of achieving a consensus among the attendees
(but not Inspection).
Some types of review have process improvement as a goal (this is formally included in
Inspection).
Activities
There are a number of activities that may take place for any review.
The planning stage is part of all except informal reviews.
In Inspection (and possibly other reviews), an overview or kickoff meeting is held to put
everyone "in the picture" about what is to be reviewed and how the review is to be conducted.
This pre-meeting may be a walkthrough in its own right.
The preparation or individual checking is usually where the greatest value is gained from a
review process. Each person spends time on the review document (and related documents),
becoming familiar with it and/or looking for faults. In some reviews, this part of the process
is optional (at least in practice). In Inspection it is required.
Most reviews include a meeting of the reviewers. Informal reviews probably do not, and
Inspection does not hold a meeting if it would not add economic value to the process.
Sometimes the meeting time is the only time people actually look at the document.


Sometimes the meetings run on for hours and discuss trivial issues. The best reviews (of any
level of formality) ensure that value is gained from the meeting.
The more formal review techniques include follow-up of the faults or issues found to ensure
that action has been taken on everything raised (Inspection does, as do some forms of
technical or peer review).
The more formal review techniques collect metrics on cost (time spent) and benefits
achieved.
Roles and responsibilities
For any of the formal reviews (i.e. not informal reviews), there is someone responsible for the
review of a document (the individual review cycle). This may be the author of the document
(walkthrough) or an independent Leader or moderator (formal reviews and Inspection). The
responsibility of the Leader is to ensure that the review process works. He or she may
distribute documents, choose reviewers, mentor the reviewers, call and lead the meeting,
perform follow-up and record relevant metrics.
The author of the document being reviewed or Inspected is generally included in the review,
although there are some variants that exclude the author. The author actually has the most to
gain from the review in terms of learning how to do their work better (if the review is
conducted in the right spirit!).
The reviewers or Inspectors are the people who bring the added value to the process by
helping the author to improve his or her document. In some types of review, individual
checkers are given specific types of fault to look for to make the process more effective.
Managers have an important role to play in reviews. Even if they are excluded from some
types of peer review, they can (and should) review management level documents with their
peers. They also need to understand the economics of reviews and the value that they bring.
They need to ensure that the reviews are done properly, i.e. that adequate time is allowed for
reviews in project schedules.
There may be other roles in addition to these, for example an organisation-wide co-ordinator
who would keep and monitor metrics, or someone to "own" the review process itself - this
person would be responsible for updating forms, checklists, etc.
Deliverables
The main deliverable from a review is the changes to the document that was reviewed. The
author of the document normally edits these. For Inspection, the changes would be limited to
faults found as violations of accepted rules. In other types of review, the reviewers suggest
improvements to the document itself. Generally the author can either accept or reject the
changes suggested.
If the author does not have the authority to change a related document (e.g. if the review
found that a correct design conflicted with an incorrect requirement specification), then a
change request may be raised to change the other document(s).
For Inspection and possibly other types of review, process improvement suggestions are a
deliverable. This includes improvements to the review or Inspection process itself and also
improvements to the development process that produced the document just reviewed. (Note
that these are improvements to processes, not to reviewed documents.)
The final deliverable (for the more formal types of review, including Inspection) is the
metrics about the costs, faults found, and benefits achieved by the review or Inspection
process.


Pitfalls
Reviews are not always successful. They are sometimes not very effective, so faults that
could have been found slip through the net. They are sometimes very inefficient, so that
people feel that they are wasting their time. Often insufficient thought has gone into the
definition of the review process itself - it just evolves over time.
One of the most common causes for poor quality in the review process is lack of training, and
this is more critical the more formal the review.
Another problem with reviews is having to deal with documents that are of poor quality.
Entry criteria to the review or Inspection process can ensure that reviewers' time is not wasted
on documents that are not worthy of the review effort.
A lack of management support is a frequent problem. If managers say that they want reviews
to take place but don't allow any time in the schedules for them, this is only "lip service", not
commitment to quality.
Long-term, it can be disheartening to become expert at detecting faults if the same faults keep
on being injected into all newly written documents. Process improvements are the key to
long-term effectiveness and efficiency.
Inspection
Typical reviews versus Inspection
There are a number of differences between the way most people practice reviews and the
Inspection process as described in Software Inspection by Gilb and Graham, Addison-
Wesley, 1993.
In a typical review, the document is given out in advance, there are typically dozens of pages
to review, and the instructions are simply "Please review this."
In Inspection, it is not just the document under review that is given out in advance, but also
source or predecessor documents. The number of pages to focus the Inspection on is closely
controlled, so that Inspectors (checkers) check a limited area in depth - a chunk or sample of
the whole document. The instructions given to checkers are designed so that each individual
checker will find the maximum number of unique faults. Special defect-hunting roles are
defined, and Inspectors are trained in how to be most effective at finding faults.
In typical reviews, sometimes the reviewers have time to look through the document before
the meeting, and some do not. The meeting is often difficult to arrange and may last for
hours.
In Inspection, it is an entry criterion to the meeting that each checker has done the individual
checking. The meeting is highly focused and efficient. If it is not economic, then a meeting
may not be held at all, and it is limited to two hours.
In a typical review, there is often a lot of discussion, some about technical issues but much
about trivia. Comments are often mainly subjective, along the lines of "I don't like the way
you did this" or "Why didn't you do it this way?"
In Inspection, the process is objective. The only thing that is permissible to raise as an issue is
a potential violation of an agreed Rule (the Rulesets are what the document should conform
to). Discussion is severely curtailed in an Inspection meeting or postponed until the end. The
Leader's role is very important to keep the meetings on track and focused and to keep pulling
people away from trivia and pointless discussion.
Many people keep on doing reviews even if they don't know whether it is worthwhile or not.
Every activity in the Inspection process is done only if its economic value is continuously
proven.


Inspection is more
Inspection contains many mechanisms that are additional to those found in other formal
reviews. These include the following:
• Entry criteria, to ensure that we don't waste time Inspecting an unworthy document;
• Training for maximum effectiveness and efficiency;
• Optimum checking rate to get the greatest value out of the time spent by looking deep;
• Prioritising the words: Inspect the most important documents and their most important parts;
• Standards are used in the Inspection process; there are a number of Inspection standards also;
• Process improvement is built in to the Inspection process;
• Exit criteria ensure that the document is worthy and that the Inspection process was carried out correctly.
One of the most powerful exit criteria is the quantified estimate of the remaining defects per
page. This may be say 3 per page initially, but can be brought down by orders of magnitude
over time.
Inspection is better
Typical reviews are probably only 10% to 20% effective at detecting existing faults. The
return on investment is usually not known because no one keeps track even of their cost.
When Inspection is still being learned, its effectiveness is around 30% to 40% (this is
demonstrated in Inspection training courses). Once Inspection is well established and mature,
this process can find up to 80% of faults in a single pass, 95% in multiple passes. The return
on investment ranges from 6 to 30 hours returned for every hour spent.
The Inspection process
The diagram shows a product document infected with faults. The document must pass
through the entry gate before it is allowed to start the Inspection process. The Inspection
Leader performs the planning activities. A Kickoff meeting is held to "set the scene" about
the documents and the process.
The Individual Checking is where most of the benefits are gained. 80% or more of the faults
found will be found in this stage.
A meeting is held (if economic). The editing of the document is done by the author or the
person now responsible for the document. This involves redoing some of the activities that
produced the document initially, and it also may require Change Requests to documents not
under the control of the editor. Process improvement suggestions may be raised at any time,
for improvements either to the Inspection process or to the development process.
The document must pass through the Exit gate before it is allowed to leave the Inspection
process. There are two aspects to investigate here: is the product document now ready (e.g.
has some action been taken on all issues logged), and was the Inspection process carried out
properly? For example, if the checking rate was too fast, then the checking has not been done
properly.
A gleaming new improved document is the result of the process, but there is still a "blob" on
it. It is not economic to be 100% effective in Inspection. At least with Inspection you
consciously predict the levels of remaining faults rather than fallaciously assuming that we
have found them all!
How the checking rate enables deep checking in Inspection
There is a dramatic difference between Inspection and normal reviews, and it lies in the depth
of checking. This is illustrated by the picture of a document. Initially there are no faults visible.
Typically in reviews, the time available and the size of the document determine the checking
rate. So for example if you have 2 hours available for a review and the document is 100 pages
long, then the checking rate will be 50 pages per hour. (Any two of these three factors
determine the third.)
This is equivalent to "skimming the surface" of the document. We will find some faults - in
this example we have found one major and two minor faults. Our typical reaction is now to
think: "This review was worthwhile wasn't it - it found a major fault. Now we can fix that and
the two other minor faults, and the document will now be OK." Think: are we missing
anything here?
Inspection is different. We do not take any more time; instead, the optimum checking rate for
the type of document determines the size of the portion that will be checked in detail.
So if the optimum rate is one page per hour and we have two hours, then the size of the
sample or chunk will be 2 pages.
(Note that the optimum rate needs to be established over time for different types of document
and will depend on a number of factors, and it is based on prioritised words (logical page
rather than physical page). Of course it doesn't take an hour just to read a single page, but the
checking done in Inspection includes comparing each paragraph or sentence on the target
page with all source documents, checking each paragraph or phrase against relevant rule sets,
both generic and specific, working through checklists for different role assignments, as well
as the time to read around the target page to set the context. If checking is done to this level
of thoroughness, it is not at all difficult to spend an hour on one page!)
How does this depth-oriented approach affect the faults found? On the picture, we have gone
deep in the Inspection on a limited number of pages. We have found the major one found in
the other review plus two (other) minors, but we have also found a deep-seated major fault,
which we would never have seen or even suspected if we had not spent the time to go deep.
There is no guarantee that the most dangerous faults are lying near the surface!
When the author comes to fix this deep-seated fault, he or she can look through the rest of the
document for similar faults, and all of them can then be corrected. So in this example we will
have corrected 5 major faults instead of one. This gives tremendous leverage to the
Inspection process - you can fix faults you didn't find!
Inspection surprises
To summarise the Inspection process, there are a number of things about Inspection which
surprise people. The fundamental importance of the Rules is what makes Inspection objective
rather than a subjective review. The Rules are democratically agreed as applying (this helps
to defuse author defensiveness) and by definition a fault is a Rule violation.
The slow checking rates are surprising, but the value to be gained by depth gives far greater
long-term gains than surface-skimming review that miss major deep-seated problems.
The strict entry and exit criteria help to ensure that Inspection gives value for money.
The logging rates are much faster than in typical reviews (30 to 60 seconds; typical reviews
log one thing in 3 to 10 minutes). This ensures that the meeting is very efficient. One reason
this works is that the final responsibility for all changes is fully given to the author, who has
total responsibility for final classification of faults as well as the content of all fixes.
More information on Inspection can be found in the book Software Inspection, Tom Gilb and
Dorothy Graham, Addison-Wesley, 1993, ISBN 0-201-63181-4.
Static analysis
What can static analysis do?
Static analysis is a form of automated testing. It can check for violations of standards and can
find things that may or may not be faults. Static analysis is descended from compiler
technology. In fact, many compilers may have static analysis facilities available for
developers to use if they wish. There are also a number of stand-alone static analysis tools for
various different computer programming languages. Like a compiler, the static analysis tool
analyses the code without executing it, and can alert the developer to various things such as
unreachable code, undeclared variables, etc.
Static analysis tools can also compute various metrics about code such as cyclomatic
complexity.
Data flow analysis
Data flow analysis is the study of program variables. A variable is basically a location in the
computer's memory that has a name so that the programmer can refer to it more conveniently
in the source code. When a value is put into this location, we say that the variable is
"defined". When that value is accessed, we say that it is "used".

For example, in the statement "x = y + z", the variables y and z are used because the values
that they contain are being accessed and added together. The result of this addition is then put
into the memory location called “x”, so x is defined.
The significance of this is that static analysis tools can perform a number of simple checks.
One of these checks is to ensure that every variable is defined before it is used. If a variable is
not defined before it is used, the value that it contains may be different every time the
program is executed and in any case is unlikely to contain the correct value. This is an
example of a data flow fault. Another check that a static analysis tool can make is to ensure
that every time a variable is defined it is used somewhere later on in the program. If it isn't,
why was it defined in the first place? This is known as a data flow anomaly, and although it
can be perfectly harmless, it can also indicate that something more serious is at fault.
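As an illustration, consider this small hypothetical fragment (written in the C-like style that TSL also uses; all variable names are invented for the example):

    y = 5;
    x = y + z;   # "z" is used before it is ever defined: a data flow fault
    t = 10;      # "t" is defined but never used afterwards: a data flow anomaly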
Control flow analysis
Control flow analysis can find infinite loops, inaccessible code, and many other suspicious
aspects. However, not all of the things found are necessarily faults; defensive programming
may result in code that is technically unreachable.
Cyclomatic complexity
Cyclomatic complexity is related to the number of decisions in a program or control flow
graph. The easiest way to compute it is to count the number of decisions (diamond-shaped
boxes) on a control flow graph and add 1. Working from code, count the total number of IF's
and any loop constructs (DO, FOR, WHILE, REPEAT) and add 1. The cyclomatic
complexity does reflect to some extent how complex a code fragment is, but it is not the
whole story.
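For example, counting decisions in this small hypothetical fragment (names invented; report_msg simply stands for any statement):

    i = 0;
    while (i < 10)          # decision 1
    {
        if (i == 5)         # decision 2
            report_msg("midpoint");
        if (i > 7)          # decision 3
            report_msg("near the end");
        i++;
    }

Three decisions plus 1 gives a cyclomatic complexity of 4.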
Other static metrics
Lines of code (LOC or KLOC for 1000’s of LOC) is a measure of the size of a code module.
Operands and operators is a very detailed measurement devised by Halstead, but not much
used now. Fan-in is related to the number of modules that call (in to) a given module.
Modules with high fan-in are found at the bottom of hierarchies, or in libraries where they are
frequently called. Modules with high fan-out are typically at the top of hierarchies, because
they call out to many modules (e.g. the main menu). Any module with both high fan-in and
high fan-out probably needs re-designing.
Nesting levels relate to how deeply nested statements are within other IF statements. This is a
good metric to have in addition to cyclomatic complexity, since highly nested code is harder
to understand than linear code, but cyclomatic complexity does not distinguish them.
Other metrics include the number of function calls and a number of metrics specific to object-
oriented code.
Limitations and advantages
Static analysis has its limitations. It cannot distinguish "fail-safe" code from real faults or
anomalies, and may create a lot of spurious failure messages. Static analysis tools do not
execute the code, so they are not a substitute for dynamic testing, and they are not related to
real operating conditions.
However, static analysis tools can find faults that are difficult to see and they give objective
quality information about the code. We feel that all developers should use static analysis
tools, since the information they can give can find faults very early when they are very cheap
to fix.

WinRunner 7.0
 Developed by Mercury Interactive
 Functionality testing tool (not suitable for performance, usability and security testing)
 Supports client/server and web technologies (VB, VC++, Java, D2K, Power Builder, Delphi, HTML etc…)
 WinRunner won't support .Net, XML, SAP, PeopleSoft, Maya, Flash, Oracle applications etc…
 To support .Net, XML, SAP, PeopleSoft, Maya, Flash, Oracle applications etc… we can use QTP (Quick Test Professional)
 QTP is an extension of WinRunner.

WinRunner Recording Process:


Learning → Recording → Edit Script → Run Script → Analyze the Test Results

Learning: Recognition of the objects and windows in your application by the testing tool is
called learning.

Recording: The test engineer records the manual process in WinRunner to automate it.

Edit Script: The test engineer inserts the required check points into the recorded test script.

Run Script: The test engineer executes the automated test script to get results.

Analyze Results: The test engineer analyzes the test results to concentrate on defect tracking.
[Login window: User Id and Password fields (masked), Ok button]

Exp: Ok is enabled after entering user id and password.

Explain Icons in WinRunner

Note: WinRunner 7.0 provides an auto-learning facility to recognize the objects and windows
in your project without your interaction.

Every statement ends with ; as in C.

Test Script: A test script consists of Navigational Statements & Check Points. In WinRunner
the scripting language is called TSL (Test Script Language), and it is C-like.
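For instance, a minimal TSL script mixing navigational statements with one check point might look like this (the window and object names are invented for illustration):

    set_window("Login", 5);                  # navigational statement
    edit_set("User Id", "nagaraju");         # navigational statement
    button_check_info("Ok", "enabled", 1);   # check point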


Add-in Manager: This window provides the list of WinRunner-supported technologies with
respect to our purchased license.
Note: If all options in the Add-in Manager are off, by default WinRunner supports the VB
and VC++ interface (Win32 API).

Recording Modes: To record our business operations (navigations) in WinRunner we can use
2 types of recording modes:

1. Context Sensitive mode (default mode)

2. Analog mode

Analog Mode: To record mouse pointer movements on the desktop, we can use this mode. In
analog mode the tester maintains a constant monitor resolution and application position
during recording and running.
Application areas: digital signatures, graph drawing, image movements.

Note:
1. In analog mode, WinRunner records mouse pointer movements with respect to
desktop co-ordinates. For this reason, the test engineer keeps the corresponding
context sensitive mode window in its default position during recording and running.
2. If you want to use analog mode for recording, keep the monitor resolution
constant during recording and running.

move_locator_track(): WinRunner uses this function to record mouse pointer movements on
the desktop in one unit of time.

Syntax: move_locator_track(track number);

By default the track number starts with 1. It is based on the operation, not on time.

mtype(): WinRunner uses this function to record mouse button operations on the desktop.

Syntax: mtype("<T track number><k key on the mouse used>+/-");

Ex: mtype("<T20><kLeft>+");

Track number – the desktop co-ordinates over which you operate the mouse. It stores the
mouse co-ordinates; it is actually a memory location.

type(): We can use this function to record keyboard operations in analog mode.

Syntax: type("typed characters" / "ASCII notation");
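A recorded analog-mode fragment typically mixes these three functions, e.g. (the track numbers and text below are illustrative only):

    move_locator_track(1);    # mouse pointer movements in track 1
    mtype("<T2><kLeft>+");    # left mouse button operation at track 2
    type("hello");            # keyboard input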

Context Sensitive Mode: To record mouse and keyboard operations on our application build,
we can use this mode. It is the default mode. In general, the functionality test engineer creates
automation test scripts in context sensitive mode with the required check points. In this mode
WinRunner records our application operations with respect to objects and windows.
Ex:
Focus to Window       set_window("Window Name", time);
Text Box              edit_set("Edit Name", "Typed Characters");
Password Text Box     password_edit_set("Pwd Object", "Encrypted Pwd");
Push Button           button_press("Button Name");
Radio Button          button_set("Button Name", ON); / button_set("Button Name", OFF);
Check Box             button_set("Button Name", ON); / button_set("Button Name", OFF);
List/Combo Box        list_select_item("List1", "Selected Item");
Menu                  menu_select_item("Menu Name;Option Name");
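Put together, a context sensitive recording of a simple login operation could look like this sketch (all window and object names are assumed for illustration):

    set_window("Login", 5);
    edit_set("User Id", "nagaraju");
    password_edit_set("Password", "encrypted_pwd_here");   # encrypted value captured at record time
    button_set("Remember Me", ON);
    button_press("Ok");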

Base State: The application state from which a test starts is called the Base State.

End State: The application state at which a test stops is called the End State.
Call State: An intermediate state of the application between the base state and the end state is
a call state.

Functionality Testing Techniques:


Behavioral Coverage ( Object Properties Checking ).
Input Domain Coverage ( Correctness of Size and Type of every i/p Object ).
Error Handling Coverage ( Preventing negative navigation ).
Calculations Coverage ( correctness of o/p values ).
Backend Coverage ( Data Validation & Data Integrity of database tables ).
Service Levels ( Order of functionality or services ).
Successful Functionality ( Combination of above all )

Check points: WinRunner is a functionality testing tool; it provides a set of facilities to cover
the above sub tests.

To automate above sub tests, we can use 4 check points in WinRunner:


1. GUI check points
2. Bitmap check points
3. Data Base check points
4. Text check points

GUI Check point: To automate behavior of objects we can use this check point. It consists of
sub options.

1. For Single Property


2. For Object/Window
3. For Multiple Properties

For Single Property: To test a single property of an object we can use this option.

Navigation: select a position in script, Create Menu, GUI check point, For Single Property,
select the testable object (double click), select the required property with its expected value,
click Paste.

Ex: state of the Update Order button

Focus to Window      Disabled
Open a Record        Disabled
Perform a Change     Enabled

Syntax: <object type>_check_info("Object Name", "Property", Expected value);

Ex: button_check_info("Update Order", "enabled", 0);

If the checkpoint is for a numeric value, there is no need for double quotes.
If the checkpoint is for a string value, place the data between double quotes.
But WinRunner takes any value by default as a string with double quotes.

Problem:

Focus to Window – Item No should be focused.

Ok is enabled only after filling Item No & Quantity. (A sketch follows the window layout below.)

[Window "NagaRaju Shopping": Item No field, Quantity field, Ok button]
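A minimal sketch for this problem, assuming the object names shown in the window above:

    set_window("NagaRaju Shopping", 5);
    edit_check_info("Item No", "focused", 1);   # Item No should be focused
    button_check_info("Ok", "enabled", 0);      # Ok disabled before input
    edit_set("Item No", "1001");
    edit_set("Quantity", "2");
    button_check_info("Ok", "enabled", 1);      # Ok enabled after filling both fields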

Expected: The number of items in Fly To equals the number of items in Fly From minus 1,
when you select an item in Fly From.

[Window "NagaRaju Journey": Fly From list, Fly To list, Ok button]

Ex: if you select an item in one list box, then the number of items in the next list box
decreases by 1. (See the sketch below.)
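One possible sketch, using the list functions covered later in this material (the item name is illustrative):

    set_window("NagaRaju Journey", 5);
    list_get_info("Fly From", "count", c1);
    list_select_item("Fly From", "London");    # any item; name assumed
    list_get_info("Fly To", "count", c2);
    if (c2 == c1 - 1)
        report_msg("Fly To count is Fly From count - 1, as expected");
    else
        report_msg("Fly To count is wrong");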

Problem:
Focus to Window – Ok should be disabled.
Enter Roll No – Ok should be disabled.
Enter Name – Ok should be disabled.
Enter Class – Ok should be disabled.

[Window layouts: "NagaRaju Shopping" with Item No, Quantity, Ok; "NagaRaju Journey" with List1, List2, List3, Ok]

Problem: If Type is A, Age is focused. If Type is B, Gender is focused. If Type is C,
Qualification is focused. Else Others is focused. (Use a switch statement, as below.)

[Window: Type field; Age, Gender, Qualification, Others fields]

switch (x)    # x holds the captured Type value
{
    case "A": edit_check_info("Age", "focused", 1); break;
    case "B": edit_check_info("Gender", "focused", 1); break;
    case "C": edit_check_info("Qualification", "focused", 1); break;
    default:  edit_check_info("Others", "focused", 1);
}

[Window: List box, Text box, Ok button]

Exp: The selected item in the List box appears in the text box after clicking the Ok button.

Exp: The selected item in the Sample1 list box appears in the Sample2 text object after
clicking the Display button.

[Windows "Sample1" (List1, Display) and "Sample2" (Text, Ok)]

[Window "NagaRaju Employee": Emp No, Dept No, Ok; displays BSal and Comm]

Problem:
If basic salary >= 10000 then commission = 10% of basic salary.
Else if basic salary is between 5000 and 10000 then commission = 5% of basic salary.
Else if basic salary < 5000 then commission = Rs. 200.

Problem:
If Total >= 800 then Grade = A.
Else if Total is between 700 and 800 then Grade = B.
Else Grade = C.

[Window: Roll No field, Ok button, Grade field]

For Object/Window: To test more than one property of a single object, we can use this
option.

Ex: state of the Update Order object

Focus to Window      Disabled
Open a Record        Disabled
Perform a Change     Enabled & Focused

Syntax: obj_check_gui("Object Name", "Check List File.ckl", "Expected Values File.txt", time to create);
In the above syntax the check list file specifies the list of properties to test for a single object;
its extension is .ckl.
The expected values file specifies the list of expected values for those selected (testable)
properties; its extension is .txt.

Ex: obj_check_gui("Update Order", "list1.ckl", "gui1", 1);

For Multiple Objects: To test more than one property of more than one object in a single
checkpoint we can use this option. To create this checkpoint the tester selects multiple objects
in a single window.

Ex:
                     Insert Order   Update Order        Delete Order
Focus to Window      Disabled       Disabled            Disabled
Open a Record        Disabled       Disabled            Enabled
Perform a Change     Disabled       Enabled & Focused   Enabled

Navigation: select position in script, Create menu, GUI checkpoint, For Multiple Objects,
click Add, select the testable objects, right click to release, specify expected values for the
required properties of every selected object, click OK.

Syntax: win_check_gui("Window Name", "Check List File.ckl", "Expected Values File", time
to create);

Ex: win_check_gui("Flight Reservation", "list3.ckl", "gui3", 1);

Case Study: Which properties do you check for which objects?

Object Type        Properties

Push Button        Enabled, Focused
Radio Button       Status (ON, OFF)
Check Box          Status (ON, OFF)
List Box           Count (number of items in the list box), Value (current selected value)
Table Grid         Rows, Columns, Table Content
Text / Edit Box    Enabled, Focused, Value, Range, Regular Expression, Date Format, Time Format

Changing Check Points:

WinRunner allows us to perform changes in existing check points. There are 2 types of
changes to existing checkpoints, due to sudden project changes or tester mistakes.

1. Change expected values


2. Add new properties to test

Change expected values

WinRunner allows you to change the expected values in existing checkpoints.
Navigation: execute the test script, click Results, change the expected values in the results
window where required, click OK, re-execute the test script to get the right result.

Add new properties to test

Sometimes the test engineer adds extra properties to an existing checkpoint, because the test
is incomplete, through the navigation below.

Navigation: Create menu, Edit GUI checklist, select the checklist file name, click OK, select
the new properties to test, click OK to overwrite, change the run mode to Update, click Run
(the current values are selected as expected values), then click Run in Verify mode to get the
results, and change the results if required.

[Checklist dialog: properties such as Enabled (ON/OFF), Focused, Value, and their default values]

Running Modes in WinRunner:
Verify mode: in this mode WinRunner compares our expected values with the actual values.
Update mode: in this run mode, the current values are selected as expected values.
Debug mode: to run our test scripts line by line.

During GUI check point creation WinRunner creates the checklist files and expected values
files on the hard disk. WinRunner maintains the test scripts by default in the tmp folder:

Script: c:\program files\mi\wr\tmp\testname\script
Checklists: c:\program files\mi\wr\tmp\testname\chklist\list1.ckl
Exp values: c:\program files\mi\wr\tmp\testname\exp\gui1

Input Domain Coverage: Range and Size


Navigation: Create Menu, GUI check point, For Object/Window, select the object, select the
Range property, enter From & To values, click OK.

Syntax: obj_check_gui("Object Name", "Check List File.ckl", "Expected Values File", time to create);
Here the check list file records that the Range property is being tested, and the expected
values file stores the From & To values.

Ex: obj_check_gui("Update Order", "list1.ckl", "gui1", 1);

NagaRaju Sample

Age

Input Domain Coverage: Valid and Invalid Classes

Navigation: Create Menu, GUI check point, For Object/Window, select the object, select the
Regular Expression property, enter the expected expression, click OK.

Syntax: obj_check_gui("Object Name", "Check List File.ckl", "Expected Values File", time to create);
Here the check list file records that the Regular Expression property is being tested, and the
expected values file stores the expression.

Ex: obj_check_gui("Update Order", "list1.ckl", "gui1", 1);

Problem: The Name text box should allow only lower case characters.

[Window "NagaRaju Sample": Name field]

1. Alphabets in lower case with initial capital only
2. Alphanumeric, starting and ending with alphabets only
3. Alphabets in lower case, but starting with R and ending with o only
4. Alphabets in lower case with an underscore in the middle
5. Alphabets in lower case with a space and an underscore in the middle
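One possible expected expression for each requirement (these are illustrative readings; the exact expressions depend on the specification):

    1. [A-Z][a-z]*
    2. [a-zA-Z][a-zA-Z0-9]*[a-zA-Z]
    3. R[a-z]*o
    4. [a-z]+_[a-z]+
    5. [a-z]+[ _][a-z]+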


Bitmap Check Point: This is an optional checkpoint in a functionality testing tool. The tester
can use this checkpoint to compare images, logos, graphs and other graphical objects (like
signatures).

This checkpoint consists of two sub types:

1. For Object/Window (entire image testing)
2. For Screen Area (part of image testing)

These options support testing on static images only. WinRunner doesn't support dynamic
images developed using Flash, Maya…

For Object/Window: To compare our expected image with actual image in your application
build, we can use this option.

Navigation: select a position in script, Create menu, Bitmap checkpoint, For Object/Window,
select the image object.

Syntax: obj_check_bitmap("Image Object Name", "Expected image file.bmp", time to create the check point);

Ex: win_check_bitmap("About Flight Reservation System", "Img1", 1);

Run on different versions.

Expected – Record time


Actual – Run time
Differences – what are differences

For Screen Area (Part of Image Testing): To compare our expected image part with actual
image in your application build, we can use this option.

Navigation: select a position in script, Create menu, Bitmap checkpoint, For Screen Area,
select the required region in the testable image, right click to release.

Syntax: obj_check_bitmap("Image Object Name", "Image file.bmp", time to create the check point, x, y, width, height);

win_check_bitmap("About Flight Reservation System", "Img2", 1, 191, 29, 122, 71);

Run on different versions.

Expected – Record time


Actual – Run time
Differences – what are differences

Note: TSL supports a variable number of parameters, like function overloading.

For every project's functionality testing, the GUI checkpoint is obligatory; whether the
bitmap check point is used depends on the requirements.

Database Check Point: To conduct backend testing using WinRunner we can use this
option.


Back End Testing: Validating the completeness and correctness of front end operations'
impact on the backend tables. This process is also known as database testing. In general,
backend testing is also described as validation of data and integrity of data.

To automate this test, Database checkpoint provides three sub options


1. Default Check (Depends on Content)
2. Custom Check (Depends on rows count, columns count and content)
3. Runtime Record Check (New option in WinRunner7.0)

[Diagram: Front End (application) connects through the DSN to the Back End (database)]

Default Check: To check data validation and data integrity in database, depends on content,
we can use this option.

DSN: Data Source Name. It is a connection string between front end and back end. It will
maintain the connection process.

Steps:
1. Connect to the database
2. Execute the select statement
3. Return results in Excel Sheet
4. Analyze the results manually

[Diagram: the Database Check Point wizard connects through the DSN – (1) front end
operation on the application, (2) select statement against the back end database]

In bitmap checking we test between two versions of images.

In GUI checking we test the same application, but against expected behavior.

In database checking we test twice on the original data.

To conduct testing, test engineer collects some information from development team.

• Connection String or DSN


• Table definitions or Data dictionary
• Mapping between front end forms and backend tables.

Database Testing Process:

Create a Database checkpoint (the current content of the database is selected as Expected).
Insert / Delete / Update operations through the front end.
Execute the Database checkpoint (the current content of the database is selected as Actual).
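A minimal sketch of how this looks in a script (object names and the checklist/result file names follow the examples in this material; the db_check statement is created before the run and captures the expected content at creation time):

    set_window("Flight Reservation", 10);
    menu_select_item("File;New Order");    # insert a record through the front end
    edit_set("Name", "NagaRaju");
    button_press("Insert Order");
    db_check("list5.cdl", "dbvf5");        # at run time the current content becomes Actual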

Navigation: As in GUI & Bitmap checkpoints we start by selecting the position in the script.
Create Menu, Database Checkpoint, Default check, specify the connection to the database
(ODBC / Data Junction), select the SQL statement (c:\\PF\MI\WR\temp\testname\msqr1.sql),
click Next, click Create to select the DSN, write the select statement (select * from orders),
click Finish.

Syntax: db_check("Check List File.cdl", "Query Result File.xls");

Ex: db_check("list5.cdl", "dbvf5");

Criteria: expected difference – Pass; wrong difference – Fail.

What was updated: Data Validation
Who and when updated: Data Integrity

New record – green colour.
Modified record – yellow colour.
Custom Check: The test engineer uses this option to conduct backend testing depending on
rows count, columns count, table content, or a combination of these three properties.
Default checkpoint: Content is the property, and the content is the expected value.
Custom checkpoint: e.g. Rows Count is the property, and the number of rows is the expected value.
During custom check point creation, WinRunner provides a facility to select these properties.
In general test engineers use the default check option most of the time, because content is
also suitable for finding the number of rows and columns.
Syntax: db_check("Check List File.cdl", "Query Result File.xls");
Ex: db_check("list11.cdl", "dbvf8");


[Illustration: front end objects X and Y map to back end columns A and B; rows 1, 2, 3 match]

Front End – Programmers (programming division)
Back End – Database Administrators (DB division)

The front end object names should be understandable to the end user. (WYSIWYG)

Runtime Record Checkpoint: Sometimes the test engineer uses this option to find the
mapping between front end objects and backend columns; it is an optional checkpoint.

Navigation: Create Menu, Database Checkpoint, runtime record check, specify SQL
statement, click next, click create to select DSN, write select statement with doubtful columns
( select orders.order_number, orders.customername from orders), select doubtful front end
objects for that columns, click next, select any of below options
• Exactly one match
• One or more match
• No match record
Click finish.

Note: For custom and default check points you have to put ; at the end of the SQL statement,
but in the runtime record check point there is no need to add it.

Syntax: db_record_check("Check List File Name.cvr", DVR_ONE_MATCH / DVR_ONE_MORE_MATCH / DVR_NO_MATCH, Variable);

Ex: db_record_check("list1.cvr", DVR_ONE_MATCH, record_num);

In the above syntax the checklist specifies the expected mapping to test, and the variable
receives the number of records matched. If the mapping is correct, the matching values will
be present.

The runtime record checkpoint allows you to perform changes in the existing mapping,
through the navigation below.

Create menu, Edit runtime record list, select the checklist file name, click Next, change the
query (if you want to test new columns), click Next, change the object selection for new
objects' testing, click Finish.

Synchronization:
To define the time mapping between testing tool and application, we can use synchronization
point concepts.

Wait(): To define fixed waiting time during test execution, test engineer use this function.

Syntax: wait (time in seconds);

Ex: wait (10);

Drawback: This function defines a fixed waiting time, but our applications take variable
times to complete, depending on the test environment.

Change Runtime Settings:

During test script execution, WinRunner does not depend on recording-time speeds. To
maintain a waiting state in WinRunner, we can use the wait() function or change the runtime
settings.

The runtime settings maintain mainly two time parameters:

Delay: time to wait between window focusing (window synchronization).
Timeout: how long the tool waits for context sensitive statements and check points to execute.

The maximum time a window-based statement can wait before failing is Delay + Timeout;
for an object-based statement it is Timeout.

Navigation: Settings, General options, Run tab, change Delay & Timeout depending on the
requirement, click Apply, click OK.

Ex: with Delay = 1 sec and Timeout = 10 sec:

1. set_window("", 6);                      – maximum wait = Delay + Timeout = 11 sec
2. button_press("Ok");                     – maximum wait = Timeout = 10 sec
3. button_check_info("Ok", "enabled", 1);  – maximum wait = Timeout = 10 sec

Drawbacks in Change Settings: If you change the settings, they are applied to each and every
test, without per-test control. Because of this, the change runtime settings option is rarely
used; nowadays most test engineers use the For Object/Window Property synchronization
point to avoid time-mismatch problems.

For Object/Window Property:

Navigation: Select position in script, Create menu, Synchronization point, For Object/Window
Property, select the object, specify the property with its expected value (Ex: status / progress
bar – 100% completed and enabled…), specify the maximum time to wait, click OK.

Syntax: obj_wait_info("Object Name", "Property", Expected Value, Maximum time to wait);

Ex: obj_wait_info("Insert Done...","enabled",1,10);

For Object / Window Bitmap:

Sometimes test engineer defines time mapping between tool and project depends on images
in that application.

Navigation: Select position in script, Create menu, Synchronization point, For Object/Window
Bitmap, select the required image.

Syntax: obj_wait_bitmap("Object Name", "image1.bmp", Maximum time to wait);

For Screen Area Bitmap:


Sometimes test engineer defines time mapping between tool and project depends on image
area in that application.

Navigation: Select position in script, Create menu, Synchronization point, For Screen Area
Bitmap, select the required image region, right click to release.

Syntax: obj_wait_bitmap("Object Name", "image1.bmp", Maximum time to wait, x, y, width, height);

Text Check Point:

To cover calculations and other text-based tests, we can use this option in WinRunner.

To create this type of check point, we use the "Get Text" option from the Create menu.

This option consists of two sub options:

1. From Object / Window
2. From Screen Area


From object / window: To capture object values into variables we can use this option.

Navigation: Create Menu, Get Text, From Object / Window, select required object (Dbl
Click)

Syntax: obj_get_text("Object Name", Variable);

Ex: obj_get_text("Flight No:", text);

Syntax: obj_get_info("Object Name", "Property", Variable);

Ex: obj_get_info("ThunderTextBox_3","value",v1);

From Screen Area: To capture static text on your application build screen we can use this
option.

Navigation: Create Menu, Get Text, From Screen Area, select the required region to capture
the value [+ sign], right click to release.

Syntax: obj_get_text("Object Name", Variable, x1,y1,x2,y2);

Ex: obj_get_text("Flight No:", text,2,3,50,60);

[Window "NagaRaju Sample": Item No and Quantity inputs, Ok button; Price $ and Total outputs]
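A sketch of a calculation check on this window (object names assumed; the captured values are treated as plain numbers):

    set_window("NagaRaju Sample", 5);
    edit_set("Item No", "1001");
    edit_set("Quantity", "3");
    button_press("Ok");
    obj_get_text("Price", price);
    obj_get_text("Total", total);
    if (total == price * 3)               # Exp: total = price * quantity
        report_msg("Total calculation correct");
    else
        report_msg("Total calculation wrong");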

Retesting: Re-execution of our test on the same application build with multiple test data is
called retesting. In WinRunner, retesting is also called Data Driven Testing (DDT): the data is
driven (changed) to test the application.


In WinRunner, test engineers conduct retesting in 4 ways:

1. Dynamic test data submission
2. Through a flat file (notepad)
3. From front end grids (list box)
4. Through an excel sheet

In the first type the tester supplies values during test execution (like scanf() in C); the
remaining three types run without tester interaction.

Dynamic test data submission: To conduct retesting to validate functionality, the test
engineer submits the required test data to the tool dynamically.
To read keyboard values during test execution, the test engineer uses the TSL statement below.

Syntax: create_input_dialog(“ Message “);

Ex: create_input_dialog(“ Enter Your Account Number : “);

[Diagram: keyboard input feeds the test script, which drives the build – Multiply window with
No1, No2, Multiply button, Result]

Exp: res = no1 * no2
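A sketch of this dynamic retest (window and object names assumed):

    no1 = create_input_dialog("Enter No1:");
    no2 = create_input_dialog("Enter No2:");
    set_window("Multiply", 5);
    edit_set("No1", no1);
    edit_set("No2", no2);
    button_press("Multiply");
    obj_get_text("Result", res);
    if (res == no1 * no2)
        report_msg("Multiplication correct");
    else
        report_msg("Multiplication wrong");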

[Exercise window: Item No and Quantity inputs, Ok button, Price $ and Total $ outputs]


tl_step(): "tl" stands for test log, i.e. the test result. We can use this function to define a
user-defined pass or fail message.

Pass – green – 0
Fail – red – 1
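For example:

    if (res == no1 * no2)
        tl_step("multiply check", 0, "Result is correct");    # 0 = pass (green)
    else
        tl_step("multiply check", 1, "Result is wrong");      # 1 = fail (red)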

password_edit_set("pwd", password_encrypt(y));

[Exercise windows: a login form (User Id, Password, Login, Next) and "Sample1"/"Sample2"
windows (Display, Text1, Text2, Ok)]

Problem:
First enter EmpNo and click the Ok button. The window then displays BSal, Comm and GSal.
Exp: gsal = bsal + comm.
If bsal >= 15000 then comm is 15% of bsal.
If bsal is between 8000 and 15000 then comm is 5% of bsal.
If bsal < 8000 then comm is 200.
(A sketch follows below.)
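A sketch for this problem, assuming the displayed fields can be captured with obj_get_text and treated as numbers:

    obj_get_text("Bsal", bsal);
    obj_get_text("Comm", comm);
    obj_get_text("Gsal", gsal);
    if (bsal >= 15000)
        ecomm = bsal * 0.15;
    else if (bsal >= 8000)
        ecomm = bsal * 0.05;
    else
        ecomm = 200;
    if (comm == ecomm && gsal == bsal + comm)
        tl_step("salary check", 0, "commission and gross salary correct");
    else
        tl_step("salary check", 1, "commission or gross salary wrong");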


Through flat file (notepad)

Sometimes the test engineer conducts data driven testing depending on multiple test data kept
in flat files (like notepad .txt files).

To manipulate file data for testing, the test engineer uses the TSL functions below.

file_open(): To load the required flat file into RAM with the specified permissions, we can
use this function.
Syntax: file_open("Path of the File", FO_MODE_READ / FO_MODE_WRITE / FO_MODE_APPEND);

file_getline(): We can use this function to read a line from an opened file.
Syntax: file_getline("Path of the File", Variable);
As in C, the file pointer is incremented automatically.

file_close(): We can use this function to swap an opened file out of RAM.
Syntax: file_close("Path of the File");

file_printf(): We can use this function to write specified text into a file opened in WRITE or
APPEND mode.
Syntax: file_printf("Path of the File", "Format", values or variables to write);

%d - integer, %s - string, %f – floating point, \n – new line, \t - tab, \r – carriage return

Substr: we can use this function to separate a substring from given string.
Syntax: Substr (main string, start position, length of substring);

Split: we can use this function to divide a string to field:


Syntax: Split(main string, array name, separator);
In the above syntax separator must be a single character.

File-compare: to compare two file contents.


Syntax: file_compare(“ path of file1”, “ path of file2”, “path of file3”);
File3 is optional. And it specifies concatenate of content of both files.

Page 69 of 132
Software Testing Material

Build
File Values

.txt
Test Script

No1

No2

Multiply

Result

Exp: res = no1 * no2

Item No Quantity

Ok

Price $ Total $

User Id

Password

Login Next

Page 70 of 132
Software Testing Material

From Front End Grids (ListBox):


Sometimes test engineer conducts retesting depends on multiple test data objects (like list
box).
To manipulate file data for testing test engineer uses below TSL functions

list_get_item(): We can use this function to capture specified list box item through item
number.

list_get_item(“ListBox Name”,Item No,Variable);

list_select_item(): We can use this function to select specified list box item through
given variable.

list_select_item(“ListBox Name”,Variable);

list_get_info(): We can use this function to information about the specified property(like
enabled, focused, count) of list box item into given variable.

list_get_info(“ListBox Name”, Property, variable);

NagaRaju Journey

Fly From

Fly To

Ok
Page 71 of 132
Data
Build
Software Testing Material

Test Script

Sample 2 Sample 1

Display Text1

Text2 Ok

NagaRaju Sample1 NagaRaju Sample2

List1 Display

Text
Ok

Type

Age Gender Qualification Others

Data Driven Testing: In generally test engineers are creating data driven tests, depends on
excel sheet data.

Page 72 of 132
Software Testing Material

Loop

--
-- Build
Test Data --

Excel Sheet
Data
Test Script

From Excel Sheet:


In general test engineers are creating retest test scripts depnds on multiple test data in excel
sheet. To generate this type of script, test engineer use data driven test wizard. In this type of
retesting, test engineer fills excel sheet with test data in two ways.

1. From data base tables using select statement (Back End)


2. Our own test data

Navigation:
Create test script for one input, tools menu, data driven wizard, click next, browse the
path of the excel sheet ( path ), specify variable name to assign path of excel sheet ( by
default table as variable ), select add statements to create ddt, select import data from
database, optimized text 1. line by line 2. automatically, click next, specify connection
to database, specify database connection (ODBC/Data Junction), select specify sql
statement mssql1.sql , click next, click create to select dsn (machine data sourse –
flight32), write select statement to capture database content for testing into excel sheet,
specify position to replace excel sheet column in ur test script, select show data table
now, click finish.

Test
Script
Col1 Col2 Col3

C3 = c1 + c2

Problems:

1. Prepare a data driven program to find factorial of given number. Write result into
same excel sheet.

2. Prepare a TSL script to write a list box item into excel sheet one by one.

Page 73 of 132
Software Testing Material

Ddt_open(): We can use this function to open excel sheet into Ram. In specified mode.

Syn: ddt_open(“path of file”, DDT_MODE_READ/ DDT_MODE_READWRITE)

This function will returns E_FILE_OPEN when that file is opened into RAM. Else it returns
E_FILE_NOT_OPEN.

Ddt_update_from_db(): To extend excel sheet data depends on dynamic changes in the


database ( Insert , Delete, Update

Syn: Ddt_update_from_db(“path of excel sheet”, “path of query file”, variable);

In the above syntax variable specifies that how many rows newly altered.

Ddt_save(): To save recent modifications in excel sheet.


Ddt_save(“ path of excel sheet”);

Ddt_get_row_count(): To find the no of rows in excel sheet.

Ddt_get_row_count(“path of excel sheet”, variable)


Var stores the no of rows in sheet.

Ddt_set_row(): To point a row in excel sheet.


Ddt_set_row(“path of excel file”,row no):

Ddt_val(): To read a value from specified column & pointed row.


Ddt_val(“path of excel file”,col no):

Ddt_set_val(): To write a value into a specified column and pointed row.


Ddt_set_val(“path of excel file”, “col name”, value or variable):

Ddt_close(): To swap out excel sheet from ram


Ddt_close(“path of excel file”):

Write a program to write list box items into a excel sheet one by one.

Test Suite / Test Batch:- Arranging all tests in one proper order based on their functionality.
It gives what test output is used as a input to all other values.

Batch Testing: In general test engineers are executing their scripts as batches. Every batch
consists of a set of tests, they all are dependent.
In every batch end state of one test is base state to next test. When you are executing our tests
as batches you are getting a chance to increase our probability of defect detection.

Syntax: call “test name” ()


call “path of the test”();
In the above syntax we can use first one, when calling and called tests both are in the same
folder. We can use second syntax when both are in different folders.

Page 74 of 132
Software Testing Material

Calling Test Test Name1

Call TestName() ---


--- ---
--- ---
---

Main Test SubTest

Parameter passing: Winrunner allows you to pass arguments between, calling test to called
test, or main test to subtest.

Navigation: Open subtest, file menu, test properties, select parameters table, click add to
create more parameters, click apply, click ok, use that parameters in required place to test
script.

From the above model main test is passing values to subtest. To receive that values, subtest
maintains a set of parameter variables.

Data Driven Batch Test: WinRunner allows you to execute our batches with multiple test
data.

Calling Test Test Name n = 10

Call TestName(n) ---


--- ---
--- ---
---

Main Test SubTest

texit(): sometimes test engineers are using the statement in test script to stop test execution in
the middle of the process.

Treturn(): we can use this statement to return a value from a called test to a calling test.

Treturn(variable or value);
Treturn(10);

Page 75 of 132
Software Testing Material

Calling Test Test Name n = 10

Temp = Call TestName(n) Edit_set(“”,n);


If(temp==1) If(condition)
Printf(); Treturn(0)
Else Else
Printf(); Treturn(1);

Main Test SubTest

Silent Mode: In general winrunner returns pause message, any standard checkpoint is failed
during test execution. If u want to execute our tests scripts without any initiation when a
checkpoint is failed we can follow below navigation to define silent mode.

Navigation: Settings, general options, run tab, select “run in batchmode” option, click apply,
click ok.

Fail Test1 Next Enabled

Test2

Test3

Test4

Fail Test1 Sample


Windo
w
appear
s
Test2

Test3

Test4
Page 76 of 132
Software Testing Material

window appears:
if (win_exists(“sample”) == E_OK)

win_exists() we can use this function to find existence of a window. In the desktop in min,
max or hidden position.

Syn: win_exists(“ window name “, time);


Time – is optional.

Homework:

Login after 5 secs.


If next enabled go to next window.
Else try for other user.

Shopping:
Prepare above batch test for ten users which information available in excel sheet during this
batch execution tester passing item no & quantity as parameters.

User Defined Functions: Like as programming languages winrunner also provides a facility
to create user defined functions. In TSL user defined functions are created by test engineer to
initiate repeatable navigation.

In the above example, test engineer creates four automation test scripts to test four different
functionalities depends on functionality dependency. Test engineers are calling this login
process as base state.

Public / static function function name(in/out/inout argument name, ……)


{
Repeatable Navigation

return (value or variable )


}

if u want to create a user defined function to maintain end state of one time execution is base
state to next execution we can use static functions.

But static maintains constant locations for internal variables in that current test execution. Out
put of one test execution is input to other test.

Page 77 of 132
Software Testing Material

a = 100
Static
a=0
---
---
---
a = 100
Test

Note1: User Defined Functions allows only context sensitive statements and Control
statements and doesn't allow check points and Analog statements.

Note2: In batch testing one test calling other test through saved test name. One test invoking
one function depends on function name. to call one function in test, that function .exe must
reside in RAM.
Public function add (in a, in b, out c)
{
c = a + b;
}

Calling Test:
X = 6;
Y = 6;
Add( x , y , z );
Printf(z);

Page 78 of 132
Software Testing Material

Public function add (in a, in b)


{
c = a + b;
return c;
}

Calling Test:
X = 6;
Y = 6;
Z = Add( x , y);
Printf(z);

Public function add (in a, inout b)


{
c = a + b;
}

Calling Test:
X = 6;
Y = 6;
Add( x , y );
Printf(y);

In - general args
Out – return values
Inout – both

Return: to return one value

Note: udf allow only cs statements & control stmts and doesn't allow check points & analog
statements.

Compiled Module: Open winrunner and build, click new in winrunner, record repeatable
navigations as user defined functions, save that test in dat folder, file menu, test properties,
general tab, change test type to compiled module, click apply, click ok, write load() statement
of that compiled module in startup script of winrunner.

Note: WinRunner maintains a default program as a startup script. This script executed
automatically when u launching winrunner. In this script we can write load() statement to
load your function.

Page 79 of 132
Software Testing Material

Load(“ Name of the compiled Module”, 0/1,0/1)

0-User Defined compiled module


1-system Defined compiled module

0-Path appears in the winrunner window menu


1-Hides the path

unload(): We can use this function to unload unwanted functions from RAM.

Syntax: unload(“ Path of the Compiled Module “, “ Unwanted Function Name “);

Reload(): We can use this function to reload, unloaded functions again.

Syntax: reload(“ Path of the compiled Module”, 0/1,0/1)

0-User Defined compiled module


1-system Defined compiled module

0-Path appears in the winrunner window menu


1-Hides the path

Predefined Functions: These functions are also known as built in functions or system
defined functions. WinRunner provides a facility to search required tsl function in a library,
called function generator.

To search for a required function in function generator, we can follow below navigation.
Create menu, insert function, from function generator, select required category, select
required function depends on description, enter arguments, click paste.

invoke_application(): WinRunner allows you to open a project automatically.


invoke_application("Path of .exe", "commands", "working directory" , SW_SHOW /
SW_HIDE / SW_MINIMIZE / SW_RESTORE /SW_SHOWMAXIMIZED /
SW_SHOWMINIMIZED / SW_SHOWMINNOACTIVE / SW_SHOWNOACTIVE);

Commands – Used in X Runner for Unix OS.

Working directory – At the time of running the temporary files are stored in this directory. If
u didn’t specify any directory by default it takes c:\windows\temp folder.

Executing a Prepared Query:

Db_connect(): We can use this function to connect to database using existing DSN or
Connection.

Syntax: db_connect(“Session Name”, “DSN=*******”);

Ex: db_connect("Query1","DSN=Flight32");

Page 80 of 132
Software Testing Material

Db_execute_query(): We can use this function to execute required “Select” statement on


connected database.

Syntax: db_execute_query(“Session Name”,”DSN=******”,variable”);

Ex: db_execute_query("Query1","select * from orders where order_number <= "&x,rno);

Db_write_records(): We can use this function to write query results into specified file.

Syntax: db_write_records(“Session Name”, “File Path”, TRUE/FALSE, NO_LIMIT”);

Ex: db_write_records("Query1","nrdbc1.xls",TRUE,NO_LIMIT);

Extra Functions in WinRunner:


Some times test engineers are adding user defined function names to generator to maintain
user defined functions for future references.

To do this task, we can use below DSN statements.


Generator_add_category():- We can use this function to create a new category generator.
Syntax: Generator_add_category(“ Category Name “);

Ex: Generator_add_category(“ NagaRaju “);

Generator_add_function: we can use this function to add your user defined function name to
all functions category.

Syntax: generator_add_function(“Function Name”, “Description”, arity, “Argument name”,


“argument type”, “default value”, - - - - -);

Generator_add_function(“ name”, “description”, 5, “a”, “browse()”, “”, “b”,


“point_window()”, “”, “c”, “point_object”, “”, “d”, “select_list(0 1 2 3 4 5)”, “”, “e”,
“type_edit”, “”);
Browse() - is for file path
Point_window() – is for window message
Point_object() is for object types
Select_list() is for selecting list of items
Type_edit() is for if we no need of all out to select we take this function (by default we use
space).

Generator_add_function_to_category():
Generator_add_function_to_category(“category name”, “function name”):
Note:
We can execute above third function after completion of second function execution.
We can write above three statements in start up script of WinRunner
Select TSL functions for:

1. Prepare TSL to execute below Prepared Query. (select * from orders where order_number
<= x and order_number >= y)

2. Change time out without using settings

Page 81 of 132
Software Testing Material

Setval(“time out”, time);


System category function.

3. Find parent directory of WinRunner(Where WinRunner installed in your computer)


Getenv(“M_HOME”);
Getenv(“M_ROOT”);

4. Point SystemDate:
Get_time (only time not date)
Time_str

5. What is the difference between invoke_application(); and system();


One .exe is enough for invoke_application.
To open one application system() means through title of the software.

System category there are 8 functions usually in interviews

Syntax:
1. dos_system(): To execute DOS commands.
2. time_str(): To capture system date with time.
3. get_time(): To capture system time value.
4. getvar(): To capture system variable values ex: Timeout, delay
5. setvar(): To change sytem variable values
6. getenv(): To capture environment information ex: m_home(), m_root()
7. system(): To open an application using title of the software.
8. invoke_application(): To open an application using .exe path.

Built in functions / Predefined Functions:


All TSL language functions are available in “Function Generator”. Test engineer select
required function depends on requirements (depends on Automation needed) through below
navigation.
Create menu, insert function, from Function Generator, Select Category, Select Function
name with arguments, Click Paste.

Clip board testing: A tester conduct a test on selected content of an object is called clip
board testing.

1.Edit_get_selection(“ obj name”, var) – specified selected objects.

Difference between edit_get_selection() and obj_get_text().

Some part of application can be tested ie called as clip board testing and All entire application
can be tested is called as general testing.

win_exists() we can use this function to find existence of a window. In the desktop in min,
max or hidden position.

Syn: win_exists(“ window name “, time);


Time – is optional.

Page 82 of 132
Software Testing Material

Open Application: WinRunner provides a facility to open your project automatically (System
Category Function).

invoke_application(): WinRunner allows you to open a project automatically.


invoke_application("Path of .exe", "commands", "working directory" , SW_SHOW /
SW_HIDE / SW_MINIMIZE / SW_RESTORE /SW_SHOWMAXIMIZED /
SW_SHOWMINIMIZED / SW_SHOWMINNOACTIVE / SW_SHOWNOACTIVE);

SW- Set Window


SW_SHOW – Focus to Window.

Commands – Used in X Runner for Unix OS.

Working directory – At the time of running the temporary files are stored in this directory. If
u didn’t specify any directory by default it takes c:\windows\temp folder.

Executing a Prepared Query:

Db_connect(): We can use this function to connect to database using existing DSN or
Connection.

Syntax: db_connect(“Session Name”, “DSN=*******”);

Ex: db_connect("Query1","DSN=Flight32");

Db_execute_query(): We can use this function to execute required “Select” statement on


connected database.

Syntax: db_execute_query(“Session Name”,”DSN=******”,variable”);


Ex: db_execute_query("Query1","select * from orders where order_number <= "&x,rno);

Db_write_records(): We can use this function to write query results into specified file.

Syntax: db_write_records(“Session Name”,”Destination File Path”, TRUE/FALSE”,


NO_LIMIT);
TRUE – With Header. FALSE – Without Header
Ex: db_write_records(“Query1”,”Nr.txt”, TRUE”, NO_LIMIT);

Db_disconnect(): We can use this function to remove database connection establishment.

Syntax: db_disconnect(“Session Name”);


Ex: db_connect("Query1");

3. win_exists():
4. open application,
execute prepared query(db_disconnect)

Learning: In general a test automation process starts with learning. Learning means that
recognization of objects and windows in your application by testing tool.

Page 83 of 132
Software Testing Material

WR 7.0 supports auto learning and pre learning.

Auto Learning: During recording, WR recognizes objects and windows with respect to
tester operations. This type of auto recognization is called as Auto Learning.

Steps:
Start recording
Recognize object
Script generation
Catch entries
Catch objects
1
WinRunner Build

Button_press(“Ok”);

Ok

2
5
3

Logical Name: Ok
{ GUI Map
class: Push Button
label: Ok
}

So before closing the WinRunner we have to do two tasks.

Save script
Save GUI Map

Disadvantage of winrunner is without entries it won’t works.

Note: If GUI Map is empty, our existing test scripts are not able to execute. To maintain
these entries longtime along with our test scripts, we can follow two possible administrations.

Global GUI MAP File: From the above model test engineer creates a global GUI Map file
and maintains explicitly in hard disk. By default WinRunner allows you to create global GUI
Map file.

Page 84 of 132
Software Testing Material

Test1

--
-- GUI Map
--
Test2 --
-- Save --
-- -- --
-- --
Open

Test3 .gui (HDD)

--
-- Explicitly (Using File Menu in
-- GUI MAP Editor).

Per Test Mode: It is a new option in WinRunner 7.0. In this mode winrunner implicitly
handles entries in GUI MAP.

From the above model WR maintains auto process for save and open of entries with respect
to test. Due to this reason, WR increase entry redundancy (Repetition) when an object /
window participate in more than one test. By default WR follows Global GUI Map. If you
want to change to per test mode, we can follow below navigation.

Navigation: Settings, general options, environment tab, select the gui map file per test, click
apply, click ok.

Test .gui
1 GUI Map

-- --
-- -- --
-- -- --
.gui
Test --
2 Save
-- --
-- Ope --
n .gui
Test
3
-- --
-- Implicitly --
-- --
Pre Learning: Sometimes winrunner7.0 testers are also follows pre learning concept, before
start recording. Due to this reason, pre learning is only suitable for global GUI Map.

Page 85 of 132
Software Testing Material

Navigation: open project, create menu in winrunner, Rapid Test Script Wizard, click next,
show application main window, click next, (Select No Tests), click next, specify sub menu
symbols (.., >>, ->), click next, specify learning mode (Express or Comprehensive), learn,
(after learning) say yes or no to open project automatically, click next, remember the paths of
start up script and gui map file, click ok.

In general test engineers are following the auto learning concept in global GUI map file. They
are not using auto learning with per test mode and pre learning regularly.

Difference between Auto learning and Pre learning:

Auto Learning Pre Learning


During Recording Before Recording
No need for extra navigation Using RTSW
Global GUI Map file or Per Test Mode Per Test Mode

Depends on test requirements, winrunner test engineers are performing changes in


corresponding objects or windows recognization entries. There are six types of situations to
perform changes in GUI map entries.

You will change the entries of GUI Map in 6 ways.

1. Wild Card Character


2. Regular Expressions
3. Virtual Object Wizard
4. Mapped to standard class
5. GUI Map Configuration
6. Selective Recording

Wild Card Character: Sometimes window or object labels are variating with respect to
inputs in your application. To create data driven test on this type of windows and objects we
can perform changes in corresponding entries in GUI Map.
The Wild Card characters can be used to organize entries in WinRunner using !,*

Fax Order No. 6


{
class: window,
label: "Fax Order No.6",
MSW_class: "#32770"
}

Fax Order No. 6


{
class: window,
label: "!Fax Order No.*",
MSW_class: "#32770"
}

Page 86 of 132
Software Testing Material

Regular Expressions: Sometimes in your application build objects / windows labels are
variating depends on the events. We are changing in logical name. Winrunner at runtime will
catches entries it used in logical name with respect to runtime point.

Start
{
class: push_button,
label: "![S][t][ao][a-z]*"
}

for(i=1;i<=5;i++)
{
set_window ("Personal Web Manager", 3);
button_press ("Start");
printf(" Button Pressed is : "&i);
}

For Number change: Wild card Characters


If Toggle Characters: Regular Expression

GUI Map Configuration: Sometimes in your application more than one object consists of
same physical description with respect to WinRunner defaults (Class and Labels). To
recognize this object individually we can perform changes in GUI map configuration.

It is used when one object is not recognized by the tool, then in WinRunner it recognizes by
using this feature.

Navigation: Tools, GUI Map Configuration, Select Object Type, Click Configure, Select
distinguishable properties into obligatory and optional (In general Test engineers are
maintaining mswid as optional), click ok.

If class and label are same. We select mswid (Micro Soft Window ID)
If applicable properties and obligatory properties are same we use optional (mswid).

Command1
{
class: push_button,
label: Command1,
MSW_id: 1
}

Note: Here we can maintain MSWID as assistive. Because every two objects consists of
different mswids.

Mapped to Standard Class: Sometimes test engineers are not getting required properties of
an object. This is used when one object is recognized but the required properties are not
coming to that object. Then map this object to any of the standard matching object and get the
required properties.

Navigation: Tools, GUI Map Configuration, Select non testable Object, Click Ok, Click

Page 87 of 132
Software Testing Material

Configure, select mapped to class, click ok.

Virtual Object Wizard: To forcibly recognize, non recognized objects we can use this
option.

Navigation: Tools, Virtual Object Wizard, Click Next, Select expected type depends on
nature of the object, click next, mark that non recognized object area, right click to relieve,
click next, enter logical name to new entry, say yes/no to create more, click finish.

Selective Recording:
It is a new concept in WinRunner 7.0. In WinRunner if u have more than one
application on the desktop at the time of recording it may record about the unnecessary
application details also in the TSL if you didn’t specify exactly what application u need. For
this type of situations in WinRunner we are specifying it explicitly using this path.

Settings -> General Options -> Record Tab, click selective recording, select record only on
selected application(By default Off), Record on Start Menu and windows explorer, browse
required application path, click ok.

Note: Selective recording is a new concept in WinRunner7.0. This concept is not applicable
to analog mode, because WinRunner records operations with respect to desktop co-ordinates
in analog mode.

User Interface Testing: WinRunner is a functionality testing tool. But it provides a facility
to conduct user interface testing. In this user interface automation testing, WinRunner
depends on Micro Soft 6 Rules.

Micro Soft 6 Rules:


1. Controls are Init Cap
2. Ok/Cancel existence
3. System menu existence
4. Controls are visible
5. Controls are not overlapped
6. Controls are aligned
To apply the above six rules in your application build, WinRunner use below TSL functions.

Load_os_api(): WinRunner use this function to maintain path between, windows OS system
calls and application programming interface to apply that six rules.
Syntax: load_os_api()

Configure_chkui(): To specify, interest of tester to test required six rules in that six.
Syntax: configure_chkui(TRUE/FALSE, TRUE/FALSE, TRUE/FALSE, TRUE/FALSE,
TRUE/FALSE, TRUE/FALSE);

lbl_chk=TRUE; checks capital letter of labels on controls.


ok_can_chk=TRUE; checks existence of OK/Cancel buttons.
sys_chk=TRUE; checks existence of system menu.
text_chk=TRUE; checks if all text of controls is visible.
overlap_chk=FALSE; checks that controls do not overlap.
align_chk=FALSE; checks alignment of controls.

Page 88 of 132
Software Testing Material

Note: Orders of rules is mandatory.

Check_ui(): WinRunner use this function to apply configured rules on specified window.
Syntax: check_ui(“windowname”);
The above three functions are not built in functions. But developed by Mercury Interactive as
a system defined compiled module. Compiled module means that a permanent .exe of user
defined functions.
Navigation: Open application build on desktop, create menu, rapid test script wizard, click
next, show application main window, click next, select user interface test, click next, specify
sub menu symbols(>>, <<, …), click next, specify learning mode (Express /
Comprehensive), click learn, after learning, say yes/no to open your application during
winrunner launching, click next, remember paths of startup script and GUI map file, click
next, remember the path of UI testing, click ok, specify true for required rules, click run,
analyze the results manually.
Regression Testing: Receive modified build from development team, GUI regression,
Bitmap regression, Real regression to ensure bug fixing and resolving.

Development Team released Modified Build

GUI Regression Find Screen


level differences
between old and
Bit Map Regression new builds

Regression Test to ensure that modification

To find screen level changes: GUI Regression, Bitmap Regression.

From the above process, test engineer performs GUI regression and bitmap regression before
perform functionality level regression. To perform this preliminary level verification we can
use WinRunner concepts in RTSW(Rapid Test Script Wizard).

GUI Regression Testing: To find objects properties differences between old build and new
build, we can use this option in RTSW.

Old New

GUI Check Points


Navigation: Open old build on the desktop, create menu, Rapid Test Script Wizard, click
next, show application main window, click next, select use existing information, click next,
select GUI regression test, click next, remember the path of test script, click next, click ok,
close old build, open new build, click run, analyze results manually.

Page 89 of 132
Software Testing Material

Bitmap Regression Testing: To find image objects level differences between old build and
new build, we can use this option in RTSW.

Old New

Bitmap Check Points

Navigation: Open old build on the desktop, create menu, Rapid Test Script Wizard, click
next, show application main window, click next, select use existing information, click next,
select Bitmap regression test, click next, remember the path of test script, click next, click ok,
close old build, open new build, click run, analyze results manually.

Note: After receiving modified build testing team plans functionality regression after
completion of GUI regression and Bitmap regression. In this scenario, GUI regression is
mandatory and bitmap regression is optional.

Exception Handling: A non-modifiable runtime errors is called exception. To handle testing


exceptions WinRunner provides three types of handlers.
• TSL Exceptions
• Object Exceptions
• Pop-up Exceptions

TSL Exceptions: These exceptions are raised when specified TSL statement returns
specified error code.
To create TSL exceptions we can follow below navigations.
Tools, exception handling, select exception type as TSL, click new, enter exception name,
select expected TSL function, select expected return code, enter handler function name, click
ok, click paste, click ok after reading suggestion, click close, record required navigation to
recover expected situations as function body, make it as compiled module, write load
statement in startup script of WinRunner.

Public function nagaraju(in rc, in func) { printf(func &” returns “&rc); }


Object Exceptions: TSL exceptions depend on corresponding TSL statement and return
code. But all negative situations are not suitable to define depends on the TSL statement and
return code. Some of the negative situations defined by tester with respect to object
properties. Object exceptions raised when specified object property is equal to our expected.

Page 90 of 132
Software Testing Material

Build

--
--
--
Down --

Test Script
Enabled

--
-- Handler
--

To create this type of exceptions, we can follow below navigation.


Tools, Exception Handling, select exception type as object, click new, enter exception
name, select traceable object, select property with expected, enter handler function name,
click ok, click paste, click ok after reading suggestion, click close, record recoverable
navigation, make it as compiled module, write load statement in startup script of
WinRunner.
Public function nagaraju (in win, in obj, in attr, in val) { printf(“ Enabled “); }
Pop-up Exceptions: These exceptions raised when specified window comes to focus
during test execution we can use this type of exceptions to skip unwanted windows during
test execution.
Navigation: Tools, exception Handling, select exception type as Pop Up, click new, enter
exception name, show that unwanted window, specify handler action, click ok.

To administrate exceptions WinRunner provides the below TSL functions


Exception_off(“ Exception Name ”) ;
Exception_off_all() ;
Exception_on(“ Exception Name ”) ;

Note: When you create exception, by default exception is ON.


#checks capital letter of labels on controls.
ok_can_chk=TRUE; #checks existence of OK/Cancel buttons.
sys_chk=TRUE; #checks existence of system menu.
text_chk=TRUE; #checks if all text of controls is visible.
overlap_chk=FALSE; #checks that controls do not overlap.
align_chk=FALSE; #checks alignment of controls.
Rational Robot
 Developed by Rational
 Also known as SQA Robot
 Functionality testing tool like as WinRunner
 Supports c/s and web technologies
 Records our business operations in Rational Basic(RB). RB is like as VB

Page 91 of 132
Software Testing Material

Win Runner Rational Robot


1 Developed by Mercury Interactive Rational
2 Records operations in Test Script Language (TSL) Rational Basic(RB)
3 Recording language like C VB
4 Learning Auto Learning and Pre Implicit learning
Learning (Recognizes objects based
on Mswid)
5 Recording Context Sensitive and Analog Object Orientation and
Mode Low Level Recording
Record Menu, Turn to
other Mode.
Note: In Low Level
recording Robot records
the mouse pointer
movements along with
time
6 GUI CheckPoint For Single Property Insert, TestCase (Check
For Object / Window Point), object properties,
For Multiple Objects save check point, select
testable object, specify
expected for required
properties.
Robot allows one
checkpoint for one object.
7 Bit Map CheckPoint For Object/Window Insert, TestCase,
For Screen Area Window/Region Image.
8 Database CheckPoint Default, Custom, RunTime Not Applicable
Record Checkpoint
9 Text CheckPoint Get Text Insert, TestCase, Alpha
From Object/Window, Numeric ( Textbox /
From Screen Area Inbox), Clipboard ( Copy
content), Object
Data(List, Menu, Table,
Data Window and Active
X)

10 Window Existence Win_exists(“Window Insert, TestCase, Window


Name”, time); existence, save check
point, select testable
window, click ok.
11 File Comparison File_compare(“path of file1”, Insert, Testcase, File
“path of file2”,”path of comparison, save
file3”); checkpoint, Browse file1
& file2, click ok.
12 File Existence File_open(“path of file1”, Insert, Testcase, File
Mode); Existence, save
checkpoint, Browse
Testable file, click ok.
13 User Defined Pass/Fail tl_step(), printf() Insert, right click to ,
specify result type (Pass,

Page 92 of 132
Software Testing Material

Fail, Warning, None),


click ok.
14 Batch Testing Call “TestName”(); Insert, Call test
Call “Path of Test”(); procedure, select required
subtest, click ok.
Note: Robot doesn't allow
parameter passing.
15 Open Project Invoke_application(); for .exe Insert, start application,
files browse application path,
click ok.
16 Synchronization Wait(), change runtime Delayfor(), No Runtime
settings, For Object/Window settings but by default 10
property, For Object / seconds, insert, wait
Window Bitmap, for Screen status, testcase, object
area properties, For the last
two we have Positive /
Negative region.
17 Login No login window One login window
18 Saves Test names with Noname1, noname2 … Test1,test2, …

Silk Test 5.0


 Developed by Segue
 Also known as SQA Robot
 Functionality testing tool like as WinRunner, Rational Robot and QTP
 Supports c/s and web technologies
 Records our business operations in 4TL(Four Test Language) like as java
 Follows single thread of process (Learning, Recording, Check points, edit script are
not separate. All are done at a time)

Navigation: Start, Programs, Silk test, file menu, click new, click logo, click next, browse
manual test path, click next, select new test frame or existing test frame, click next, read
suggestions, click next, open window by window manually, click return to wizard, click next,
read suggestions for recording, click next, record out business operations, set mouse pointer
on required object and click ctrl + Alt to create check point(Property, method, Bitmap), click
ok, continue recording, insert checkpoints like as above, click done to stop recording, click
next, set application base state, click run test, click close, analyze results manually.

URL’s testing:
Enter Base URL, specify depth to walk, click press, analyze results manually ( Red color not
working, black color working)

QTP (Quick Test Professional)

Some of the professionals are also calling it as Quick Test Pro. The present version is QTP
6.5. WinRunner does not support the ERP and .Net.

Page 93 of 132
Software Testing Material

• Developed by Mercury Interactive.


• Derived from WinRunner.
• Supports Client/Server, Web Applications, ERP and Multimedia Technologies (Maya,
Flash … like dynamic images) for functionality testing.
• Records our business operations in VBScript.

Learning: Automation starts with learning. Like as WinRunner QTP supports auto learning
only. During recording QTP creates recognization entries for objects and windows.
[In WinRunner every entry is maintained in GUI Map Editor. Every entry consists of logical
name and physical description.
In WinRunner entries are maintained in two ways.
Global GUI Map Editor and Per Test Mode.

Global GUI Map Editor:


Advantage: Entries can be used in more than one test.
Drawback: It won’t provide auto save and open.

Per Test Mode:


Disadvantage: The entries can’t be used in more than one Test.
Advantage: It provides auto save and open.]

QTP maintains entries in object repository. [Repository is a Folder or Directory and it is


created by user and saved by system] This repository maintains auto save and open.

Path: Tools -> Object Repository.

You will change the entries of GUI Map in 6 ways.

7. Wild Card Character


8. Regular Expressions
9. Virtual Object Wizard
10. Mapped to standard class
11. GUI Map Configuration
12. Selective Recording

1. The Wild Card characters can be used to organize entries in QTP (like WinRunner
using !,* )

2. QTP also supports Regular Expressions like WinRunner

3. Virtual Object Wizard: It is used when one object is not recognized by the tool, then in
WinRunner it recognizes by using this feature. But in Winrunner to this task it takes more
time, where as in QTP also same process but with small navigations.

4. Mapped to Standard Class: This is used when one object is recognized but the required
properties are not coming to that object. Then map this object to any of the standard matching
object and get the required properties.

Page 94 of 132
Software Testing Material

5. GUI Map Configuration: Some times two objects may have same logical and physical
names also. To differentiate one object from other in WinRunner it internally uses the
MSWID. But in QTP we have to follow the given path,
Tools -> Object Identification -> select object Type -> Select distinguishable properties into
mandatory and assistive -> click Ok.

Note: Here we can maintain MSWID as assistive. Because every two objects consists of
different mswids.

6. Selective Recording:
In WinRunner if u have more than one application on the desktop at the time of
recording it may record about the unnecessary application details also in the TSL if u didn’t
specify exactly what application u need. For this type of situations in WinRunner v are
specifying it explicitly using this path.

Settings -> General Options ->


Where as in QTP if u click start recording it asks for whither u want to do Selective
recording or not. If u choose selective recording then it displays one window in which u have
to choose the application and working directories.
Like as WinRunner QTP also supports the static recording. This option appears when
u click recording.

Recording: QTP records our business operations in VBScript. By default this tool starts
recording in general mode. If you want to record mouse pointer movements, v can use

Test Menu -> Analog / Low level recording.

In Winrunner two modes are available Context sensitive or Analog mode.


In QTP three modes are available like General, Analog and Low level.

Check points: To conduct functionality testing on different technology applications, QTP


provides below check points.

1. Standard checkpoint:
To test the behavior and input domains of objects we can use this
checkpoint. This checkpoint allows one object at a time.

Select position in script -> insert menu -> checkpoint -> standard checkpoint ->select
testable object -> click ok after confirmation -> select required properties with expected
->click ok.

In QTP for one property u can give 2 values like constant expected or parameter
expected.

2. Bitmap Checkpoint:
QTP supports static and dynamic images to compare. The maximum
timeout for picture elements is 10 seconds.

3. Database Checkpoint:

Page 95 of 132
Software Testing Material

QTP provides backend testing facility through this checkpoint like as WinRunner default
check.

insert -> checkpoint ->Database Checkpoint -> specify sql statement -> click create to
select DSN -> write select statement -> click finish.

4. Text Checkpoint:
To capture object values into variables we can use this option. Vbscript supports variables
declaration.

5. TextArea checkpoint:
To capture static text from screens we can use this option.

Data Driven Testing:


Like as WinRunner, QTP also supports retesting with multiple testdata.
There are 3 possibilities such as Dynamic Testdata, from FrontEnd Grids and Excel Sheet.

1. Dynamic Testdata: In WinRunner we will be use Create_input_dialog(“Dialog


Message : ”) to do same dynamic testdata under data driven testing.
But in QTP we use inputbox(“Message”); function to read data from the user.

Var = inputbox(“Message”);

2. From FrontEnd Grids:


Depends on listbox, menu, activex, table and data window.
Test engineer conducts retesting. If u want to search any vbscript functions follow below
navigation.

insert -> step -> method -> select required object -> click ok after confirmation -> click
next -> enter arguments -> click next.

3. Excel Sheet:
Create testscript for one input -> insert testdata into excel sheet
columns -> tools menu -> data driver -> select position to use or replace excel sheet
columns -> click parameterize -> click next -> select required column name -> click
finish.

Batch Testing:
Like as WinRunner QTP also allows batch testing. To form batches QTP
supports WinRunner Tests also. Batch Testing can be done in 2 ways

1. QTP Test to QTP Test: Insert -> call to action -> browse subtest -> specify
parameter data using excel sheet columns -> click ok.

2. QTP Test to WinRunner Test: Insert -> call to WinRunner Test-> browse the path of
test -> click ok.

Note: QTP supports WinRunner 7.0 and higher versions only because QTP supports auto
learning and from WinRunner7.0 onwards auto learning is possible.

Page 96 of 132
Software Testing Material

Synchronization points: To define time mapping between QTP and project we can follow
below navigation.

insert -> step -> synchronization piont [this is exactly equal to for object/window property in
WinRunner]-> select indicator object -> click ok after confirmation -> specify expected
property with value -> specify maximum time wait -> click ok.

Recovery Scenario Manager: This concept is equal to exception handling in WinRunner.


Through this concept, QTP recover from executable scenarios with required handler.

Tools -> Recovery Scenario Manager -> Click New -> click next -> Select Trigger type
( pop up, object state, application crash, test run error ) -> define the situation with handler
-> click ok.

Extra Features in QTP:


• Faster than WinRunner to create a test.
• Supports .Net, SAP, People Soft, Oracle Applications, Multimedia and XML as extra
than WinRunner.
It records business operations in vbscript

Test Director 6.0

• Developed by Mercury Interactive


• Test Management Tool
• Working as Client / Server application

Project Administrator

Ms – Access,
SQL Server,
Test Director
Oracle

Project Administrator: This part is used by test lead, to create new database areas, to store
new projects testing documents and to estimate test status of an on going project.

Create Database: Start, programs, TD 6.0, Project Administrator, Login by Test Lead,
project menu, new project, specify location of database(Private, Common), click create, click
ok.

For one project data database, test director tool maintains tables and views.

Estimate Test Status: start, programs, TD 6.0, project Administrator, login by test lead,
select project name in list, select project name in list, click connect, click Extension symbol
in front of project name, select required table in list, extend query if required, click Run SQL,
analyze the results manually to estimate the test status.

Page 97 of 132
Software Testing Material

Test Director: This part is used by the test engineer to store corresponding test documents
into corresponding database, created by Test Lead.

Start, programs, TD 6.0, Test Director, Select project Name, Login by Test engineer

• Plan Tests
• Run Tests
• Track Defects.

Plan Tests: During test cases writing for responsible modules, test engineers use this part
to store their testcases into database for future references.

Create Subject: Plan Tests, click Folder New, Enter Responsible module name as Test
Script, click Ok.

Create Sub Subject: Plan test, select subject name, click folder new, enter sub subject
name, click ok.

Create TestCase: Plan Test, select subject name, select sub subject, click Test New,
select Test type, enter Test name, click ok.

Details: After completion of testcase creation, test engineer maintains below details for
that testcase.

TestCase ID, TestSuit ID, Priority, Test Environment, Test Duration, Test Effort, Test
Setup and Testcase Pass/Fail Criteria.

Design Steps: After typing required details for testcase we can prepare a step by step
procedure for that testcase to execute.

Design steps, click new, enter step description with expected, click new to create more
steps, click close.

Test Script: For automation test scripts, test director provides launch button to open
WinRunner.

Click launch, set application base state for that test, record required navigation, insert
required check points, click stop recording, click save.

Attachments: To maintain extra information for test cases, test engineer use this part. It
is optional.

Attachment, Click File/Web, Browse required file path to attach, click open.

• Run Tests: After receiving a stable build from the development team concentrate on
test execution. TD provides a facility to create automated TestLog during testcase
execution.

Create Batch: Run Tests, click testset builder, click new, enter suit ID, click ok, select
required tests and add into batch, click close.

Page 98 of 132
Software Testing Material

Execute Automated Test: Select automated test in batch, click automated, set application in
base state as per that test, click run, tools menu, test results, file menu, open, browse executed
test, analyze results manually, close winrunner, change test status to Passed / Failed depends
on results analysis.

Manual Test Execution: select manual testing batch, click manual, click start run, set
application in base state, run every step manually, specify status for every step, click close
after execution of last step.

• Track Defects.

During test execution, test engineer use this part to report defects to development team.

Track defects, click add, fill fields in the defect report, click create, click close, click mail,
enter To Mail ID, click ok.

Test Director Icons:

Filter: To select required tests or defects in existing list we can use filters concept.

Navigation: click filter icon, specify filter condition, click ok.

Sort: To arrange defects in a specified order in a list, we can use this sort icon.

Navigation: Click Sort Icon, select required filed, specify sort direction
(Ascending / Descending), click ok.

Columns: We can use Icon to select specific columns in display list.


Navigation: Click Columns Icon, select required columns into visible list, click ok.

Report: To create printouts we can use this icon to create hard copies for defects.

Navigation: Click Report Icon, Specify Report Type, info or table, specify printout type,
click ok, click print per every page.

Test Grid: List of testcases coming in single window, under all subjects and sub subjects.

This option provides list of all test cases under all subjects and sub subjects.
Quick Test Professional
 Developed by Mercury Interactive
 Also known as Quick Test Pro
 Functionality testing tool like as WinRunner
 Extension of WinRunner
 Supports c/s and web technologies
 Supports Client/Server, Web Applications, ERP and Multimedia Technologies (Maya,
Flash … like dynamic images) for functionality testing.
 Records our business operations in VBScript.

Page 99 of 132
Software Testing Material

 Supports launching of WinRunner to execute TSL scripts

Win Runner Quick Test Professional


1 Developed by Mercury Interactive Mercury Interactive
2 Records operations in Test Script Language (TSL) VBScript for expert view.
And hierarchical steps in
tree view
3 Recording language like C VB
4 Learning WinRunner supports Auto QTP supports only Auto
Learning and Pre Learning to learning to recognize the
recognize the objects and objects and windows in your
windows in your application application
5 Entry Maintenance Maintains that recognized Maintains that recognized
Location entries in GUI Map to edit entries in Object Repository
that entries we can follow to edit that entries we can
below navigation follow below navigation
Tools, GUI Map Editor Tools, Object Repository
6 Types of entry Global GUI Map file / Per Global entries with auto save
maintenance Test mode to maintain entries and auto open into object
longtime repository
7 Wild Card Character Uses the Wild Card Uses the Wild Card
characters (! ,. * )in characters (! ,. * )in
reorganization entries when reorganization entries when
that window labels are that window labels are
variating with respect to input variating with respect to
input
8 Regular Expressions Uses the regular expression Uses the regular expression
entries when object labels are entries when object labels
variating are variating
Tools , GUI Map Tools, object identification,
configuration, click select object type, specify
configure. mswid as assistive properties
, click ok
9 GUI Map Configuration Tools, GUI Map Tools, Object Identification,
configuration Select Object type, specify
When more than one object MSW ID as assistive
consists of same physical property, click ok
description (MSW ID as
optinal)
10 Mapped to standard class Tools, GUI Map Tools, Object Identification,
configuration, click add click add
When winrunner does not Select non testable object,
returns all testable properties specify environment, click
to objects (Mapped to ok.
standard class)
11 Virtual Object Wizard Tools, virtual object wizard
Tools, virtual objects, new
virtual object.
when any object is not When objects are not
recognized by WinRunner recognized by QTP

Page 100 of 132


Software Testing Material

12 Selective Recording Settings, general options, File Menu, New Test, Click
record tab, click selective Start Recording, selective
recording recording window appears
When we want to record our
business operations on
specific applications
13 Recording Recording allows two types Recording allows three types
of modes such as Context of modes such as general,
Sensitive and Analog Mode Analog Mode and low level
recording
Default mode is Context In Low level recording QTP
Sensitive and F2 is short key records mouse pointer
to change from one mode to movements on desktop
other along with time as extra
General mode is default and
allows below Shortcuts
Start Recording: F3
Low level Recording: Ctrl +
Shift + F3
Analog Recording: Ctrl +
Shift + F4
14 GUI CheckPoint For Single Property Select position in script ->
For Object / Window insert menu -> checkpoint ->
For Multiple Objects standard checkpoint ->select
testable object -> click ok
after confirmation -> select
required properties with
expected ->click ok.
Note: In WinRunaner check points allows constant values are expected. But QTP
checkpoints allows constant and parameter values as expected [Ex: Expected values in
Excel column]
X = create_input_dialog(“xx”);
Button_check_info(“OK”,”enabled”,x);
Note2: In QTP standard checkpoint allows one object at a time to test
15 Bit Map CheckPoint For Object/Window Insert, checkpoint, bitmap
For Screen Area check point, select testable
WinRunner supports static image [static of dynamic],
images only click ok after confirmation,
click select area if required,
click ok
Note1: QTP supports static and dynamic images to compare when you select multimedia
option in add-in manager
Note2: It supports dynamic images play up to 10 seconds as maximum.
16 Database CheckPoint Default, Custom, RunTime insert -> checkpoint
Record Checkpoint ->Database Checkpoint [like
win runner default
checkpoint -> specify sql
statement -> click create to
select DSN -> write select
statement -> click finish

Page 101 of 132


Software Testing Material

Note: QTP supports database testing w r t database content


17 Text CheckPoint Get Text: insert -> checkpoint ->Text
From Object/Window, Checkpoint & Text area
From Screen Area Check point
From Selection Web
Web test checkpoint only

18 Functions that will be Obj_get_text(“ Object Option explicit


generated in Text Name”,variable); Dim vnames ….
Checkpoint Window(“window name”).
Obj_get_text(“ Object name Winedit(“Object Name”).
”,variable,x1,y1,x2,y2); GetVname
Window(“window name”).
Web_obj_get_text(“object Winedit(“Object
name”, “#Row no “, Name”,x1,y1,x2,y2).
“#Column no”, variable, GetVname
“text before”, “text after”, Window(“frame name”).
time to create); Winedit(“Object Name”,
“text before”,”text after”).
Web_frame_get_text(“frame GetVname
name”, variable, “text
before”, “text after”, time to
create);

19 Data Driven Testing DDT/Retesting in 4 ways: DDT/Retesting in 3 ways:


Methods Dynamic test data submission Dynamic test data
Through flat file (notepad) submission
From front end grids (List From front end grids (List
box) box)
Through excel sheet Through excel sheet
20 Dynamic test data N = Create_input_dialog(“ Option explict
submission Message”); Dim vname
For(I=1;I<=n;I++) Vname = inputbox(“
{ Message”);
For I=1 to n step 1
}
next
21 Through flat file File_open(); Through flat files data
File_getline(); driven testing is not
File_compare(); applicable
File_printf();
File_close();
22 From front end grids List, menu, active x, label, List, menu, active x, label,
data window data window
23 Through excel sheet Tools, data driver wizard Create testscript for one
input -> insert testdata into
excel sheet columns -> tools
menu -> data driver -> select
position to use or replace
excel sheet columns -> click

Page 102 of 132


Software Testing Material

parameterize -> click next


-> select required column
name -> click finish.
24 Searching for required Create menu, Function If u want to search any
functions Generator vbscript functions follow
below navigation.

insert -> step -> method ->


select required object ->
click ok after confirmation
-> click next -> enter
arguments -> click next.
25 Batch Testing Call “TestName”(); To form batches QTP
Or supports WinRunner Tests
Call “Path of Test”(); also. Batch Testing can be
done in 2 ways

1. QTP Test to QTP Test:


Insert -> call to action ->
browse subtest -> specify
parameter data using excel
sheet columns -> click ok.

2. QTP Test to WinRunner


Test: Insert -> call to
WinRunner Test-> browse
the path of test -> click ok.

Note: QTP supports WinRunner 7.0 and higher versions only because QTP supports auto
learning and from WinRunner7.0 onwards auto learning is possible.
26 User Defined Functions User Defined Functions User Defined Actions
Repeatable navigations in Repeatable navigations in
application recorded as application recorded as
functions. To make it as actions to create one
permanent .ext we can use reusable action.
compiled module concept We can follow below
navigation
Insert, new action, enter
action name with
description, select reusable
action, click ok, record
repeatable navigation in
your application
Note: To call that reusable action in required test, we can use insert , call to action
27 Synchronization point Wait insert -> step ->
Change runtime settings synchronization piont [this is
For object/window exactly equal to for
For object/window bitmap object/window property in
For screen area WinRunner]-> select

Page 103 of 132


Software Testing Material

indicator object -> click ok


after confirmation -> specify
expected property with value
-> specify maximum time
wait -> click ok.
28 Exception Handling TSL Tools -> Recovery Scenario
Pop Up Manager -> Click New ->
Object click next -> Select Trigger
Web for web only type ( pop up, object state,
application crash, test run
error ) -> define the situation
with handler -> browse
reusable action for recovery-
> click finish.
29 Technology Supported Does not supports .Net, QTP supports .Net, XML,
XML, SAP, People Soft, SAP, People Soft, Oracle
Oracle applications and applications and multimedia
multimedia objects for testing objects for testing

Page 104 of 132


Software Testing Material

Quality Assurance Quality Control

They are mainly responsible for For detection of defects


prevention of defects Responsible for implementation of the
Identifying efficient life cycle models, life cycles, methodologies etc… for the
process, methodologies etc… according testing of the application.
to quality standards. Prepare the reports, documents,
Review the reports and documents that according to the standards or guidelines
are prepared by QC team or whole given by QA team
project team.
The major concern is on the process The major concern is product being
being implemented. developed
Are we following the right method for Product properly done or not.
developing or not.
Verification Validation

ISO (International Organization for Standardization)

ISO is given for all companies.


CMM is given for only software companies.
6-Sigma is for all companies.
If you implement the 20 clause (8 sections) then u will get ISO.

In the year 1947, non government organizations joined together and formed ISO. There are
145 countries are there in ISO. India is among them. ISO is the Greek word. This is derived
from the word ISOCESS. Actually ISOCESS means equal or total. It is equal for all in the
world India, USA …

ISO 9000 – Guidelines


9001, 9002, 9003, 9004 – Certifications
Whenever you want to get certifications first you have to follow certain guidelines.

9001 – For companies design, development, testing and inspection.


9002 – Except design remaining activities. (Companies called as Production)
9003 – Testing and inspection only.
9004 – Continuous improvement.

9001:2000 (Year or Version)


For every six years they are releasing the version. Latest version is 2000. And we can expect
the next version in 2007. Whither it is a hotel or software company they can get 9001. But by
verifying the scope we can confirm what type of company.
Now a days there is no 9002 and 9003. They are giving only 9000, 9001 and 9004.

Page 105 of 132


Software Testing Material

How to get Certification:


BVQI – Beaurea of Verta Quality International (USA based company, branch in Hyd)
ICL – International Certification Limited (USA based company, branch in Secunderabad)
STQC – Software Testing, Quality Testing

If u want to get Certification first approach any one of the above company they will say
implement 20 clause. Next they will come to audit and finally certifies.

If u don’t know how to implement 20 clause they are conducting training through company
as External Auditor 3 months course. They will conduct this. Internal Auditor for Rs 25,000
and they will conduct with in 4-5 days.

Difference between the External or Lead auditor and Internal auditor is the former can work
in two or three companies in a day. The later will works in only one company.

Format – The structure is studied. They visit all the departments and prepare this.
Check list – What are the requirements
Procedure – Work based on 20 Clause.
Procedure Manual – Prepare procedure and distribute to all departments and inform them to
implement it to get the Certification.

What ever the work you are doing you have to prepare the documents. Reasons are
1. Future reference
2. Employees may leave organization

Generally auditor should have 10+Exp and 5 cycles of implementation.

Procedure Manual

Procedure

Check List

Format

NCR – Non Conformance Report


Types of Certifications:
1. External Audit
2. SURVELLANCE Audit
3. Recertification

Page 106 of 132


Software Testing Material

1. External Audit: To renewals for every 3 years


2. SURVELLANCE Audit: Every 6 months they will come and checks. But they
informs before coming. They will issues one NCR if u didn’t follow and they once
again audits the same issue after 3 months. They gives 3 or 4 NCRs and finally
cancels the certification.
3. Recertification: If they cancels then go for recertification

SEI-CMM (Software Engineering Institute – Capability Maturity Model)

SEI-CMM levels:
This is given to software companies only
There are five levels are there in CMM like level 1,2,3,4,5
There are different CMMs are there like SEI-CMM also called as Software CMM, PCMM,
CMMI- CMM for Integration.

In the year 1987, MIKE PAULK and BILL CURTIS (They are working as faculty in
CARNEGIE MELLON University, Pits burgh, USA) formed together. They released CMM
version1.0 from the SEI. They have observed the ISO, in ISO software organizations are not
getting any special facilities. So they formed SEI and released CMM.
In CMM auditors are called as Assessors. Anybody can become as Assessors but you have to
attend training classes in Chennai or Mumbai. KPMG etc... Institutes are conducting this
course.

There are two types of companies


Disciplined / Matured Company
Indiscipline / immature Company

1. Initial 2. Repeatable 3. Defined 4. Managed 5. Optimized

Adhoc Project Software Quality Hitech


Management Change Management Change
Management

Adhoc Discipline Change Predictable Hitech

There are five levels of CMM, each level has got number of processes. For example level2
has the process as project management. Each process is called as KPA.
If an organization implements all the KPA’s then based on them it is given a level.

Infosys was assessed at level4 in Dec 1997 and at level5 in Dec 1999.

Page 107 of 132


Software Testing Material

PCMM: People CMM. It also got 5 levels. This is mainly deals with the HR principles. For
selecting and recruiting they are having one structure. That will be given by this.

CMMI: CMM for Integration. They use SEI CMM, Systems engineering principles and IPD-
CMM (Integrated Product Development).

Small company can get up to ISO and CMM Level-3, PCMM Level-3 and CMMI.

CMMI is the latest technology and most of the companies are trying to get this.

6 σ (Six Sigma)
This is given to all companies.
This is derived from Greek letter ‘σ’ which means Standard Deviation.
6 σ is a metric which gives various standard deviations
The greater the number before ‘σ’ the less will be the defect in the process variation, more
will be quality and customer satisfaction.
ISO, CMM and 6 σ all are for customer satisfaction.
If it is 5 σ the error may be 265 in 1 million LOC.
If it is 6 σ the error may be 3 in 1 million LOC.
PPMQ – Parts for Proper Million.
DMAIC – Define, Measure, Analyze, Improve and Control.
Generally any company first does this DMAIC and next goes for 6 σ.
DFSS – Design for Six Sigma. This is for software organizations.

In 6 σ you will be given Champion, Major Black Belt, Black Belt, Green Belt, White Belt,
Orange Belt.

Champion – Owner of the company.


Black belt holder will train the Green belt holder.

6 σ companies – Satyam, Motorola, Wipro, TCS etc… But the first company in Hyderabad
which got this one is GE.

CMM Levels:

What is CMM:
It defines how software organizations mature or improve in their ability to develop software.

This model was developed SEI of Carnegie Mellon University in late 80s.
Infosys was addressed at level 4 in Dec 1997 and at level 5 in Dec 1999.

Why CMM:
CMM is a software specific model. CMM describes how software organizations can take the
path of continuing improvement, which is so required in this highly competitive world. Keep
improving is CMM Mantra.
Level1: initial or Ad-hoc. There are no KPAs in this level.
Level2: Repeatable. There are 6 KPAs in this level. KPAs at this level look at project
planning and execution.

Page 108 of 132


Software Testing Material

Level3: Defined. There are 7 KPAs in this level. Organizational process is the focus area
here.
Level 4: Managed. There are 2 KPAs in this level. Understanding of data
Level 5: Optimizing. There are 3 KPAs in this level. The focus here is continual
improvement.

As we move from level 1 to 5, the project risk decreases and quality and productivity
increases.

(KPA can be compared to Clause in ISO standards).

Level1: Initial or Ad-hoc. There are no KPAs in this level.


Level 1 is immature state. The software process is characterized as adhoc, and occasionally
even chaotic. Few processes are defined, and success depends on individual effort. Here there
is no objective basis for judging product quality or for solving product or process problems.
Therefore product quality is difficult to predict. Activities intended to enhance quality such as
reviews and testing are often curtailed or eliminated when projects fall behind schedule.

Highlights of this level:

 The processes with in this level are highly unstable and unpredictable.

 The projects are purely person dependent. Ie, when the persons involved leave the
project or the company, things come to a halt. Also the performance depends on the
capabilities of the individuals rather than the organizational capability.

As we move from level1 to level5, the project risk decreases and quality and productivity
increases.

Level2: Repeatable. There are 6 KPAs in this level. KPAs at this level look at project
planning and execution.

Repeatable, as the word reveals, means that processes employed in the project are repeatable.
Basic project management principles are established to track cost, schedule, and
functionality. The necessary process discipline is in place to repeat earlier success on projects
with similar applications using best practices from past projects.

Projects in these organizations have installed basic software management controls.

Highlights of this level:


 Realistic project commitments are based on the results observed on previous projects
and on the requirements of the current project.
 The project managers for a project track software costs, schedules, and functionality.
The problems meeting commitments are identified when they arise.
 The projects process is under the effective control of a project management system,
following realistic plans based on the performance of previous projects.

Requirements Management:
To establish a common understanding between the customer and the project team

Page 109 of 132


Software Testing Material

It involves establishing and maintaining an agreement with the customer on the requirements
for the software project.
Goal: software plans, products, and activities are kept consistent with the system
requirements allocated to software.

Software Project Planning:


This involves establishing reasonable plans for performing the software engineering and for
managing the software project. Software project planning involves developing estimates for
the work to be performed, establishing the necessary commitments, and defining the plan to
perform the work.

Goal: software estimates are documented for use in planning and tracking the software
project.

Software Project Tracking:


To provide the adequate visibility into actual progress so that management can take effective
actions when the software project’s performance deviates significantly from the software
plans.
Software project tracking and oversight involves tracking and reviewing the software
accomplishments and results against documented estimates, commitments, and plans and
adjusting these plans based on the actual accomplishments and results.

Goal: Actual results and performances are tracked against the software plans.
A documented (Project Plan) is used for tracking.

Software Subcontract Management:


The purpose of software subcontract management is to select qualified software
subcontractors and manage them effectively.

Software Quality Assurance


The purpose of the Software Quality Assurance is to provide management with appropriate
visibility into the process being used by the software project and of the products being built.
Software Quality Assurance involves reviewing and auditing the software products and
activities to verify that they comply with the applicable procedures and standards and
providing the software project and other appropriate managers with the results of these
reviews and audits.

Goal: Software Quality Assurance activities are planned.

Software Configuration Management:


The purpose of the Software Configuration Management is to establish and maintain the
integrity of the products of the software project throughout the project’s software life cycle.

A software baseline library is established containing the software baselines as they are
developed. Changes to baselines and the release of software products built from the software
baseline library are systematically controlled via the change control and configuration
auditing functions of Software Configuration Management.

Page 110 of 132


Software Testing Material

Goal: Software Configuration Management activities are planned. Selected work products are
identified and controlled. Changes to work products are controlled.

Level2 is concentrated on project level processes, Level3 looks from the organizational view
point.

Level3: Defined.
The software process for both management and engineering activities is documented,
standardized, and integrated into a standard SW process for the organization (E.g. Software
Configuration Management process). All projects use approved and tailored versions of the
organizations standard software process for developing and maintaining software. Data and
information from projects is regularly and systematically collected and organized so that the
same can be reused by other projects.

There are 7 KPAs in this level. Organizational process is the focus area here.

Organizational Process Focus:


The purpose of the Organizational Process Focus is to establish the organizational
responsibility for software process activities that improve the organizations overall software
process capability.

The important goal of this KPA is software process development and improvement activities
are coordinated across the organization.

To do an effective job of identifying and using the best practices, organizations must
establish a group with that responsibility and build a plan for how the organization will
improve its process. Such as a plan should include periodic assessments of the organizations
process maturity, leading to plans for improvement in capability. This process engineering is
done by SEPG, which looks out for the interest of every project in the organization.

Organizational Process Maturity:


The purpose of this KPA is to provide a usable set of software processes assets that improve
process performance across projects. This involves developing and maintaining the
organization’s standard software process, along with related process assets.
Some goals of the KPA are to have a standard software process for the organization.
Information related to the use of process by projects is collected and reviewed. Descriptions
of software life cycles that are approved for use by the projects are documented and
maintained. The organizations software process database is established and maintained.

Training Program:
The purpose of this KPA is to develop the skills and knowledge if individuals so they can
perform their roles effectively and efficiently.

Training Program involves first identifying the training needed by the organization, projects,
and individuals, then developing or procuring training to address the identified needs. Each
software project evaluates its current and future skills needs and determines how these skills
will be obtained. Some skills are effectively and efficiently imparted through informal
methods, where as other skills need more formal training methods to be effectively and
efficiently imparted.

Page 111 of 132


Software Testing Material

Integrated Software Management:


The purpose of Integrated Software Management is to integrate the software engineering and
management activities into a coherent, defined software process that is tailored from the
organizations standard software process.

Software Product Engineering:


The purpose of the Software Product Engineering is to consistently perform a well defined
engineering process that integrates all the software engineering activities to produce correct,
consistent software products effectively and efficiently. Software Product Engineering
involves performing the engineering tasks to build and maintain the software using the
projects defined products and appropriate methods and tools.

Level4: Managed. There are 2 KPAs in this level. Understanding of data


Level5: Optimizing. There are 3 KPAs in this level. The focus here is continual
improvement.

Software Testing 10 Rules

1. Test early and test often.

2. Integrate the application development and testing life cycles. You'll get better results
and you won't have to mediate between two armed camps in your IT shop.

3. Formalize a testing methodology; you'll test everything the same way and you'll get
uniform results.

4. Develop a comprehensive test plan; it forms the basis for the testing methodology.

5. Use both static and dynamic testing.

6. Define your expected results.

7. Understand the business reason behind the application. You'll write a better application
and better testing scripts.

8. Use multiple levels and types of testing (regression, systems, integration, stress and
load).

9. Review and inspect the work, it will lower costs.

10. Don't let your programmers check their own work; they'll miss their own errors.

Page 112 of 132


Software Testing Material

Configuration Management
What is configuration management?

Our systems are made up of a number of items (or things). Configuration Management is all
about effective and efficient management and control of these items.

During the lifetime of the system many of the items will change. They will change for a
number of reasons; new features, fault fixes, environment changes, etc. We might also have
different items for different customers, such as version A contains modules 1,2,3,4 & 5 and
version B contains modules 1,2,3,6 & 7. We may need different modules depending on the
environments they run under (such as Windows NT and Windows 2000).

An indication of a good Configuration Management system is to ask ourselves whether we


can go back two releases of our software and perform some specific tests with relative ease.
Problems resulting from poor configuration management

Often organisations do not appreciate the need for good configuration management until they
experience one or more of the problems that can occur without it. Some problems that
commonly occur as a result of poor configuration management systems include:
 the inability to reproduce a fault reported by a customer;
 two programmers have the same module out for update and one overwrites the other’s
change;
 unable to match object code with source code;
 do not know which fixes belong to which versions of the software;
 faults that have been fixed reappear in a later release;
 a fault fix to an old version needs testing urgently, but tests have been updated.
Definition of configuration management

A good definition of configuration management is given in the ANSI/IEEE Standard 729-


1983, Software Engineering Terminology. This says that configuration management is:
 “the process of identifying and defining Configuration Items in a system,
 controlling the release and change of these items throughout the system life cycle,
 recording and reporting the status of configuration items and change requests, and
 verifying the completeness and correctness of configuration items.”

This definition neatly breaks down configuration management into four key areas:
 configuration identification;
 configuration control;
 configuration status accounting; and
 configuration audit.

Configuration identification is the process of identifying and defining Configuration Items


in a system. Configuration Items are those items that have their own version number such that
when an item is changed, a new version is created with a different version number. So
configuration identification is about identifying what are to be the configuration items in a
system, how these will be structured (where they will be stored in relation to each other) the

Page 113 of 132


Software Testing Material

version numbering system, selection criteria, naming conventions, and baselines. A baseline
is a set of different configuration items (one version of each) that has a version number itself.
Thus, if program X comprises modules A and B, we could define a baseline for version 1.1 of
program X that comprises version 1.1 of module A and version 1.1 of module B. If module B
changes, a new version (say 1.2) of module B is created. We may then have a new version of
program X, say baseline 2.0 that comprises version 1.1 of module A and version 1.2 of
module B.

Configuration control is about the provision and management of a controlled library


containing all the configuration items. This will govern how new and updated configuration
items can be submitted into and copied out of the library. Configuration control is also
determines how fault reporting and change control is handled (since fault fixes usually
involve new versions of configuration items being created).

Status accounting enables traceability and impact analysis. A database holds all the
information relating to the current and past states of all configuration items. For example, this
would be able to tell us which configuration items are being updated, who has them and for
what purpose.

Configuration auditing is the process of ensuring that all configuration management


procedures have been followed and of verifying the current state of any and all configuration
items is as it is supposed to be. We should be able to ensure that a delivered system is a
complete system (i.e. all necessary configuration items have been included and extraneous
items have not been included).
Configuration management in testing

Just about everything used in testing can reasonably be place under the control of a
configuration management system. That is not to say that everything should. For example,
actual test results may not be though in some industries (e.g. pharmaceutical) it can be a legal
requirement to do so.

VERIFICATION AND VALIDATION (V&V)

Verification: Are we developing the right product?

Validation: Are we developing the product right?

Verification and Validation is the difference between 'What and How'

Two types of V&V

1. Static V&V
2. Dynamic V & V

Static V&V:

1. Technical Review
2. Inspection

Page 114 of 132


Software Testing Material

3. Code Walk through.

We are doing V&V in documents which is in papers. Static Verification corresponds to


verification and validation of products, when it is static. This includes all quality Reviews,
composition of the product. eg. Its structure, size and shape etc. That's why it is called as
Static V & V

Dynamic V&V:

In Dynamic V&V we are conducting Testing the application in real time with executables.
That's why it is called as Dynamic V&V.

SOFTWARE TESTING

Definition 1: Software Testing is the process of executing a program with the intent of
finding bugs.

Definition 2: Testing is a process of exercising or evaluating a system component, by


manual or automated means to verify that it satisfies a specified requirement.

The basic goal of the software development process is to produce a software that has
no errors. In an effort to detect errors, each phase ends with V & V activity such as Technical
review. But most of the V & V (review) is based on human evaluation and can't detect all
errors.

As testing is the last phase in the SDLC (Software Development Life Cycle) before
the final software is delivered, it has the enormous responsibility of detecting any type of
errors

Two basic approaches for software testing:

1.White Box Testing or Structural testing or Glass Box testing.


2. Black Box Testing or Functional testing

Combination of white box and block box testing is called as 'Gray box testing'

White Box Testing:


White Box Testing is done by the developers,
Developers have to do

1.Path testing
2.Condition testing
3.Data flow testing
4.Loop testing.

Software Engineers can derive test cases that

Page 115 of 132


Software Testing Material

1.Guarantee that all 'independent paths' within a module have been exercised at least once.
2.Exercise all logical decisions on their true and false sides.
3.Execute all loops at their boundaries within their operational bounds.
4.Exercise internal data structures to assure their validity.

We must go for 'white box testing' when

Typographical errors are random,


Logical errors and incorrect assumptions are inversely proportional to the probability
that a program path will be executed.

Usually all the organizations go for Block Box Testing. Because in block box testing, we
are checking the functionality of the application. In block box testing, structure of the
program is not considered.

For better customer satisfaction, we have to do white box testing first, then conduct block
box testing.

We will discuss about the BLACK BOX Testing, in a detailed manner.

BLACK BOX TESTING


Block box testing focuses on functional requirements of a software, taking no
consideration of detailed processing logic.

In Black box testing, testers attempt to find errors in the following categories:

1.Incorrect or missing functions.


2.Interface errors
3.Errors in data structures
4.Performance errors
5.Initialization and termination errors.

Levels of Black Box Testing


Faults occur during any phases in SDLC. Verification is performed on the output of
each phase. But some faults are likely to remain undetected by these methods. Theses faults
reflect in the code.

Testing is usually relied on to detect these faults, in addition to the faults introduced in coding
phase.

Due to this, different levels of testing are used in the testing process.

Clients Needs <---------> Acceptance Testing


| |
Requirement <---------> System Testing

Page 116 of 132


Software Testing Material

| |
Architecture &
Design <---------> Integration Testing
| |
Coding <---------> Unit Testing

From the service providers point of view the following are to be done.

1.Unit Testing
2.Integration Testing
3.System Testing

UNIT TESTING

In Unit Testing, Different modules are tested, against the specifications produced during
design for the modules.

Unit testing is essentially for verification of the code produced during the coding phase
and hence the goal is to test the internal logic of the modules.

Module interface is tested to ensure that information properly flows into and out of the
program unit under test.

Unit testing is the lowest level of testing


Individual unit of the software are tested in isolation from other parts of a program.

In UNIT TESTING, we have to do the following checks

1.Field level checks.


2.Field level validation
3.User Interface check
4.Functionality check.

Field Level Checks:

In Field Level checks, we have to do 7 types of checks.

Here we are checking a particular field in a screen or module, to check whether the field
accepts

1.Null characters
2.Unique characters
3.Length
4.Number
5.Date
6.Negative values

Page 117 of 132


Software Testing Material

7.Default values.

For Example, consider a Course Registration form that contains the following fields.

COURSE REGISTRATION FORM SCREEN

Option (Add/Modify/Delete) Drop down combo box. Funlty chk.

Type of Course Drop down combo box. Funlty chk.

Registration Number Number Field

Student Name Text Field

Address Text Field

Phone Number Number Field

Date Date Field

Time (Part/Full time) Drop down combo box. Funlty chk.

Timing (7-9am/9-11am/7pm-9pm) Drop down combo box. Funlty chk.

Student ID Automatic generation. Funlty chk.

Batch Code Automatic generation Funlty chk.

Push Button Save Button Funlty chk.

Push button Exit Button Funlty chk.

** Funlty chk. – Functionality Check.

Based on the above screen, we have to prepare a internal test plan. Based on the internal
test plan, we can prepare test cases.

Internal Test Plan

FC --> Functionality Check and we have to test the functionality of the screen.

Page 118 of 132


Software Testing Material

Y --> Have to write Test cases.


N --> Not necessary to write test case

Field Remarks Null Unique Length Number Date -ve Default


Name (type of
check)
Option FC
Type ofFC
course
Student Y N Y Y N N N
name
Address Y N Y N N N N
Phone N N Y Y N Y N
number
Date Y N Y Y Y Y N
Time FC
Timing FC
Student FC
ID
Batch FC
code
Save FC
button
Exit FC
button

We have to write test cases only for the 'Y' option, Not necessary to write test cases for the 'N'
Option,
The above internal test plan is mainly to reduce the number of test cases.

For Ex. Student name is text field type.


For that, we have unit test cases as indicated below

Unit Test case for Student name field

Sl no. Test case Expected result Actual result


UTC/001 Enter blank space and proceed Should display error
message and set focus
(Null check)
back to student name
field. ( because it
should not accept
blank)
UTC/002 Skip the field and proceed Should display error
message and set focus
(Null check)
back to student name
field.( because it
should not accept the

Page 119 of 132


Software Testing Material

null or blank space)


UTC/003 Enter name of 20 characters.Should accept and
proceed.( we assume
(length check)
20 as the maximum
limit of the student
name field)
UTC/004 Enter name of 21 characters Should display error
message. (Because
(length check)
the maximum limit is
20 characters)
UTC/005 Enter numbers '12345' in theShould display error
name field.( number check) message and set focus
back to the field.
( because text field
should not accept
numbers)

For 'Student name' field we have to write test case. The above test case is written based on
the internal test plan. Test cases is written only for 'Y'. i.e. For applicable one.
Not necessary to write Test case for 'N'. --This is to reduce the number of test cases.

Field Level Validation:


Here we have to check
1. Date range check
2. Boundary value check.

In date range check, we have to check whether the application is accepting greater than the
system date or not.

Date Range check - If we are having a Date field in a screen we have to write test case
to check the date field as

Date field.
Sl no. Test description Test case Expected result
UTC/001 Enter blank space or skip theShould display error
field ( null check) message and set focus
back to the date field.
( because. Date field
should not accept
blank space)
UTC/002 Enter date in DD/MM/YYYYShould accept and
format. (date check) proceed
UTC/003 Enter date in mm/dd/yyyyShould display error
format (date check) message. And set
focus back to the
field( because. It is of
DD/MM/YYYY

Page 120 of 132


Software Testing Material

format.
UTC/004 Enter number '1234567'Should display error
(number check) message. Because it
should not accept just
numbers.
UTC/005 Enter '-23232324' and proceedShould display error
( -ve check) message. Because it
should not accept -ve
numbers.
UTC/006 Enter date greater than the Should display error
system date. message.( because it
should not accept
more than system
date)

In boundary value check we have to check a particular field with stand in the boundaries

For e.g. If a number field has a range of 0 to 99 we have to check whether the field is
accepting -1, 0, 1 i.e. < , =, > to the lower boundary and 98, 99, 100 -- have to check with
< , = , > values of upper boundary.

User interface Check

In User Interface check, we have to check

1.Short cut keys


2.Help check
3.Tab movement check
4.Arrow key check
5.Message box check.
6.Readability of controls
7.Tool tip validations
8.Consistency with the user interface across the product.

For User Interface Check we have to write test case as

Sl no. Test case Expected result Actual result


UTC/001 Tab related checks To move across all the field
in the screen with a sequence.
UTC/002 Press the arrow keys Should move across the fields
in a sequence.
UTC/003 Press the short cutShould open the
keys (Alt + K) corresponding screen
UTC/004 Tool tip check To display the tool tip based
on the selection.
UTC/005 Screen title check Should visible to the user.
UTC/006 Dialog box contentShould be clear to the user.
check

Page 121 of 132


Software Testing Material

UTC/007 Scroll bar checks Should scroll softy.

In User Interface Check we have to check, how the application is User Friendly.

Functionality checks

Here we have to check


1.Screen functionality
2.Functionality of buttons, computation, automatic generated results
3.Field dependencies.
4.Functionality of buttons.

In functionality check, we have to check, whether we are able to

ADD or MODIFY or DELETE or VIEW and SAVE and EXIT and other main functions
in a screen.

Here we are checking whether


Combo box drop down menu is coming or not
While clicking 'save' button after entering details, checking whether it is saving or
not.
While clicking 'Exit' Button should close the current window...
Automatic result generation like, for e.g. When entering date of birth, system
should automatically generate age, based on the system date.

So we have to do this type of functional checks.

Let us see a sample test case for functionality checks.

Sl no. Test case Expected result Actual result


UTC/001 Select 'ADD' option of theShould open a new
combo box. Registration form, to
enter the new student
details
UTC/002 Select 'Delete' option of the Should delete the
combo box current student
details.
UTC/003 Select 'View' option of theShould display the
combo box. selected student
details.
UTC/004 Select 'Modify' option of theShould allow the user
combo box. to do the
modification.
UTC/005 Click 'Save' and proceed Should save the
entered details and
update in the date
UTC/006 Click 'Exit' and proceed Should close the
screen.

Page 122 of 132


Software Testing Material

INTEGRATION TESTING

Many Unit Tested Modules are combined into subsystems, which are then tested.
The goal is to see if the modules can be integrated properly. This testing activity can be
considered testing the design.

Integration Testing refers to the testing in which the software units of an application are
combined and tested for evaluating the interaction bet them.

In Integration Testing we have to check the integration between the module

Mainly we have to check

Data Dependency between the modules


Data Transfer between the modules.

Types of Approaches for Integration Testing

1.Big Bang approach


2.Top Down approach
3.Bottom Up approach.

BIG BANG APPROACH:

A Type of Integration Testing, in which software components of an application are


combined all at once into a overall system, and tested.
 According to this approach, every module is first unit tested in isolation from every
module.
After each module is tested, all of the modules are integrated together at once.

Big bang approach is called as " Non Incremental Approach"

Here all modules are combined and integrated in advance. The entire program is tested as
a whole. If Set of bugs encountered correction is difficult. If one error is corrected new bug
appears and the process continues.

Disadvantages: Tracing down of defect is not easy.

TOP DOWN APPROACH

Program is merged and tested from top to bottom.


Modules are integrated by moving downward through the control hierarchy, beginning
with the main control module.
Module, sub ordinate to the Main Control modules are incorporated into structure in
either depth first or breadth first method.

Page 123 of 132


Software Testing Material

 Here we have to create a 'Stub' - this is a dummy routine that simulates a behavior of a
subordinate.
If a particular module is not completed or not started, we can simulate this module,
just by developing a stub.

Advantage:
It is done in a an environment that closely resembles that of reality, so the tested
product is more reliable.
Stubs are functionally simpler than drivers and therefore, stub can be written with
less time and labor.

Disadvantage:
Unit testing of lower modules can be complicated by the complexity of upper
modules.

BOTTOM UP APPROACH

Begins construction & testing with atomic modules (i.e. Modules of lowest levels in
the program structure)
Program is merged and tested from bottom to top.
The terminal module is tested in isolation first, and then the next set of the higher level
modules are tested with the previously tested lower level modules.
Here we have to write ' Drivers'
 Driver is nothing more than a program, that accept the test case data, passes such data
to the module (to be tested) and prints the relevant results.

Advantage: Unit testing of each module can be done very thoroughly.

Disadvantage: Test Drivers have to be generated for modules at all levels, except for top
controlling module.

SYSTEM TESTING

Here Testing conducted on a complete, integrated system to evaluate the system's


compliance with its specified requirements.
Compete software build is made and tested to show, that all requirements are met.

TYPES OF SYSTEM TESTING

VOLUME TESTING: To find the weakness in the system with respect to its handling of
large amount of data, during short time period. ( focus is amount of data)

STRESS TESTING: The purpose of stress testing is, to test the system capacity, whether it
is handling large number of processing transactions during peak periods. (moment)

Page 124 of 132


Software Testing Material

CONCURRENCY TESTING: It is similar to Stress Testing, here we are checking the


system capacity to handle large number of processing transactions in an INSTANT.

PERFORMANCE TESTING: System performance can be accomplished in parallel with


volume and stress testing, because system performance is assessed under all conditions.

System performance is generally assessed in terms of response time and throughput rates,
under different processing and configuration condition.

REGRESSION TESTING: Is the re-execution of same subsets of test cases that have
already executed, to ensure that changes(after defect fix) have not propagated unintended side
effects.
Regression Testing is the activity that helps to ensure that changes do not introduce
unintended behavior or additional bugs.

SECURITY TESTING: Attempts to verify that protection mechanisms built into a system
will infact protect it from improper penetration.
System is protected in accordance with importance to organization, with respect to
security levels.

RECOVERY TESTING: Forcing the system to fail in different ways and checking how fast
it recovers from fail.

COMPATIBILITY TESTING: Checking whether the system is functionally consistent


across all platforms.

SERVER TESTING:
Here we have to check Volume, Stress, Performance, data recovery testing, backup and
restore testing, error trapping data security, as a whole.
Here we have to check the PAIN ( e business concept)
PAIN: P-Privacy
A- Authentication of parties
I- Integrity of transactions
N - Non repudiation.

WEB TESTING: In web testing we have to do compatibility testing, browser compatibility,


video testing (pixel- testing on font and alignment) modem speed, web security testing and
directory set up. This is a real time and highly tedious to web testing. Automated tool is a
must to do web testing.

ACCEPTANCE TESTING: Performed with realistic data of the client to demonstrate that
the software is working satisfactorily. Testing here focuses on the external behavior of the
system.

ALPHA TESTING: Alpha testing is conducted at the developers place, by the customer.
The software is tested in a natural setting with the developer 'looking over the shoulder'
of the user(i.e. customer) and recording errors and usage problems.
Alpha test are conducted in a controlled environment.

Page 125 of 132


Software Testing Material

BETA TESTING: Beta Testing is conducted at one or more customer sites by the end user
of the software. Here the developer is not present during testing.
Here the client tests the software or system in his place and recording defects and
sending his comments to development team.

So the above is the detailed description about the System Testing.

TEST PLAN:

A test plan is a general document for the entire project that defines the scope, approach
to be taken, and the schedules of intended testing activities. It identifies test items, the
features to be tested, the testing tasks, who will do each task and any risks requiring
contingency planning.

The test planning can be done, well before the actual testing commences and can be
done in parallel with the coding and design phase.
The inputs for forming test plan are
1.Project plan
2.Requirement specification document
3.Architecture and design document.

Requirements document and Design document are the basic documents used for selecting
the test units and deciding the approaches to be used during testing.

Test plan should contain


Test unit specifications
Features to be used
Approaches for testing
Test deliverables
Schedule
Personnel allocation

Test Unit: Test unit is a set of one or more modules together with associated data, that are
from a single computer program and that are the object of testing. Test unit may be a module
or few modules or a complete system.

Features to be tested: Include all software features and combinations of features that
should be tested. A software feature is a software characteristics specified or implied by the
requirements or design document.

Approach for Testing: specifies the overall approach to be followed in the current project.
The technique that will be used to judge the testing effort should also be specified.

Test Deliverables: Should be specified in the test plan before the actual testing begins
Deliverables could be
Test cases that were used
Detailed results of testing
Test summary report

Page 126 of 132


Software Testing Material

In general
Test case specification report
Test summary report and
Test Log report. Should be specified as deliverables.

Test summary Report: It defines the items tested, environment in which testing was done,
and any variations from the specification observed during testing.

Test Log Report: Provides chronological record of relevant details about the executions of
the test cases.

Schedule: Specifies the amount of time and effort to be spent on different activities of testing
and testing of different units that have been identified.

Personnel Allocation: Identifies the persons responsible for performing the different
activities.

Test Case Execution and Analysis:

Steps to be performed to execute the test cases are specified in a separate document
called the 'test procedure specification'. This document specifies special req. that exist for
setting the test environment and describes the methods and formats for reporting the result of
testing.

Output of the test case execution is: Test log report, Test summary report, and bug report.

Test log: Describes the details of testing

Test summary report: Gives total number of test cases executed, the number and nature of
bugs found, and summary of any metrics data.

Bug Report: Give the summary of all errors found.

DEFECT CATEGORIES

Defect Categories: Defects are mainly classified into two categories

Defect Category-I: Here in Defect Category - I is again classified in to

1. Defects from specifications: Products built varies from the product specified.
2. Defect in capturing user requirement: Variance is something that user wanted,
that is not in the built product. But was also not specified in the product.

Defect Category - II: Here defects are in 3 categories

1. Wrong: i.e. incorrect implementation


2. Missing: i.e. User requirement is not built into the product.

Page 127 of 132


Software Testing Material

3. Extra: Unwanted requirement built into the product.

Techniques to Reduce the Test Cases

Writing test cases to all possible checks is irrelevant. So we can reduce the number of
test cases by avoid some unwanted checks.

To reduce the number of test cases, there are three methods to be followed.

1. Equivalence Class Partitioning (ECP)


2. Boundary Value Analysis (BVA)
3. Cause Effect Graphing (CEG)

Equivalence Class Partitioning (ECP):

ECP is a black box testing method that divides the input domain of program into classes of
data, from which test cases can be derived. It uncovers classes of errors, there by reducing
the total number of test cases that must be developed.

Group of tests forms equivalence class if,


* They all tests the something
* If one test finds a defect, the others will
* If one test does not find a defect, the others will not.

Tests are grouped into one equivalence class when

 They affect the same output variables


 They result in similar operations in the program
 They involve the same input variables

Process of finding equivalence classes is


* Identify all inputs
* Identify all outputs
* Identify equivalence classes for each input and output
* Ensure that test cases test each input and output equivalence class at least once.

Guidelines for finding equivalence class


* Look for range numbers
* Look for membership in a group
* Look for equivalent output events
* Look for equivalent operating environment.

Boundary Value Analysis (BVA):

BVA is a test case design technique that complements equivalence 'partitioning'.


BVA leads to selection of test cases that exercises bounding values.

Rather than selecting any elements of equivalence, BVA leads to the selection of test case
at the 'edges' of the class.

Page 128 of 132


Software Testing Material

Guidelines for BVA:


1. If input condition is a range bounded by values 'a' and 'b'. Test case should be
designed with values 'a' and 'b', just above and just below a & b.

2. If input condition specifies a number of values, test case should be developed that
exercises the minimum and maximum numbers. Values just above and just below the
maximum and minimum should be tested.

Apply the above guidelines for output conditions also.

SOME IMPORTANT TESTING HINTS

Testing is the phase where the errors remaining from all the previous phases (i.e. SDLC)
must be detected. Hence testing performs a very critical role for quality assurance and for
ensuring the reliability of software.

Success of testing in revealing errors depends critically on test cases.

What is the Difference between Error, Fault Failure and Bug ?

Error: It refers to the discrepancy between computed or measured value and theoretically
correct value. i.e. Difference between actual output and correct output of the software.

Fault: Fault is the basic reason for software malfunction. i.e. Fault is a condition that
causes a system to fail in performing its required function.

Failure: Is the inability of the system or component to perform a required function


according to its specifications. A Software failure occurs if the behavior of the software is
different from the specified behavior.

Bug: Non Functionality to a functionality

Presence of an error implies that a failure must have occurred, and the observance of a
failure implies that a fault must be present in the system.

During the testing process only failures are observed by which presence of fault is
deduced. The actual faults are identified by separate activities commonly referred to us
'debugging'.

In other words, for identifying faults after testing has revealed the presence of faults the
expensive task of debugging has to be performed. This is the reason 'why testing is
expensive.'

Reason for Testing System separately( Unit, Integration and System Testing):

Reason for testing parts separately is that if a test case detects an error in a large
program, it will be extremely difficult to pin point the source of error.

It is difficult to construct test cases so that all the modules will be executed. This may
increase the change of module's error undetected.

Page 129 of 132


Software Testing Material

What is the need for independent testing/ third party testing:?

 Sometimes error occurs because the programmer did not understand the
specification clearly. Testing of a program by its programmer will not detect
such errors, but independent testing may succeed in finding them.
 Time concern
 If the customer want the third party testing
 Non-availability of testing resources
 It is not easy for some one to test their own program with proper frame of
mind for testing

What is the Testing Principles?


1. All the test cases should be traceable to the customer requirements.
2. Testing should be planned long before testing begins.
3. Testing should begin 'in the small' and process towards testing 'in the large'
4. To be most effective, testing should be conducted by an independent third party

What is the life time of a bug?


Once you find the defect, time spent to fix the defect is called life time of the bug.

Attributes of a Good test:


1. A good test has a high probability of finding an error
2. A good test is not redundant.
3. A good test should be 'best of breed'
4. A good test should be neither too simple nor too complex.

Why Software has bugs?


Due to
1. Software complexity
2. Programming errors
3. Changing in requirement
4. Poorly documented code
5. Miscommunication between the inter group
6. Software development tools or OS may introduce their own bugs.

When to Stop Testing?


We can Stop Testing when

 Full execution of all test cases with internal acceptance and customer acceptance
 When Beta or Alpha Testing period ends
 Bug rate falls below certain level
 Test budget depleted
 Test cases completed with certain % passed

What is Error Seeding?

Page 130 of 132


Software Testing Material

Once the software is 100% bug free. Just to check the efficiency of Tester, we have to
'insert certain number of bugs' in project in various points and give it to tester to test.

Efficient tester will find the 'inserted bugs'.


Error seeding is just to check the efficiency of the tester.

We have to check the efficiency of the tester once the software is 100% bug free.

DEFECT CLASSIFICATION

As per ANSI/IEEE standard 729 the following are the five level of defect classification are

1. Critical: The defect results in the failure of the complete software system, of a subsystem,
or of a software unit (program or module) with the system.

2. Major: The defect results in the failure of the complete software system of a subsystem, or
of a software unit (program or module) within the system. There is no way to make the failed
components, however, there are acceptable processing alternatives which will yield the
desired result.

3. Average: The defect does not result in a failure, but causes the system to produce
incorrect, incomplete, or inconsistent results, or the defect impairs the systems usability.

4. Minor: The defect does not cause failure, does not impair usability, and the desired
processing results are easily obtained by working around the defect.

5. Cosmetic: The defect is the result of non-conformance to a standard, is related to the


aesthetics of the system, or is a request for an enhancement. Defects at this level may be
deferred or even ignored.

In addition to the defect severity level defined above, defect priority level can be used
with severity categories to determine the immediacy of repair. A five repair priority scale has
also be used in common testing practice.

The levels are:

Resolve Immediately: Further development and /or testing cannot occur until the defect
has been repaired. The system cannot be used until the repair has been effected

Give High Attention: The defect must be resolved as soon as possible because it is
impairing development / and or testing activities. System use will be severely affected
until the defect is fixed.
Normal Queue: The defect should be resolved in the normal course of development
activities. It can wait unit a new build or version is created.

Low Priority: The defect is an irritant that should be repaired but which can be repaired
after more serious defect have been fixed

Defer: The defect repair can be put of indefinitely. It can be resolved in a future major
system revision or not resolved at all.

Page 131 of 132


Software Testing Material

Total effort spent in testing


Cost of a defect == ------------------------
Tot. no. of defect.

No. of Test cases


Testing efficiency == -----------------
No. of defects.

Defect Closure rate == how much time takes to close the defect

No. of defect
Defect Density == --------------
KLOC/FP
KLOC- Kilo Lines Of Code
FP - Functional Point analysis.

Software Testing Related Web Sites:

www.softwareqatest.com
www.rstcorp.com
www.mmsindia.com
www.facilita.co.uk
www.autotestco.com
www.kaner.com
www.badsoftware.com
www.model-based-testing.com
www.soft.com
www.jrothman.com
www.webservepro.com
www.testworks.com
www.ftech.com
www.geocities.com
www.aptest.com
www.testing.com
www.stqemagazine.com
www.sqe.com
www.io.com
www.testingstuff.com
www.stickyminds.com

Page 132 of 132

You might also like