
SOFTWARE QUALITY Fundamentals

DEFINITION
Software quality is the degree of conformance to explicit or implicit
requirements and expectations.
Explanation:

Explicit: clearly defined and documented

Implicit: not clearly defined and documented but indirectly suggested

Requirements: business/product/software requirements

Expectations: mainly end-user expectations

Note: Some people tend to accept quality as compliance with only explicit requirements and not implicit requirements. We tend to think of such people as lazy.
Definition by IEEE
The degree to which a system, component, or process meets specified requirements.
The degree to which a system, component, or process meets customer or user needs or expectations.

Definition by ISTQB
quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
software quality: The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs.

As with any definition, the definition of software quality is also varied and
debatable. Some even say that quality cannot be defined and some say
that it can be defined but only in a particular context. Some even state
confidently that quality is lack of bugs. Whatever the definition, it is true
that quality is something we all aspire to.
Software quality has many dimensions.
In order to ensure software quality, we undertake Software Quality
Assurance and Software Quality Control.

SOFTWARE QUALITY DIMENSIONS


Software Quality has many dimensions and below are some of them:
Accessibility: The degree to which software can be used comfortably by a wide
variety of people, including those who require assistive technologies like screen
magnifiers or voice recognition.
Compatibility: The suitability of software for use in different environments like
different Operating Systems, Browsers, etc.
Concurrency: The ability of software to service multiple requests to the same
resources at the same time.
Efficiency: The ability of software to perform well or achieve a result without wasted
energy, resources, effort, time or money.
Functionality: The ability of software to carry out the functions as specified or
desired.
Installability: The ability of software to be installed in a specified environment.
Localizability: The ability of software to be used in different languages, time zones
etc.
Maintainability: The ease with which software can be modified (adding features,
enhancing features, fixing bugs, etc)
Performance: The speed at which software performs under a particular load.
Portability: The ability of software to be transferred easily from one location to
another.
Reliability: The ability of software to perform a required function under stated
conditions for stated period of time without any errors.
Scalability: The measure of the software's ability to increase or decrease in performance
in response to changes in its processing demands.
Security: The extent of protection of software against unauthorized access, invasion
of privacy, theft, loss of data, etc.
Testability: The ability of software to be easily tested.

Usability: The degree of the software's ease of use.


When someone says "This software is of a very high quality", you might want to ask "In which dimension of quality?"
SOFTWARE QUALITY ASSURANCE (SQA) Fundamentals
Software Quality Assurance (SQA) is a set of activities for ensuring quality in software engineering processes (that ultimately result in quality in software products).
It includes the following activities:
Process definition and implementation
Auditing
Training
Processes could be:
Software Development Methodology
Project Management
Configuration Management
Requirements Development/Management
Estimation
Software Design
Testing
etc
Once the processes have been defined and implemented, Quality Assurance has the
following responsibilities:
identify weaknesses in the processes
correct those weaknesses to continually improve the process
The quality management system under which the software system is created is normally
based on one or more of the following models/standards:

CMMI
Six Sigma
ISO 9000
Note: There are many other models/standards for quality management but the ones
mentioned above are the most popular.
Software Quality Assurance encompasses the entire software development life cycle and
the goal is to ensure that the development and/or maintenance processes are continuously
improved to produce products that meet specifications/requirements.
The process of Software Quality Control (SQC) is also governed by Software Quality
Assurance (SQA).
SQA is generally shortened to just QA.
SOFTWARE QUALITY CONTROL Fundamentals
Software Quality Control (SQC) is a set of activities for ensuring quality in software
products.
It includes the following activities:
Reviews
o Requirement Review
o Design Review
o Code Review
o Deployment Plan Review
o Test Plan Review
o Test Cases Review
Testing
o Unit Testing
o Integration Testing
o System Testing

o Acceptance Testing
Software Quality Control is limited to the Review/Testing phases of the Software
Development Life Cycle and the goal is to ensure that the products meet
specifications/requirements.
The process of Software Quality Control (SQC) is governed by Software Quality
Assurance (SQA). While SQA is oriented towards prevention, SQC is oriented towards
detection. Read Differences between Software Quality Assurance and Software Quality
Control.
Some people assume that QC means just Testing and fail to consider Reviews; this should
be discouraged.
Differences between Software Quality Assurance (SQA) and Software Quality Control
(SQC):
Many people still use the terms Quality Assurance (QA) and Quality Control (QC) interchangeably, but this should be discouraged.
Definition:
SQA is a set of activities for ensuring quality in software engineering processes (that ultimately result in quality in software products). The activities establish and evaluate the processes that produce products.
SQC is a set of activities for ensuring quality in software products. The activities focus on identifying defects in the actual products produced.
Focus:
SQA is process focused; SQC is product focused.
Orientation:
SQA is prevention oriented; SQC is detection oriented.
Breadth:
SQA is organization wide; SQC is product/project specific.
Scope:
SQA relates to all products that will ever be created by a process; SQC relates to a specific product.
Activities:
SQA: Process Definition and Implementation, Audits, Training.
SQC: Reviews, Testing.

SOFTWARE DEVELOPMENT LIFE CYCLE [SDLC]


Software Development Life Cycle, or Software Development Process, defines the steps/stages/phases in the building of software.
There are various kinds of software development models like:
Waterfall model
Spiral model
Iterative and incremental development (like Unified Process and Rational Unified
Process)
Agile development (like Extreme Programming and Scrum)
Models are evolving with time and the development life cycle can vary significantly from one model to another. It is beyond the scope of this particular article to discuss each model. However, each model comprises all or some of the following phases/activities/tasks.
SDLC IN SUMMARY
Project Planning
Requirements Development
Estimation
Scheduling
Design
Coding
Test Build/Deployment
Unit Testing
Integration Testing

User Documentation
System Testing
Acceptance Testing
Production Build/Deployment
Release
Maintenance
SDLC IN DETAIL
Project Planning
o Prepare
o Review
o Rework
o Baseline
o Revise [if necessary] >> Review >> Rework >> Baseline
Requirements Development [Business Requirements and Software/Product
Requirements]
o Develop
o Review
o Rework
o Baseline
o Revise [if necessary] >> Review >> Rework >> Baseline
Estimation [Size / Effort / Cost]
o <same as the activities/tasks mentioned for Project Planning>
Scheduling
o <same as the activities/tasks mentioned for Project Planning>

Design [High Level Design and Detail Design]
o <same as the activities/tasks mentioned for Requirements Development>
Coding
o Code
o Review
o Rework
o Commit
o Recode [if necessary] >> Review >> Rework >> Commit
Test Builds Preparation/Deployment
o Build/Deployment Plan
Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
o Build/Deploy
Unit Testing
o Test Plan
Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
o Test Cases/Scripts

Prepare
Review
Rework
Baseline
Execute
Revise [if necessary] >> Review >> Rework >> Baseline >> Execute
Integration Testing
o <same as the activities/tasks mentioned for unit testing>
User Documentation
o Prepare
o Review
o Rework
o Baseline
o Revise [if necessary] >> Review >> Rework >> Baseline
System Testing
o <same as the activities/tasks mentioned for Unit Testing>
Acceptance Testing [Internal Acceptance Test and External Acceptance Test]
o <same as the activities/tasks mentioned for Unit Testing>
Production Build/Deployment
o <same as the activities/tasks mentioned for Test Build/Deployment>
Release
o Prepare
o Review
o Rework

o Release
Maintenance
o Recode [Enhance software / Fix bugs]
o Retest
o Redeploy
o Rerelease
Notes:
The life cycle mentioned here is NOT set in stone and each phase does not
necessarily have to be implemented in the order mentioned.
Though SDLC uses the term "Development", it does not focus just on the coding
tasks done by developers but incorporates the tasks of all stakeholders, including
testers.
There may still be many other activities/tasks which have not been specifically mentioned above, like Configuration Management. No matter what, it is essential that you clearly understand the software development life cycle your project is following. One issue that is widespread in many projects is that software testers are involved much later in the life cycle, which means they lack visibility and authority (and this ultimately compromises software quality).
DEFINITION OF TEST
Being in the software industry, we encounter the word TEST many times. Though we have our own specific meaning for the word TEST, we have collected here some definitions of the word as provided by various dictionaries, along with other tidbits. The word TEST can be a noun, a verb, or an adjective, but the definitions here are only of the noun form.
DEFINITION
Google Dictionary:
A Test is a deliberate action or experiment to find out how well something works.
Cambridge Advanced Learner's Dictionary:
A Test is an act of using something to find out whether it is working correctly or how effective
it is.

The Free Dictionary by Farlex:


A Test is a procedure for critical evaluation; a means of determining the presence, quality, or
truth of something.
Merriam-Webster:
A Test is a critical examination, observation, or evaluation.
Dictionary.com:
A Test is the means by which the presence, quality, or genuineness of anything is
determined.
WordWeb:
A Test is trying something to find out about it.
Longman Dictionary of Contemporary English:
A Test is a process used to discover whether equipment or a product works correctly, or to
discover more about it.
ETYMOLOGY
Online Etymology Dictionary:
TEST: late 14c., "small vessel used in assaying precious metals", from O.Fr. test, from L. testum "earthen pot", related to testa "piece of burned clay, earthen pot, shell" (cf. L. testudo "tortoise") and textere "to weave" (cf. Lith. tistas "vessel made of willow twigs"; see texture). The sense of "trial or examination to determine the correctness of something" is recorded from 1594. The verb in this sense is from 1748. The connecting notion is ascertaining the quality of a metal by melting it in a pot.
SYNONYMS

If the word TEST has been nauseating you because it is so overused, try the following synonyms:
Analysis
Assessment
Attempt
Check
Confirmation
Evaluation

Examination
Experiment
Inquiry
Inspection
Investigation
Scrutiny
Trial
Verification
METHODOLOGIES of Software Testing
Below are some methods/techniques of software testing:
Black Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. These tests can be functional or non-functional, though usually functional. Test design techniques include Equivalence Partitioning, Boundary Value Analysis, and Cause-Effect Graphing.
White Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester. Test design techniques include Control Flow Testing, Data Flow Testing, Branch Testing, and Path Testing.
Gray Box Testing is a software testing method which is a combination of the Black Box Testing method and the White Box Testing method.
Agile Testing is a method of software testing that follows the principles of agile software development.
Ad Hoc Testing is a method of software testing without any planning or documentation.
These methods can be used at various software testing levels and types.

BLACK BOX TESTING Fundamentals


DEFINITION
Black Box Testing, also known as Behavioral Testing, is a software testing method in
which the internal structure/ design/ implementation of the item being tested is not known to
the tester. These tests can be functional or non-functional, though usually functional.

This method is named so because the software program, in the eyes of the tester, is like a
black box; inside which one cannot see. This method attempts to find errors in the following
categories:
Incorrect or missing functions
Interface errors
Errors in data structures or external database access
Behavior or performance errors
Initialization and termination errors
Definition by ISTQB
black box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system.
black box test design technique: Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
EXAMPLE

A tester, without knowledge of the internal structures of a website, tests the web pages by
using a browser; providing inputs (clicks, keystrokes) and verifying the outputs against the
expected outcome.
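To make this concrete, below is a minimal sketch of such a black box test automated with Selenium WebDriver in Python. The URL and the element IDs (username, password, login, welcome) are hypothetical placeholders, not from any real application; only the inputs and the visible outputs are used, never the page's internal implementation.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # hypothetical page under test
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login").click()
    # Verify the visible output against the expected outcome; nothing
    # about the internal structure of the page is assumed or inspected.
    message = driver.find_element(By.ID, "welcome").text
    assert "Welcome" in message, "Login did not lead to the welcome page"
finally:
    driver.quit()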
LEVELS APPLICABLE TO
Black Box Testing method is applicable to the following levels of software testing:
Integration Testing
System Testing
Acceptance Testing
The higher the level, and hence the bigger and more complex the box, the more the black box testing method comes into use.
BLACK BOX TESTING TECHNIQUES
Following are some techniques that can be used for designing black box tests.
Equivalence Partitioning: A software test design technique that involves dividing input values into valid and invalid partitions and selecting representative values from each partition as test data.
Boundary Value Analysis: A software test design technique that involves the determination of boundaries for input values and selecting values that are at the boundaries and just inside/outside of the boundaries as test data.
Cause-Effect Graphing: A software test design technique that involves identifying the causes (input conditions) and effects (output conditions), producing a Cause-Effect Graph, and generating test cases accordingly.
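As an illustration of the first two techniques, here is a small sketch, assuming a hypothetical input field that accepts integers from 1 to 100 (the range and the is_accepted stand-in are assumptions, not part of any real system):

# Stand-in for the system under test; assumes a field accepting 1..100.
def is_accepted(value):
    return 1 <= value <= 100

# Equivalence partitioning: one representative value per partition.
partitions = {
    "valid": 50,          # inside the 1..100 partition
    "invalid_low": -5,    # below the valid partition
    "invalid_high": 150,  # above the valid partition
}

# Boundary value analysis: values at and just inside/outside the boundaries.
boundaries = [0, 1, 2, 99, 100, 101]

for name, value in partitions.items():
    print(name, value, "accepted" if is_accepted(value) else "rejected")
for value in boundaries:
    print("boundary", value, "accepted" if is_accepted(value) else "rejected")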
BLACK BOX TESTING ADVANTAGES
Tests are done from a user's point of view and will help in exposing discrepancies in the specifications.
Tester need not know programming languages or how the software has been
implemented.
Tests can be conducted by a body independent from the developers, allowing for an
objective perspective and the avoidance of developer-bias.

Test cases can be designed as soon as the specifications are complete.


BLACK BOX TESTING DISADVANTAGES
Only a small number of possible inputs can be tested and many program paths will
be left untested.
Without clear specifications, which is the situation in many projects, test cases will be
difficult to design.
Tests can be redundant if the software designer/ developer has already run a test
case.
Ever wondered why a soothsayer closes the eyes when foretelling events? Such is almost the case in Black Box Testing.
Black Box Testing is contrasted with White Box Testing. Read Differences between Black
Box Testing and White Box Testing.
WHITE BOX TESTING Fundamentals
DEFINITION
White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box
Testing, Transparent Box Testing, Code-Based Testing or Structural Testing) is a software
testing method in which the internal structure/ design/ implementation of the item being
tested is known to the tester. The tester chooses inputs to exercise paths through the code
and determines the appropriate outputs. Programming know-how and implementation
knowledge are essential. White box testing is testing beyond the user interface and into the
nitty-gritty of a system.
This method is named so because the software program, in the eyes of the tester, is like a
white/ transparent box; inside which one clearly sees.
Definition by ISTQB
white-box testing: Testing based on an analysis of the internal structure of the component or system.
white-box test design technique: Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.

EXAMPLE
A tester, usually a developer as well, studies the implementation code of a certain field on a
webpage, determines all legal (valid and invalid) AND illegal inputs and verifies the outputs
against the expected outcomes, which are also determined by studying the implementation
code.
White Box Testing is like the work of a mechanic who examines the engine to see why the
car is not moving.
LEVELS APPLICABLE TO
White Box Testing method is applicable to the following levels of software testing:
Unit Testing: For testing paths within a unit.
Integration Testing: For testing paths between units.
System Testing: For testing paths between subsystems.
However, it is mainly applied to Unit Testing.
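As a sketch of what this looks like at the unit level, consider the hypothetical discount() function below; the tester reads its implementation and designs one test per branch plus a boundary check (the function and its threshold are invented for illustration):

import unittest

def discount(amount):
    if amount >= 100:       # branch 1: discount applies
        return amount * 0.9
    return amount           # branch 2: no discount

class DiscountBranchTests(unittest.TestCase):
    def test_discount_applied(self):             # exercises branch 1
        self.assertEqual(discount(200), 180)
    def test_no_discount_below_threshold(self):  # exercises branch 2
        self.assertEqual(discount(50), 50)
    def test_boundary_value(self):               # path at the boundary
        self.assertEqual(discount(100), 90)

if __name__ == "__main__":
    unittest.main()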
WHITE BOX TESTING ADVANTAGES
Testing can be commenced at an earlier stage. One need not wait for the GUI to be
available.
Testing is more thorough, with the possibility of covering most paths.
WHITE BOX TESTING DISADVANTAGES
Since tests can be very complex, highly skilled resources are required, with thorough
knowledge of programming and implementation.
Test script maintenance can be a burden if the implementation changes too
frequently.
Since this method of testing is closely tied to the application being tested, tools to cater to every kind of implementation/platform may not be readily available.
White Box Testing is contrasted with Black Box Testing. Read the Differences between
Black Box Testing and White Box Testing.
GRAY BOX TESTING Fundamentals
DEFINITION

Gray Box Testing is a software testing method which is a combination of the Black Box Testing method and the White Box Testing method. In Black Box Testing, the internal structure of the item being tested is unknown to the tester, and in White Box Testing the internal structure is known. In Gray Box Testing, the internal structure is partially known. This involves having access to internal data structures and algorithms for the purpose of designing the test cases, but testing at the user, or black-box, level.
Gray Box Testing is named so because the software program, in the eyes of the tester is
like a gray/ semi-transparent box; inside which one can partially see.
EXAMPLE
An example of Gray Box Testing would be when the codes for two units/ modules are
studied (White Box Testing method) for designing test cases and actual tests are conducted
using the exposed interfaces (Black Box Testing method).
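A minimal sketch of the same idea in code, using an invented Cache class: the test drives only the public interface, but knowledge of the internal _store dictionary (White Box knowledge) is used when designing the final assertion:

class Cache:
    def __init__(self):
        self._store = {}            # internal structure known to the tester
    def put(self, key, value):
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)

cache = Cache()
cache.put("user:1", "Alice")            # exercise the exposed interface
assert cache.get("user:1") == "Alice"   # Black Box style check of the output
assert "user:1" in cache._store         # assertion informed by internal knowledge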
LEVELS APPLICABLE TO
Though the Gray Box Testing method may be used at other levels of testing, it is primarily useful in Integration Testing.
SPELLING
Note that Gray is also spelt as Grey. Hence Grey Box Testing and Gray Box Testing mean
the same.
AGILE TESTING Fundamentals
This article on Agile Testing assumes that you already understand Agile software
development methodology (Scrum, Extreme Programming, or other flavors of Agile). Also, it
discusses the idea at a high level and does not give you the specifics.
VERY SHORT DEFINITION
Agile Testing is a method of software testing that follows the principles of agile software
development.

MANIFESTO FOR AGILE SOFTWARE TESTING


This is adapted from agilemanifesto.org and it might look a little silly to copy everything from there and just replace the term "development" with "testing", but here it is for your refreshment. You need to realize, however, that the term "development" means coding, testing, and all the other activities that are necessary for building valuable software.
We are uncovering better ways of testing
software by doing it and helping others do it.
Through this work we have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on
the right, we value the items on the left more.
AGILE TESTING VALUES EXPLAINED
Individuals and interactions over processes and tools: This means that flexible
people and communication are valued over rigid processes and tools. However, this

does not mean that agile testing ignores processes and tools. In fact, agile testing is
built upon very simple, strong and reasonable processes like the process of
conducting the daily meeting or preparing the daily build. Similarly, agile testing
attempts to leverage tools, especially for test automation, as much as possible.
Nevertheless, it needs to be clearly understood that it is the testers who drive those
tools and the output of the tools depends on the testers (not the other way round).
Working software over comprehensive documentation: This means that
functional and usable software is valued over comprehensive but unusable
documentation. Though this is more directed to upfront requirement specifications
and design specifications, this can be true for test plans and test cases as well. Our
primary goal is the act of testing itself and not any elaborate documentation merely
pointing toward that goal. However, it is always best to have necessary
documentation in place so that the picture is clear and the picture remains with the
team if/ when a member leaves.
Customer collaboration over contract negotiation: This means that the client is
engaged frequently and closely in touch with the progress of the project (not through
complicated progress reports but through working pieces of software). This does put
some extra burden on the customer who has to collaborate with the team at regular
intervals (instead of just waiting till the end of the contract, hoping that deliveries will
be made as promised). But this frequent engagement ensures that the project is
heading toward the right direction and not toward the building of a frog when a fish is
expected.
Responding to change over following a plan: This means accepting changes as
being natural and responding to them without being afraid of them. It is always nice
to have a plan beforehand but it is not very nice to stick to a plan, at whatever the
cost, even when situations have changed. Let's say you write a test case, which is
your plan, assuming a certain requirement. Now, if the requirement changes, you do
not lament over the wastage of your time and effort. Instead, you promptly adjust
your test case to validate the changed requirement. And, of course, only a FOOL
would try to run the same old test case on the new software and mark the test as
FAIL.
PRINCIPLES BEHIND AGILE MANIFESTO
Behind the Agile Manifesto are the following principles, which some agile practitioners unfortunately fail to understand or implement. We urge you to go through each principle and digest it thoroughly if you intend to embrace Agile Testing. Each principle below is shown in its original form, followed by a version re-written specifically for software testers.
1. Original: Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
For testers: Our highest priority is to satisfy the customer through early and continuous delivery of high-quality software.

2. Original: Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
For testers: Welcome changing requirements, even late in testing. Agile processes harness change for the customer's competitive advantage.

3. Original: Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
For testers: Deliver high-quality software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

4. Original: Business people and developers must work together daily throughout the project.
For testers: Business people, developers, and testers must work together daily throughout the project.

5. Original: Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
For testers: Build test projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

6. Original: The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
For testers: The most efficient and effective method of conveying information to and within a test team is face-to-face conversation.

7. Original: Working software is the primary measure of progress.
For testers: Working high-quality software is the primary measure of progress.

8. Original: Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
For testers: Agile processes promote sustainable development and testing. The sponsors, developers, testers, and users should be able to maintain a constant pace indefinitely.

9. Original: Continuous attention to technical excellence and good design enhances agility.
For testers: Continuous attention to technical excellence and good test design enhances agility.

10. Original: Simplicity (the art of maximizing the amount of work not done) is essential.
For testers: Simplicity (the art of maximizing the amount of work not done) is essential.

11. Original: The best architectures, requirements, and designs emerge from self-organizing teams.
For testers: The best architectures, requirements, and designs emerge from self-organizing teams.

12. Original: At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
For testers: At regular intervals, the test team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
THE RECURRING QUESTION
So what happens to all the traditional software testing methods, types and artifacts? Do we
throw them away?
THE ANSWER
Naaah! You will still need all those software testing methods, types and artifacts (but at
varying degrees of priority and necessity). You will, however, need to completely throw away
that traditional attitude and embrace the agile attitude.
TEST CASE Fundamentals
DEFINITION
A test case is a set of conditions or variables under which a tester will determine whether a
system under test satisfies requirements or works correctly.
The process of developing test cases can also help find problems in the requirements or
design of an application.
TEST CASE TEMPLATE
A test case can have the following elements. Note, however, that normally a test
management tool is used by companies and the format is determined by the tool used.
Test Suite ID: The ID of the test suite to which this test case belongs.
Test Case ID: The ID of the test case.
Test Case Summary: The summary / objective of the test case.
Related Requirement: The ID of the requirement this test case relates/traces to.
Prerequisites: Any prerequisites or preconditions that must be fulfilled prior to executing the test.
Test Procedure: Step-by-step procedure to execute the test.
Test Data: The test data, or links to the test data, that are to be used while conducting the test.
Expected Result: The expected result of the test.
Actual Result: The actual result of the test; to be filled in after executing the test.
Status: Pass or Fail. Other statuses can be "Not Executed" (if testing is not performed) and "Blocked" (if testing is blocked).
Remarks: Any comments on the test case or test execution.
Created By: The name of the author of the test case.
Date of Creation: The date of creation of the test case.
Executed By: The name of the person who executed the test.
Date of Execution: The date of execution of the test.
Test Environment: The environment (Hardware/Software/Network) in which the test was executed.
TEST CASE EXAMPLE / TEST CASE SAMPLE


Test Suite ID: TS001
Test Case ID: TC001
Test Case Summary: To verify that clicking the Generate Coin button generates coins.
Related Requirement: RS001
Prerequisites:
1. User is authorized.
2. Coin balance is available.
Test Procedure:
1. Select the coin denomination in the Denomination field.
2. Enter the number of coins in the Quantity field.
3. Click Generate Coin.
Test Data:
1. Denominations: 0.05, 0.10, 0.25, 0.50, 1, 2, 5
2. Quantities: 0, 1, 5, 10, 20
Expected Result:
1. A coin of the specified denomination should be produced if the specified quantity is valid (1, 5).
2. A message "Please enter a valid quantity between 1 and 10" should be displayed if the specified quantity is invalid.
Actual Result:
1. If the specified quantity is valid, the result is as expected.
2. If the specified quantity is invalid, nothing happens; the expected message is not displayed.
Status: Fail
Remarks: This is a sample test case.
Created By: John Doe
Date of Creation: 01/14/2020
Executed By: Jane Roe
Date of Execution: 02/16/2020
Test Environment:
OS: Windows Y
Browser: Chrome N
WRITING GOOD TEST CASES
As far as possible, write test cases in such a way that you test only one thing at a
time. Do not overlap or complicate test cases. Attempt to make your test cases
atomic (see the sketch after this list).
Ensure that all positive scenarios and negative scenarios are covered.
Language:
o Write in simple and easy to understand language.
o Use active voice: Do this, do that.
o Use exact and consistent names (of forms, fields, etc).
Characteristics of a good test case:
o Accurate: Exacts the purpose.
o Economical: No unnecessary steps or words.
o Traceable: Capable of being traced to requirements.
o Repeatable: Can be used to perform the test over and over.
o Reusable: Can be reused if necessary.
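Here is a sketch of what atomic test cases can look like in code, using pytest: each test checks exactly one thing, and positive and negative scenarios are kept separate. The validate_quantity() function and the 1-10 range are assumptions made up for this example:

import pytest

def validate_quantity(qty):
    # Hypothetical unit under test: quantities 1..10 are valid.
    return 1 <= qty <= 10

@pytest.mark.parametrize("qty", [1, 5, 10])   # positive scenarios only
def test_valid_quantity_is_accepted(qty):
    assert validate_quantity(qty)

@pytest.mark.parametrize("qty", [0, 11, 20])  # negative scenarios only
def test_invalid_quantity_is_rejected(qty):
    assert not validate_quantity(qty)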

TEST SCRIPT Fundamentals


A Test Script is a set of instructions (written using a scripting/programming language)
that is performed on a system under test to verify that the system performs as
expected. Test scripts are used in automated testing.
Sometimes, a set of instructions (written in a human language), used in manual
testing, is also called a Test Script, but a better term for that would be a Test Case.
Some scripting languages used in automated testing are:

JavaScript

Perl

Python

Ruby

Tcl

Unix Shell Script

VBScript

There are also many Test Automation Tools/Frameworks that generate the test scripts
for you, without the need for actual coding. Many of these tools have their own
scripting languages (some of them based on a core scripting language). For example,
Sikuli, a GUI automation tool, uses Sikuli Script, which is based on Python. A test script
can be as simple as the one below:
def sample_test_script(self):
    type("TextA")              # type the text "TextA" into the target
    click(ImageButtonA)        # click the control matching image ImageButtonA
    assertExist(ImageResultA)  # assert that image ImageResultA appears on screen

A test execution engine and a repository of test scripts (along with test data) are
collectively known as a Test Harness.
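A toy sketch of a test harness, with invented names throughout: the list of functions plays the role of the test script repository, the dictionary plays the role of the test data, and the loop is a bare-bones execution engine:

test_data = {"username": "testuser", "quantity": 5}

def test_login(data):                  # a test script in the repository
    assert data["username"] == "testuser"

def test_quantity_in_range(data):      # another test script
    assert 1 <= data["quantity"] <= 10

repository = [test_login, test_quantity_in_range]

# The execution engine: run every script and report PASS/FAIL.
for script in repository:
    try:
        script(test_data)
        print(script.__name__, "PASS")
    except AssertionError:
        print(script.__name__, "FAIL")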

SOFTWARE TESTING MYTHS and FACTS


Just as every field has its myths, so does the field of Software Testing. Software testing
myths have arisen primarily due to the following:
Lack of authoritative facts.
Evolving nature of the industry.
General flaws in human logic.
Some of the myths are explained below, along with their related facts:
1. MYTH: Quality Control = Testing.
o FACT: Testing is just one component of software quality control. Quality
Control includes other activities such as Reviews.

2. MYTH: The objective of Testing is to ensure a 100% defect-free product.
o FACT: The objective of testing is to uncover as many defects as possible
while ensuring that the software meets the requirements. Identifying and
getting rid of all defects is impossible.
3. MYTH: Testing is easy.
o FACT: Testing can be difficult and challenging (sometimes, even more so
than coding).
4. MYTH: Anyone can test.
o FACT: Testing is a rigorous discipline and requires many kinds of skills.
5. MYTH: There is no creativity in testing.
o FACT: Creativity can be applied when formulating test approaches, when
designing tests, and even when executing tests.
6. MYTH: Automated testing eliminates the need for manual testing.
o FACT: 100% test automation cannot be achieved. Manual Testing, to some
level, is always necessary.
7. MYTH: When a defect slips, it is the fault of the Testers.
o FACT: Quality is the responsibility of all members/ stakeholders, including
developers, of a project.
8. MYTH: Software Testing does not offer opportunities for career growth.
o FACT: Gone are the days when users had to accept whatever product was
dished out to them, no matter the quality. With the abundance of competing
software and increasingly demanding users, the need for software testers to
ensure high quality will continue to grow. Software testing jobs are hot now.
Management folk tend to have a special affinity for myths and it will be your responsibility as
a software tester to convince them that they are wrong. Put forward your arguments with
relevant examples and numbers to validate your claims.
DEFECT PROBABILITY Fundamentals
Defect Probability (Defect Visibility or Bug Probability or Bug Visibility) indicates the
likelihood of a user encountering the defect / bug.

High: Encountered by all or almost all the users of the feature
Medium: Encountered by about 50% of the users of the feature
Low: Encountered by very few or no users of the feature
Defect Probability can also be denoted in percentage (%).
The measure of Probability/Visibility is with respect to the usage of a feature and not the
overall software. Hence, a bug in a rarely used feature can have a high probability if the bug
is easily encountered by users of the feature. Similarly, a bug in a widely used feature can
have a low probability if the users rarely detect it.
DEFECT REPORT Fundamentals
After uncovering a defect (bug), testers generate a formal defect report. The purpose of a
defect report is to state the problem as clearly as possible so that developers can replicate
the defect easily and fix it.
DEFECT REPORT TEMPLATE
In most companies, a defect reporting tool is used and the elements of a report can vary.
However, in general, a defect report can consist of the following elements.
ID: Unique identifier given to the defect. (Usually automated.)
Project: Project name.
Product: Product name.
Release Version: Release version of the product. (e.g. 1.2.3)
Module: Specific module of the product where the defect was detected.
Detected Build Version: Build version of the product where the defect was detected. (e.g. 1.2.3.5)
Summary: Summary of the defect. Keep this clear and concise.
Description: Detailed description of the defect. Describe as much as possible, but without repeating anything or using complex words. Keep it simple but comprehensive.
Steps to Replicate: Step-by-step description of the way to reproduce the defect. Number the steps.
Actual Result: The actual result you received when you followed the steps.
Expected Results: The expected results.
Attachments: Attach any additional information like screenshots and logs.
Remarks: Any additional comments on the defect.
Defect Severity: Severity of the defect. (See Defect Severity)
Defect Priority: Priority of the defect. (See Defect Priority)
Reported By: The name of the person who reported the defect.
Assigned To: The name of the person assigned to analyze/fix the defect.
Status: The status of the defect. (See Defect Life Cycle)
Fixed Build Version: Build version of the product where the defect was fixed. (e.g. 1.2.3.9)
REPORTING DEFECTS EFFECTIVELY
It is essential that you report defects effectively so that time and effort are not unnecessarily wasted in trying to understand and reproduce the defect. Here are some guidelines:
Be specific:
o Specify the exact action: Do not say something like "Select ButtonB". Do you mean "Click ButtonB" or "Press ALT+B" or "Focus on ButtonB and press ENTER"? Of course, if the defect can be arrived at by using all three ways, it is okay to use a generic term like "Select", but bear in mind that you might just get the fix for the "Click ButtonB" scenario. [Note: This might be a highly unlikely example but it is hoped that the message is clear.]
o In case of multiple paths, mention the exact path you followed: Do not say something like "If you do A and X or B and Y or C and Z, you get D." Understanding all the paths at once will be difficult. Instead, say "Do A and X and you get D." You can, of course, mention elsewhere in the report that D can also be obtained if you do B and Y or C and Z.

o Do not use vague pronouns: Do not say something like "In ApplicationA, open X, Y, and Z, and then close it." What does the "it" stand for: Z, Y, X, or ApplicationA?
Be detailed:
o Provide more information (not less). In other words, do not be lazy.
Developers may or may not use all the information you provide but they sure
do not want to beg you for any information you have missed.
Be objective:
o Do not make subjective statements like "This is a lousy application" or "You fixed it real bad".
o Stick to the facts and avoid the emotions.
Reproduce the defect:
o Do not be impatient and file a defect report as soon as you uncover a defect.
Replicate it at least once more to be sure. (If you cannot replicate it again, try recalling the exact test condition and keep trying. However, if you cannot replicate it after many trials, submit the report for further investigation anyway, stating that you are unable to reproduce the defect anymore and providing any evidence of the defect you have gathered.)
Review the report:
o Do not hit Submit as soon as you write the report. Review it at least once.
Remove any typos.
Software Defect / Bug: Definition, Explanation, Classification, Details:
DEFINITION
A Software Defect / Bug is a condition in a software product which does not meet a software
requirement (as stated in the requirement specifications) or end-user expectations (which
may not be specified but are reasonable). In other words, a defect is an error in coding or
logic that causes a program to malfunction or to produce incorrect/unexpected results.
A program that contains a large number of bugs is said to be buggy.
Reports detailing bugs in software are known as bug reports. (See Defect Report)

Applications for tracking bugs are known as bug tracking tools.
The process of finding the cause of bugs is known as debugging.
The process of intentionally injecting bugs in a software program, to estimate test coverage by monitoring the detection of those bugs, is known as bebugging.

Software Testing proves that defects exist but NOT that defects do not exist.
CLASSIFICATION
Software Defects/ Bugs are normally classified as per:
Severity / Impact (See Defect Severity)
Probability / Visibility (See Defect Probability)
Priority / Urgency (See Defect Priority)
Related Dimension of Quality (See Dimensions of Quality)
Related Module / Component
Phase Detected
Phase Injected
Related Module / Component
Related Module / Component indicates the module or component of the software where the
defect was detected. This provides information on which module / component is buggy or
risky.
Module/Component A

Module/Component B
Module/Component C

Phase Detected
Phase Detected indicates the phase in the software development lifecycle where the defect
was identified.
Unit Testing
Integration Testing
System Testing
Acceptance Testing
Phase Injected
Phase Injected indicates the phase in the software development lifecycle where the bug
was introduced. Phase Injected is always earlier in the software development lifecycle than
the Phase Detected. Phase Injected can be known only after a proper root-cause analysis
of the bug.
Requirements Development
High Level Design
Detailed Design
Coding
Build/Deployment
Note that the categorizations above are just guidelines and it is up to the
project/organization to decide on what kind of categorization to use. In most cases, the
categorization depends on the defect tracking tool that is being used. It is essential that
project members agree beforehand on the categorization (and the meaning of each
categorization) so as to avoid arguments, conflicts, and unhealthy bickering later.
NOTE: We prefer the term "Defect" over the term "Bug" because "Defect" is more comprehensive.

The Differences Between Black Box Testing and White Box Testing are listed below.
Definition:
Black Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is NOT known to the tester.
White Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester.
Levels Applicable To:
Black Box Testing is mainly applicable to higher levels of testing: Acceptance Testing and System Testing.
White Box Testing is mainly applicable to lower levels of testing: Unit Testing and Integration Testing.
Responsibility:
Black Box Testing: generally, independent Software Testers.
White Box Testing: generally, Software Developers.
Programming Knowledge:
Black Box Testing: not required. White Box Testing: required.
Implementation Knowledge:
Black Box Testing: not required. White Box Testing: required.
Basis for Test Cases:
Black Box Testing: Requirement Specifications. White Box Testing: Detail Design.
For a combination of the two testing methods, see Gray Box Testing.
Note: Job Descriptions might vary significantly from project to project or company to company. And, mostly, one is required to do much more than what is stated in the Job Description.

Designation: Senior Software Tester
Department: Quality Control
Line Manager: Software Test Manager
Job Aim: To supervise Software Testers and perform tests to ensure quality
Responsibilities:
Supervision of Software Testers
Review of software requirements
Preparation/review of test plans
Preparation/review of test cases
Execution of tests
Reporting of defects
Preparation of test reports
Essential Skills:
Proficiency in written and spoken English
Organized
Detailed
Skeptical
Advanced knowledge of computers
Teamwork
Desired Skills:
Programming knowledge [This depends on whether a Black Box or a White Box testing methodology is used.]
Leadership
Note: Job Descriptions might vary significantly from project to project or company to company. And, mostly, one is required to do much more than what is stated in the Job Description.

Designation: Software Tester
Department: Quality Control
Line Manager: Senior Software Tester
Job Aim: To perform software tests to ensure quality
Responsibilities:
Review of software requirements
Preparation of test cases
Execution of tests
Reporting of defects
Preparation of test reports
Essential Skills:
Proficiency in written and spoken English
Organized
Detailed
Skeptical
Basic knowledge of computers
Teamwork
Desired Skills:
Programming knowledge [This depends on whether a Black Box or a White Box testing methodology is used.]

180+ Sample Test Cases for Testing Web and Desktop Applications: Comprehensive Testing Checklist
This is a testing checklist for web and desktop applications.
Note: This article is a little long (over 2700 words). My goal is to share one of the most comprehensive testing checklists ever written, and it is not yet done. I'll keep updating this post in the future with more scenarios. If you don't have time to read it now, please feel free to share it with your friends and bookmark it for later.
Make this testing checklist an integral part of the test case writing process. Using this checklist you can easily create hundreds of test cases for testing web or desktop applications. These are all general test cases and should be applicable to almost all kinds of applications. Refer to these tests while writing test cases for your project, and I'm sure you will cover most testing types except the application-specific business rules provided in your SRS documents.
Though this is a common checklist, I recommend preparing a standard testing checklist tailored to your specific needs, using the test cases below in addition to application-specific tests.

Importance of Using a Checklist for Testing:
Maintaining a standard repository of reusable test cases for your application ensures that the most common bugs are caught more quickly.
A checklist helps to quickly complete writing test cases for new versions of the application.
Reusing test cases helps save money on resources needed to write repetitive tests.
Important test cases are always covered, making them almost impossible to forget.
The testing checklist can be referred to by developers to ensure that the most common issues are fixed in the development phase itself.
Few notes to remember:
1) Execute these scenarios with different user roles, e.g. admin user, guest user, etc.
2) For web applications, these scenarios should be tested on multiple browsers like IE, FF, Chrome, and Safari, with versions approved by the client.
3) Test with different screen resolutions like 1024 x 768, 1280 x 1024, etc.
4) The application should be tested on a variety of displays like LCD, CRT, Notebooks, Tablets, and Mobile phones.
5) Test the application on different platforms like Windows, Mac, and Linux operating systems.
Comprehensive Testing Checklist for Testing Web and Desktop Applications:
Assumptions: Assuming that your application supports the following functionality
Forms with various fields
Child windows
Application interacts with database
Various search filter criteria and display results
Image upload
Send email functionality
Data export functionality
General Test Scenarios
1. All mandatory fields should be validated and indicated by asterisk (*) symbol
2. Validation error messages should be displayed properly at correct position
3. All error messages should be displayed in the same CSS style (e.g. using red color)
4. General confirmation messages should be displayed using a CSS style different from the error message style (e.g. using green color)

5. Tooltip text should be meaningful


6. Dropdown fields should have the first entry as blank or text like "Select"
7. Delete functionality for any record on page should ask for confirmation
8. Select/deselect all records options should be provided if page supports record
add/delete/update functionality
9. Amount values should be displayed with correct currency symbols
10. Default page sorting should be provided
11. Reset button functionality should set default values for all fields
12. All numeric values should be formatted properly
13. Input fields should be checked for max field value. Input values greater than
specified max limit should not be accepted or stored in database
14. Check all input fields for special characters
15. Field labels should be standard, e.g. a field accepting the user's first name should be labeled properly as "First Name"
16. Check page sorting functionality after add/edit/delete operations on any record
17. Check for timeout functionality. Timeout values should be configurable. Check
application behavior after operation timeout
18. Check cookies used in an application
19. Check if downloadable files are pointing to correct file paths
20. All resource keys should be configurable in config files or database instead of
hard coding
21. Standard conventions should be followed throughout for naming resource keys
22. Validate markup for all web pages (validate HTML and CSS for syntax errors) to
make sure it is compliant with the standards
23. Application crash or unavailable pages should be redirected to error page
24. Check text on all pages for spelling and grammatical errors
25. Check numeric input fields with character input values. Proper validation
message should appear
26. Check for negative numbers if allowed for numeric fields
27. Check amount fields with decimal number values
28. Check functionality of buttons available on all pages
29. User should not be able to submit a page twice by pressing the submit button in quick succession.
30. Divide by zero errors should be handled for any calculations
31. Input data with first and last position blank should be handled correctly
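As a sketch of how items 13, 14, and 25 above might be automated, consider the hypothetical validators below; the field names, the 50-character limit, and the accepted formats are all assumptions for illustration:

MAX_NAME_LENGTH = 50   # assumed maximum field length (item 13)

def validate_name(value):
    return 0 < len(value) <= MAX_NAME_LENGTH

def validate_age(value):
    # Only digit strings in a plausible range are accepted.
    return value.isdigit() and 0 < int(value) < 130

assert not validate_name("x" * 51)     # item 13: value over the max length
assert validate_name("O'Brien-Smith")  # item 14: allowed special characters
assert not validate_age("abc")         # item 25: characters in a numeric field
assert validate_age("42")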

GUI and Usability Test Scenarios


1. All fields on page (e.g. text box, radio options, dropdown lists) should be aligned
properly
2. Numeric values should be right justified unless specified otherwise
3. Enough space should be provided between field labels, columns, rows, error
messages etc.
4. Scroll bar should be enabled only when necessary
5. Font size, style and color for headline, description text, labels, infield data, and
grid info should be standard as specified in SRS
6. Description text box should be multi-line
7. Disabled fields should be grayed out and user should not be able to set focus on
these fields
8. Upon clicking any input text field, the mouse arrow pointer should change to a text cursor
9. User should not be able to type in drop down select lists
10. Information filled by users should remain intact when there is error message on
page submit. User should be able to submit the form again by correcting the errors
11. Check if proper field labels are used in error messages
12. Dropdown field values should be displayed in defined sort order
13. Tab and Shift+Tab order should work properly
14. Default radio options should be pre-selected on page load
15. Field specific and page level help messages should be available
16. Check if correct fields are highlighted in case of errors
17. Check if dropdown list options are readable and not truncated due to field size
limit
18. All buttons on page should be accessible by keyboard shortcuts and user should
be able to perform all operations using keyboard
19. Check all pages for broken images
20. Check all pages for broken links
21. All pages should have title
22. Confirmation messages should be displayed before performing any update or
delete operation
23. An hourglass (busy indicator) should be displayed when the application is busy
24. Page text should be left justified

25. User should be able to select only one radio option and any combination for
check boxes.
Test Scenarios for Filter Criteria
1. User should be able to filter results using all parameters on the page
2. Refine search functionality should load search page with all user selected search
parameters
3. When at least one filter criterion is required to perform the search operation, make sure a proper error message is displayed when the user submits the page without selecting any filter criterion.
4. When filter criteria selection is not compulsory, the user should be able to submit the page, and the default search criteria should be used to query results
5. Proper validation messages should be displayed for invalid values of filter criteria
Test Scenarios for Result Grid
1. A page loading symbol should be displayed when it is taking more than the default time to load the result page
2. Check if all search parameters are used to fetch data shown on result grid
3. Total number of results should be displayed on result grid
4. Search criteria used for searching should be displayed on result grid
5. Result grid values should be sorted by default column.
6. Sorted columns should be displayed with sorting icon
7. Result grids should include all specified columns with correct values
8. Ascending and descending sorting functionality should work for columns
supported with data sorting
9. Result grids should be displayed with proper column and row spacing
10. Pagination should be enabled when there are more results than the default
result count per page
11. Check for Next, Previous, First and Last page pagination functionality
12. Duplicate records should not be displayed in result grid
13. Check if all columns are visible and horizontal scroll bar is enabled if necessary
14. Check data for dynamic columns (columns whose values are calculated
dynamically based on the other column values)
15. For result grids showing reports, check the "Totals" row and verify the total for every column
16. For result grids showing reports, check the "Totals" row data when pagination is enabled and the user navigates to the next page
17. Check if proper symbols are used for displaying column values, e.g. the % symbol should be displayed for percentage calculations
18. Check result grid data if a date range is enabled
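For item 10, the expected number of pages can be computed up front and compared with what the grid shows, before exercising the Next/Previous/First/Last controls of item 11. A small sketch, assuming a default page size of 20 results (an invented value):

import math

PAGE_SIZE = 20  # assumed default result count per page

def expected_pages(total_results):
    return max(1, math.ceil(total_results / PAGE_SIZE))

assert expected_pages(0) == 1    # an empty grid still shows one page
assert expected_pages(20) == 1   # exactly one full page, no pagination
assert expected_pages(21) == 2   # pagination kicks in beyond the page size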
Test Scenarios for a Window
1. Check if default window size is correct
2. Check if child window size is correct
3. Check if there is any field on page with default focus (in general, the focus
should be set on first input field of the screen)
4. Check if child windows are getting closed on closing parent/opener window
5. If child window is opened, user should not be able to use or update any field on
background or parent window
6. Check window minimize, maximize and close functionality
7. Check if window is re-sizable
8. Check scroll bar functionality for parent and child windows
9. Check cancel button functionality for child window
Database Testing Test Scenarios
1. Check if correct data is getting saved in database upon successful page submit
2. Check values for columns which are not accepting null values
3. Check for data integrity. Data should be stored in single or multiple tables based
on design
4. Index names should be given as per the standards e.g.
IND_<Tablename>_<ColumnName>
5. Tables should have primary key column
6. Table columns should have description information available (except for audit
columns like created date, created by etc.)
7. For every database add/update operation log should be added
8. Required table indexes should be created
9. Check if data is committed to database only when the operation is successfully
completed
10. Data should be rolled back in case of failed transactions (see the sketch after
this list)
11. Database name should be given as per the application type i.e. test, UAT,
sandbox, live (though this is not a standard it is helpful for database maintenance)
12. Database logical names should be given according to database name (again this
is not standard but helpful for DB maintenance)


13. Stored procedures should not be named with prefix sp_
14. Check if values for table audit columns (like createddate, createdby,
updatedate, updatedby, isdeleted, deleteddate, deletedby etc.) are populated
properly
15. Check if input data is not truncated while saving. Field length shown to user on
page and in database schema should be same
16. Check numeric fields with minimum, maximum, and float values
17. Check numeric fields with negative values (for both acceptance and non-acceptance)
18. Check if radio button and dropdown list options are saved correctly in database
19. Check if database fields are designed with correct data type and data length
20. Check if all table constraints like Primary key, Foreign key etc. are implemented
correctly
21. Test stored procedures and triggers with sample input data
22. Input field leading and trailing spaces should be truncated before committing
data to database
23. Null values should not be allowed for Primary key column
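
Scenarios 9, 10 and 22 above can be demonstrated with Python's built-in sqlite3
module. This is a minimal sketch against a throwaway in-memory table, not a
statement about any particular schema:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE users (id INTEGER PRIMARY KEY, "
        "first_name TEXT NOT NULL, last_name TEXT NOT NULL)"
    )

    def save_user(conn, first_name, last_name):
        # Scenario 22: trim leading and trailing spaces before committing.
        first_name, last_name = first_name.strip(), last_name.strip()
        # Scenarios 9 and 10: 'with conn' commits only if the block succeeds
        # and rolls the transaction back if an exception is raised.
        with conn:
            conn.execute(
                "INSERT INTO users (first_name, last_name) VALUES (?, ?)",
                (first_name, last_name),
            )

    save_user(conn, "  Ada  ", " Lovelace ")
    print(conn.execute("SELECT first_name, last_name FROM users").fetchall())
    # [('Ada', 'Lovelace')]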
Test Scenarios for Image Upload Functionality
(Also applicable for other file upload functionality)
1. Check for uploaded image path
2. Check image upload and change functionality
3. Check image upload functionality with image files of different extensions (e.g.
JPEG, PNG, BMP etc.)
4. Check image upload functionality with images having space or any other allowed
special character in file name
5. Check duplicate name image upload
6. Check image upload with image size greater than the max allowed size. Proper
error message should be displayed (see the sketch after this list).
7. Check image upload functionality with file types other than images (e.g. txt, doc,
pdf, exe etc.). Proper error message should be displayed
8. Check if images of specified height and width (if defined) are accepted otherwise
rejected
9. Image upload progress bar should appear for large size images
10. Check if cancel button functionality is working in between upload process

11. Check if file selection dialog shows only supported files listed
12. Check multiple images upload functionality
13. Check image quality after upload. Image quality should not be changed after
upload
14. Check if user is able to use/view the uploaded images
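
Scenarios 3, 6 and 7 above amount to an extension allow-list plus a size check. A
minimal Python sketch; the extension set and the 5 MB limit are assumptions for
illustration:

    import os

    ALLOWED_EXTENSIONS = {".jpeg", ".jpg", ".png", ".bmp"}  # assumed allow-list
    MAX_SIZE_BYTES = 5 * 1024 * 1024                        # assumed 5 MB limit

    def validate_image_upload(filename, size_bytes):
        """Return an error message, or None if the upload is acceptable."""
        ext = os.path.splitext(filename)[1].lower()
        if ext not in ALLOWED_EXTENSIONS:
            # Scenario 7: non-image file types get a proper error message.
            return "Only JPEG, PNG and BMP images are allowed."
        if size_bytes > MAX_SIZE_BYTES:
            # Scenario 6: oversized images get a proper error message.
            return "Image exceeds the maximum allowed size."
        return None

    print(validate_image_upload("virus.exe", 1024))          # rejected
    print(validate_image_upload("photo.png", 10 * 1024**2))  # rejected
    print(validate_image_upload("photo.png", 1024))          # None (accepted)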
Test Scenarios for Sending Emails
(Test cases for composing or validating emails are not included)
(Make sure to use dummy email addresses before executing email related tests)
1. Email template should use standard CSS for all emails
2. Email addresses should be validated before sending emails
3. Special characters in email body template should be handled properly
4. Language specific characters (e.g. Russian, Chinese or German language
characters) should be handled properly in email body template
5. Email subject should not be blank
6. Placeholder fields used in email template should be replaced with actual values
e.g. {Firstname} {Lastname} should be replaced with each individual's first and last
name properly for all recipients (see the sketch after this list)
7. If reports with dynamic values are included in email body, report data should be
calculated correctly
8. Email sender name should not be blank
9. Emails should be checked in different email clients like Outlook, Gmail, Hotmail,
Yahoo! mail etc.
10. Check send email functionality using TO, CC and BCC fields
11. Check plain text emails
12. Check HTML format emails
13. Check email header and footer for company logo, privacy policy and other links
14. Check emails with attachments
15. Check send email functionality to single, multiple or distribution list recipients
16. Check if reply to email address is correct
17. Check sending high volume of emails
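
Scenario 6 above can be exercised with Python's standard string.Template. A
minimal sketch; note that Template uses $Firstname syntax where the checklist
shows {Firstname}, purely for illustration:

    from string import Template

    body_template = Template("Dear $Firstname $Lastname,\nYour report is attached.")

    recipients = [
        {"Firstname": "Ada", "Lastname": "Lovelace"},
        {"Firstname": "Alan", "Lastname": "Turing"},
    ]

    for r in recipients:
        body = body_template.safe_substitute(r)
        # A leftover '$' would mean an unreplaced placeholder reached a recipient.
        assert "$" not in body, "Unreplaced placeholder left in email body"
        print(body)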
Test Scenarios for Excel Export Functionality
1. File should get exported in proper file extension
2. File name for the exported Excel file should be as per the standards e.g. if the
file name uses a timestamp, it should get replaced properly with the actual
timestamp at the time of exporting the file (see the sketch after this list)

3. Check for date format if exported Excel file contains date columns
4. Check number formatting for numeric or currency values. Formatting should be
same as shown on page
5. Exported file should have columns with proper column names
6. Default page sorting should be carried in exported file as well
7. Excel file data should be formatted properly with header and footer text, date,
page numbers etc. values for all pages
8. Check if data displayed on page and exported Excel file is same
9. Check export functionality when pagination is enabled
10. Check if export button is showing proper icon according to exported file type
e.g. Excel file icon for xls files
11. Check export functionality for files with very large size
12. Check export functionality for pages containing special characters. Check if
these special characters are exported properly in Excel file
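
Scenarios 1, 2 and 5 above can be sketched with the third-party openpyxl package
(assumed available; any Excel-writing library would do):

    from datetime import datetime
    from openpyxl import Workbook  # third-party package, assumed installed

    # Scenario 2: the {timestamp} token in the name pattern must be replaced at export time.
    name_pattern = "report_{timestamp}.xlsx"
    file_name = name_pattern.format(timestamp=datetime.now().strftime("%Y%m%d_%H%M%S"))

    wb = Workbook()
    ws = wb.active
    ws.append(["Name", "Amount"])  # scenario 5: proper column names in the header row
    ws.append(["Ada", 120.50])
    wb.save(file_name)             # scenario 1: exported with the proper .xlsx extension
    print("Exported", file_name)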
Performance Testing Test Scenarios
1. Check if page load time is within acceptable range (see the sketch after this list)
2. Check page load on slow connections
3. Check response time for any action under light, normal, moderate and heavy
load conditions
4. Check performance of database stored procedures and triggers
5. Check database query execution time
6. Check for load testing of application
7. Check for stress testing of application
8. Check CPU and memory usage under peak load condition
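
Scenario 1 above can be spot-checked with the third-party requests library
(assumed available); the 3-second threshold and the URL are placeholders. Note
that response.elapsed measures server response time only, not full in-browser
rendering:

    import requests  # third-party HTTP client, assumed installed

    ACCEPTABLE_SECONDS = 3.0      # assumed threshold for scenario 1
    url = "https://example.com/"  # placeholder URL

    response = requests.get(url, timeout=10)
    load_time = response.elapsed.total_seconds()
    print(f"{url} responded in {load_time:.2f}s")
    assert load_time <= ACCEPTABLE_SECONDS, "Page load time outside acceptable range"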
Security Testing Test Scenarios
1. Check for SQL injection attacks (see the sketch after this list)
2. Secure pages should use HTTPS protocol
3. Page crash should not reveal application or server info. Error page should be
displayed for this
4. Escape special characters in input
5. Error messages should not reveal any sensitive information
6. All credentials should be transferred over an encrypted channel
7. Test password security and password policy enforcement
8. Check application logout functionality
9. Check for Brute Force Attacks

10. Cookie information should be stored in encrypted format only


11. Check session cookie duration and session termination after timeout or logout
12. Session tokens should be transmitted over secured channel
13. Password should not be stored in cookies
14. Test for Denial of Service attacks
15. Test for memory leakage
16. Test unauthorized application access by manipulating variable values in browser
address bar
17. Test file extension handling so that exe files are not uploaded and executed on
server
18. Sensitive fields like passwords and credit card information should not have auto
complete enabled
19. File upload functionality should use file type restrictions and also anti-virus for
scanning uploaded files
20. Check if directory listing is prohibited
21. Password and other sensitive fields should be masked while typing
22. Check if forgot password functionality is secured with features like temporary
password expiry after specified hours and security question is asked before
changing or requesting new password
23. Verify CAPTCHA functionality
24. Check if important events are logged in log files
25. Check if access privileges are implemented correctly
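
For scenario 1, the standard defense a tester should expect to find is
parameterized queries. A minimal Python/sqlite3 sketch; the table and column
names are illustrative:

    import sqlite3

    def find_user(conn, username):
        # Bound parameters make input such as "x' OR '1'='1" arrive as data,
        # not as executable SQL, which is exactly what scenario 1 probes for.
        return conn.execute(
            "SELECT id, username FROM users WHERE username = ?",
            (username,),
        ).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('ada')")
    print(find_user(conn, "x' OR '1'='1"))  # [] - the injection attempt matches nothing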
Penetration testing test cases: I've listed around 41 test cases for penetration
testing on this page.
I'd really like to thank Devanshu Lavaniya (Sr. QA Engineer working for I-link
Infosoft) for helping me to prepare this comprehensive testing checklist.
I've tried to cover all standard test scenarios for web and desktop application
functionality. But still I know this is not a complete checklist. Testers on different
projects have their own testing checklist based on their experience.

Software Testing

Software testing is an empirical investigation conducted to provide stakeholders
with information about the quality of the product or service under test[1], taking
into account the operational context in which it will be used. Software testing
provides an objective and independent view of the product under development,
thus giving the business the opportunity to understand and evaluate the risks
associated with implementing the software product. Testing techniques include,
but are not limited to, executing the program or application with the goal of
identifying software defects/errors. Software testing can also be defined as the
process of validating and verifying that a software program/application/product
(1) meets the business and technical requirements that guided its design and
implementation; and (2) runs and behaves according to expectations.
Depending on the chosen testing methodology, software testing can be performed
at any stage of the development process, although the bulk of the testing effort is
usually applied after the requirements have been formalized and the actual
implementation/coding has been completed.
Overview
There is no testing process that can identify all the possible defects a software
product may contain. Instead, such a process can provide a critical or comparative
view that contrasts certain aspects of the product (such as its current state and
behavior) with the metrics and constraints that serve as acceptance criteria. These
criteria may be derived from technical specifications, comparable products,
previous versions of the same product, expectations stated by the user or
customer, relevant standards, applicable laws, etc.
Every software product has a characteristic audience. For example, the audience
for a video game is completely different from the audience for a banking product.
Therefore, when an organization develops or invests in the development of a
software product, it must be able to assess the acceptability of the product from
the perspective of end users, the target audience, purchasers and other
stakeholders. Software testing is the process that makes this assessment possible
in as objective a manner as possible.
A 2002 study by NIST showed that software defects cost the U.S. economy an
estimated 59.5 billion dollars annually. More than a third of these losses could be
avoided with more serious investment in the testing process. [2]
History
Gelperin and Hetzel analyzed the evolution of the concept of testing a computer
system and divided it into several stages, according to the philosophy underlying
the concept:

1945 - 1956 - Debugging-oriented

Testing computer programs is an activity that emerged together with their
development. In the era of the first computer systems, 1945-1956, the testing
process was mainly oriented toward the hardware components of the system, and
software defects were generally considered less important. The people who wrote
the code also handled testing, although not in a methodical manner, and they
called this activity checking. For many of the scientists who wrote computer
programs there was no clear distinction between the activities of writing code,
testing and debugging. The first reference to the concept of a "bug" in a computer
system dates from this early period, when a computer operator discovered a moth
on a malfunctioning electronic circuit and attached it to the operations log, with
the note that the first actual "bug" in a computer had been found. The term "bug"
itself is older, dating from the time of Thomas Edison.
The first scientist to address the notion of testing a program was Alan Turing, who
in 1949 published an article on the theoretical principles of verifying that a
program functions correctly. In 1950, Turing published another article, raising the
question of artificial intelligence and defining a test that a computer system must
pass: its answers to questions posed by an operator (the tester) must not be
distinguishable from the answers given by a human (the reference). This work can
be considered the first to treat testing as a concept distinct from the activities of
writing code and debugging.

1957 - 1978 - Demonstration-oriented[3]

As computer systems grew in number, complexity and cost, the testing process
took on ever greater importance. Large-scale testing became necessary because of
the growing economic impact that undetected defects could have. People involved
in software development also became more aware of the risks associated with
program defects and began to place more emphasis on testing and fixing defects
before they affected the delivered product. During this period, the terms testing
and debugging overlapped, both referring to the efforts made to discover, identify
and fix defects in computer systems. The goal of the testing process was to
demonstrate that the program functioned correctly, that is, the absence of errors.

1979 - 1982 - Destruction-oriented[4]

The concepts of debugging and testing became more rigorously separated with the
publication of Glenford J. Myers' The Art of Software Testing in 1979. Myers
distinguished between debugging, an activity that belongs to program
development, and testing, the activity of running a program with the declared
goal of finding errors. He also argued that in demonstration-based testing there
is a danger that the operator will unconsciously choose the set of parameters
that ensures the system functions correctly, creating the risk that many defects
will go unnoticed. In his approach, Myers also proposed a series of analysis and
control activities that, together with the testing process, would raise the
quality level of computer systems.

1983 - 1987 - Evaluation-oriented[5]

In 1983, the National Bureau of Standards of the United States published a set of
practices for the verification, validation and testing of computer programs. This
methodology, aimed specifically at U.S. federal institutions, included analysis,
evaluation and testing methods to be applied throughout the application life
cycle. The best-practices guide suggested choosing among various verification and
validation methods, depending on the characteristics of each project, in order to
increase the overall quality of the product. In the 1970s the level of
professionalism of the people involved in testing had grown noticeably. Dedicated
positions of tester, test manager and test analyst appeared, along with
professional organizations for those working in software testing, as well as
specialized publications, books and articles. More importantly, the American
institutions ANSI and IEEE began developing standards to formalize the testing
process, an effort that resulted in standards such as ANSI/IEEE STD 829, in 1983,
which established formats to be used for creating test documentation.

1988 - present - Prevention-oriented [6]

The earlier standards were developed and improved starting in 1987, when the
IEEE published a comprehensive methodology that applies to all phases of the
program life cycle. Testing must be done in all phases of work, in parallel with
programming, and has the following main activities: planning, analysis, design,
implementation, execution and maintenance. Following this methodology lowers
the costs of developing and maintaining a system by reducing the number of
defects that reach the final product undetected.
Modern methodologies
Modern methodologies such as Agile, Scrum, eXtreme Programming and others
place particular emphasis on the testing process, integrating it deeply with the
other activities of software development: planning, design, programming,
evaluation and control.
Topics in software testing
Purpose
The primary purpose of the testing process is (1) to identify software errors and
(2) to isolate and fix the defects that caused those errors. This is often a
non-trivial exercise, because testing cannot demonstrate with 100% certainty that
the product functions correctly under all conditions; it can only demonstrate that
the product does not function correctly under certain conditions. [7] The scope of
testing often includes both static examination of the source code and examination
of the code in execution in various circumstances and under various conditions.
The aspects of the code that remain under the watchful eye of the test/software
engineer during this exercise are (1) whether the code does what it is supposed
to do and (2) whether the code does what it needs to do. The information obtained
from the testing process can be used to correct and improve the process by which
the software product is developed.[8]
Defects and failures
Not all software defects are caused by mistakes in the code. A widespread source
of costly defects is gaps and ambiguities at the requirements level; for example,
"hidden" or incomplete requirements can result in a set of errors introduced as
early as the design phase by the program's designer. [9] Non-functional
requirements such as testability, scalability, maintainability, usability,
performance and security are a common source of such errors.
Software defects manifest as the result of the following process: a programmer
commits an error (mistake), which in turn results in a defect (bug) in the
program's source code; if this defect is executed, under certain conditions the
system will produce a wrong result, which subsequently leads to a failure of the
program.[10] Not all defects lead to program failure. For example, defects
contained in a section of "dead" code will never cause the program to fail.
Defects can manifest as failures when the environment in which the program runs
changes. Examples of such changes include moving to a new hardware platform,
alterations in the data source, or interaction with different software.[10] A
single defect can result in a wide spectrum of symptoms through which failures
manifest.
Compatibility
Software applications often fail because of compatibility problems caused both by
their interaction with other applications or operating systems and by the
non-conformities that arise from one version of the program to the next in an
incremental product development process. Incompatibilities between versions occur
because, at the time the code was written, the programmer considered, or tested,
the product for only one operating system (or a restricted set of operating
systems), without taking into account the problems that can arise when the
execution context changes. An unwanted result of this can be the following: the
latest version of the program may no longer be compatible with the
software/hardware combination used earlier, or may no longer be compatible with
another system for which compatibility is extremely important. Compatibility
testing is therefore a "prevention-oriented strategy", which clearly associates
it with the last of the testing phases proposed by Dave Gelperin and William C.
Hetzel [6].
Combinations of input data and preconditions
A fundamental problem of software testing has been, and remains, the
impossibility of testing a product using all possible combinations of input data
and preconditions (initial state). This is true even for simple products.
What are 5 common problems in the software development process?

poor requirements or user stories - if these are unclear, incomplete, too
general, or not testable, there may be problems.

unrealistic schedule - if too much work is crammed in too little time, problems
are inevitable.

inadequate testing - no one may know whether or not the software is any
good until customers complain or systems crash.

misunderstandings about dependencies.

miscommunication - if developers don't know what's needed or stakeholders
have erroneous expectations, problems can be expected.

What are 5 common solutions to software development problems?

solid requirements/user stories - clear, complete, appropriately detailed, cohesive, attainable, testable
specifications that are agreed to by all players. In 'agile'-type environments, continuous close
coordination with product owners or their representatives is necessary to ensure that
changing/emerging requirements are understood.

realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes,
and documentation; personnel should be able to complete the project without burning out, and be
able to work at a sustainable pace.

adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for
testing and bug-fixing. 'Early' testing could include static code analysis/testing, test-first development,
unit testing by developers, built-in testing and diagnostic capabilities, etc. Automated testing can
contribute significantly if effectively designed and implemented as part of an overall testing strategy.

stick to initial requirements where feasible - be prepared to defend against excessive changes and
additions once development has begun, and be prepared to explain consequences. If changes are
necessary, they should be adequately reflected in related schedule changes. If possible, work closely
with customers/end-users to manage expectations. In agile environments, requirements may change
often, requiring that true agile processes be in place and followed.

communication - require walkthroughs/inspections/reviews when appropriate; make extensive use of
group communication tools - groupware, wikis, bug-tracking tools, change management tools,
audio/video conferencing, etc.; ensure that information/documentation/user stories are available,
up-to-date, and appropriately detailed; promote teamwork and cooperation; use prototypes, frequent
deliveries, and/or continuous communication with end-users if possible to clarify expectations. In
effective agile environments most of these should be taking place.

What is software 'quality'?


Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable. However, quality is obviously a
subjective term. It will depend on who the 'customer' is and their overall influence in the scheme
of things. A wide-angle view of the 'customers' of a software development project might include
end-users, customer acceptance testers, customer contract officers, customer management, the
development organization's management/accountants/testers/salespeople, future software
maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will
have their own slant on 'quality' - the accounting department might define quality in terms of
profits while an end-user might define quality as user-friendly and bug-free.

What is 'good code'?


'Good code' is code that works, is reasonably bug-free, secure, and is readable and
maintainable. Some organizations have coding 'standards' that all developers are
supposed to adhere to, but everyone has different ideas about what's best, or what
is too many or too few rules. There are also various theories and metrics, such as
McCabe Complexity metrics. It should be kept in mind that excessive use of
standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy
checks', pair programming, code analysis tools, etc. can be used to check for
problems and enforce standards.
For example, in C/C++ coding, here are some typical ideas to consider in setting
rules/standards; these may or may not apply to a particular situation:

minimize or eliminate use of global variables.

use descriptive function and method names - use both upper and lower case,
avoid abbreviations, use as many characters as necessary to be adequately
descriptive (use of more than 20 characters is not out of line); be consistent
in naming conventions.

use descriptive variable names - use both upper and lower case, avoid
abbreviations, use as many characters as necessary to be adequately
descriptive (use of more than 20 characters is not out of line); be consistent
in naming conventions.

function and method sizes should be minimized; less than 100 lines of code is
good, less than 50 lines is preferable.

function/method descriptions should be clearly spelled out in comments
preceding a function's/method's code.

organize code for readability.

use whitespace generously - vertically and horizontally

each line of code should contain 70 characters max.

one code statement per line.

coding style should be consistent throughout a program (e.g., use of
brackets, indentations, naming conventions, etc.)

in adding comments, err on the side of too many rather than too few
comments; a common rule of thumb is that there should be at least as many
lines of comments (including header blocks) as lines of code.

no matter how small, an application should include documentation of the
overall program function and flow (even a few paragraphs is better than
nothing); or if possible a separate flow chart and detailed program
documentation.

make extensive use of error handling procedures and status and error
logging.

for C++, to minimize complexity and increase maintainability, avoid too
many levels of inheritance in class hierarchies (relative to the size and
complexity of the application). Minimize use of multiple inheritance, and
minimize use of operator overloading (note that the Java programming
language eliminates multiple inheritance and operator overloading.)

for C++, keep class methods small, less than 50 lines of code per method is
preferable.

for C++, make liberal use of exception handlers

What is 'good design'?


'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal
design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and
maintainable; is robust with sufficient error-handling and status logging capability; and works as expected when
implemented. Good functional design is indicated by an application whose functionality can be traced back to
customer and end-user requirements or user stories. (See further discussion of functional and internal design in
FAQ 'What's the big deal about requirements?'). For programs that have a user interface, it's often a good idea
to assume that the end user will have little computer knowledge and may not read a user manual or even the
on-line help; some common rules-of-thumb include:

the program should act in a way that least surprises the user

it should always be evident to the user what can be done next and how to exit

the program shouldn't let the users do something stupid without warning them.

What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?

SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense
Department to help improve software development processes.

CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'),
developed by the SEI and as of January 2013 overseen by the CMMI Institute at Carnegie Mellon
University. In the 'staged' version, it's a model of 5 levels of process 'maturity' that help determine
effectiveness in delivering quality software. CMMI models are "collections of best practices that help
organizations to improve their processes." It is geared to larger organizations such as large U.S.
Defense Department contractors. However, many of the QA processes involved are appropriate to any
organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by
undergoing assessments by qualified auditors. CMMI V1.3 (2010) also supports Agile development
processes.

ISO = 'International Organization for Standardization' - The ISO 9001:2015 standard (the latest in the
periodically-updated ISO standard) concerns quality systems that are assessed by outside auditors,
and it applies to many kinds of production and manufacturing organizations, not just software. It covers
documentation, design, development, production, testing, installation, servicing, and other processes.
The full set of standards consists of: (a)ISO 9001-2015 - Quality Management Systems:
Requirements; (b)ISO 9000-2015 - Quality Management Systems: Fundamentals and Vocabulary;
(c)ISO 9004-2009 - Quality Management Systems: Guidelines for Performance Improvements. (d)ISO
19011-2011 - Guidelines for auditing management systems. To be ISO 9001 certified, a third-party
auditor assesses an organization, and certification is typically good for about 3 years, after which a
complete reassessment is required. Note that ISO certification does not necessarily indicate quality
products - it indicates only that documented processes are followed. There are also other
software-related ISO standards such as ISO/IEC 25010:2011, which includes a 'quality in use model' composed
of five characteristics and a 'product quality model' that covers eight main characteristics of software.
Also see http://www.iso.org/ for the latest information. In the U.S. the standards can also be purchased
via the ASQ web site at http://asq.org/quality-press/
ISO/IEC 25010 is a software quality evaluation standard that defines (a) a 'quality in use model' of five
characteristics that relate to the outcome of interaction when a product is used in a particular context of
use, and (b) a 'product quality model' composed of eight characteristics that relate to static properties
of software and dynamic properties of the computer system.
ISO/IEC/IEEE 29119 series of standards for software testing.
ISO/IEC/IEEE 29119-1: Concepts & Definitions (published Sept. 2013)
ISO/IEC/IEEE 29119-2: Test Processes (published Sept. 2013)
ISO/IEC/IEEE 29119-3: Test Documentation (published Sept. 2013)
ISO/IEC/IEEE 29119-4: Test Techniques (published 2015)
ISO/IEC/IEEE 29119-5: Keyword-Driven Testing (published 2016)

IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such
as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for
Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance
Plans' (IEEE/ANSI Standard 730), and others.

ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.;
publishes some software-related standards in conjunction with the IEEE and ASQ (American Society
for Quality).

What is the 'software life cycle'?


The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes
aspects such as initial concept, requirements analysis, functional design, internal design, documentation
planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting,
phase-out, agile sprints, and other aspects.

SOFTWARE TESTING JOKES: The following jokes related to software testing have
been compiled from forwarded emails and internet resources. Thanks to the ones
who thought of them first.

The Height Of A Flagpole


A group of managers were given the assignment of measuring the height of a
flagpole. So they go out to the flagpole with ladders and tape measures and they're
struggling to get the correct measurement; dropping the tape measures and falling
off the ladders.
A tester comes along and sees what they're trying to do, walks over, pulls down the
flagpole, lays it flat, measures it from end to end, gives the measurement to one of
the managers and walks away.
After the tester is gone, one manager turns to another and laughs, "Isn't that just
like a tester? We're looking for the height and he gives us the length."

Damage Testing
The Aviation Department had a unique device for testing the strength of windshields
on airplanes. The device was a gun that launched a dead chicken at a plane's
windshield at approximately the speed the plane flies. The theory was that if the
windshield does not crack from the impact of the chicken, it will survive a real
collision with a bird during flight.
The Railroad Department heard of this device and decided to use it for testing a
windshield on a locomotive they were developing.
So the Railroad Department borrowed the device, loaded a chicken and fired at the
windshield of the locomotive. The chicken not only shattered the windshield but also
went right through and made a hole on the back wall of the engine cab, the
unscathed chicken's head popping out of the hole. The Railroad Department was
stunned and contacted the Aviation Department to recheck the test to see if
everything was done correctly.
The Aviation Department reviewed the test thoroughly and sent a report. The report
consisted of just one recommendation and it read, "Use a thawed chicken."

A Tester's Courage
The Director of a software company proudly announced that flight software
developed by the company was installed in an airplane and the airline was offering
free first flights to the members of the company. "Who is interested?" the Director
asked. Nobody came forward. Finally, one person volunteered. The brave Software
Tester stated, "I will do it. I know that the airplane will not be able to take off."

Light Bulb
Question: How many testers does it take to change a light bulb?
Answer: None. Testers do not fix problems; they just find them.
Question: How many programmers does it take to change a light bulb?
Answer 1: What's the problem? The bulb at my desk works fine!
Answer 2: None. That's a hardware problem.

The Glass

To an optimist, the glass is half full.

To a pessimist, the glass is half empty.

To a good tester, the glass is twice as big as it needs to be.

Testing Definition
To tell somebody that he is wrong is called criticism. To do so officially is called
testing.

Sign On Testers' Doors


"Do not disturb. Already disturbed!"

Words
Developer: "There is no I in TEAM"
Tester: "We cannot spell BUGS without U"

Experience Counts
There was a software tester who had an exceptional gift for finding bugs. After
serving his company for many years, he happily retired. Several years later, the
company contacted him regarding a bug in a multi-million-dollar application which
no one in the company was able to reproduce. They tried for many days to replicate
the bug but without success.
In desperation, they called on the retired software tester and after much persuasion
he reluctantly took the challenge.
He came to the company and started studying the application. Within an hour, he
provided the exact steps to reproduce the problem and left. The bug was then fixed.
Later, the company received a bill for $50,000 from the software tester for his
service. The company was stunned with the exorbitant bill for such a short duration
of service and demanded an itemized accounting of his charges.
The software tester responded with the itemization:

Bug Report: $1

Knowing where to look: $49,999

Sandwich
Two software testers went into a diner and ordered two drinks. Then they produced
sandwiches from their briefcases and started to eat. The owner became quite
concerned and marched over and told them, "You cannot eat your own sandwiches
in here!"
The testers looked at each other, shrugged their shoulders and then exchanged
sandwiches.

Signs That You're Dating A Tester

Your love letters get returned to you marked up with red ink, highlighting your
grammar and spelling mistakes.

When you tell him that you won't change something he has asked you to
change, he'll offer to allow you two other flaws in exchange for correcting this
one.

When you ask him how you look in a dress, he'll actually tell you.

When you give him the "It's not you, it's me" breakup line, he'll agree with
you and give the specifics.

He won't help you change a broken light bulb because his job is simply to
report and not to fix.

He'll keep bringing up old problems that you've since resolved just to make
sure that they're truly gone.

In the bedroom, he keeps probing the incorrect inputs.

Who Is Who

A Project Manager is the one who thinks 9 women can deliver a baby in 1
month.

An Onsite Coordinator is the one who thinks 1 woman can deliver 9 babies in
1 month.

A Developer is the one who thinks it will take 18 months to deliver 1 baby.

A Marketing Manager is the one who thinks he can deliver a baby even if no
man and woman are available.

A Client is the one who doesn't know why he wants a baby.

A Tester is the one who always tells his wife that this is not the right baby.

Programmer Responses
Some sample replies that you get from programmers when their programs do not
work:

"It works fine on MY computer"

"It worked yesterday."

"It must be a hardware problem."

"What did you type in wrong to get it to crash?"

"You must have the wrong version."

"Somebody must have changed my code."

"Why do you want to do it that way?"

"I thought I fixed that."

Assessment Of An Opera
A CEO of a software company was given a ticket for an opera. Since he was unable
to go, he passed the invitation to the company's Quality Assurance Manager.
The next morning, the CEO asked him how he enjoyed it, and he was handed a
report, which read as follows:
For a considerable period, the oboe players had nothing to do. Their number should
be reduced, and their work spread over the whole orchestra, thus avoiding peaks of
inactivity. All twelve violins were playing identical notes. This seems unnecessary
duplication, and the staff of this section should be drastically cut. If a large volume
of sound is really required, this could be obtained through the use of an amplifier.
Much effort was involved in playing the demi-semiquavers. This seems an excessive
refinement, and it is recommended that all notes be rounded up to the nearest
semiquaver. No useful purpose is served by repeating with horns the passage that
has already been handled by the strings. If all such redundant passages were
eliminated, the concert could be reduced from two hours to twenty minutes.

The Search
Under a streetlight, on a very dark night, a software tester was looking for a set of
lost keys.
A policeman came by, asked him about the object of his search, and joined him to
help. After the two had searched for some time, the policeman asked, "Are you sure
you lost them here?"

"Oh, no," said the software tester. "I lost the keys somewhere else."
"Then why are you looking for them over here?" the policeman asked.
"Because this is where the light is!" the software tester replied.
Moral: Do not be so stupid that you search for bugs only at the obvious places.

Disney Password
A person with a developer background was hired as a software tester and assigned
to a Disney website project. On reviewing his test data for the login feature, it was
found that he had MickeyDonaldGoofyPluto for the password field. Amused, his
manager asked him why.
"It says the password needs to have at least four characters," he replied.

Food Testing

THE GAG TEST: Anything that makes you gag is spoiled.

EGGS: When something starts pecking its way out of the shell, the egg is
probably past its prime.

DAIRY PRODUCTS: Milk is spoiled when it starts to look like yogurt. Yogurt is
spoiled when it starts to look like cottage cheese. Cottage cheese is spoiled
when it starts to look like regular cheese. Regular cheese is nothing but
spoiled milk anyway and can't get any more spoiled than it is already.

MEAT: If opening the refrigerator door causes stray animals from a three-block radius to congregate outside your house, the meat is spoiled.

BREAD: Fuzzy and hairy looking white or green growth areas are a good
indication that your bread has turned into a pharmaceutical laboratory
experiment.

CANNED GOODS: Any canned goods that have become the size or shape of a
softball should be disposed of.

GENERAL RULE OF THUMB: Most food cannot be kept longer than the average
life span of a hamster. Keep a hamster in or nearby your refrigerator to gauge
this.

Tickle Me Toys
There is a factory that makes Tickle Me toys. The toy laughs when you tickle it
under the arms.

Jane is hired at the factory and she reports for her first day promptly. The next day
there is a knock at the Personnel Manager's door. The Foreman throws open the
door and begins to rant about the new employee. He complains that she is
incredibly slow and the whole line is delayed, putting the entire production line
behind schedule.
The Personnel Manager decides he should see this for himself, so the two men
march down to the factory floor. When they get there the line is so backed up that
there are Tickle Me toys all piling up. At the end of the line stands a nervous Jane
surrounded by mountains of Tickle Me toys.
She has a roll of thread and a huge bag of small marbles. The two men watch in
amazement as she cuts a little piece of thread, wraps it around two marbles and
begins to carefully sew the little package between the toy's legs.
The Personnel Manager bursts into laughter. After several minutes of hysterics he
pulls himself together, approaches Jane and says, "I'm sorry, Jane, but I think you
misunderstood the instructions I gave you yesterday. Your job was to give the toys
two test tickles each."

Software Release Day


Damn! It's a Software Release day on a Friday.
02:00 PM

Developers provide the last build with the fixes to 2 critical bugs.

Testers run a smoke test and find that a major feature is missing. Normally,
the build is not accepted if the smoke test fails, but they continue.

It is found that one of the critical bugs is not fixed. Instead 3 more minor bugs
have been introduced. Luckily, another critical bug is verified as fixed.

There is no time for regression test.

07:00 PM

Developers want to go home but can't.

Testers want to go home but can't.

Developers argue that the 3 minor bugs are not bugs but enhancement
requests and that the missing major feature will not be noticed by the
end-users.

Project Manager says that they will be mentioned as known issues but the
critical bug needs to be fixed anyhow.

10:00 PM

Developers provide the "really last" build with the fix for the critical bug and
go home.

Testers have no time to run smoke test or regression tests. They just verify
the fix for the critical bug.

11:00 PM

An email is received from the Account Manager: "It's about to be afternoon
here and we promised the client a delivery this morning. Where is the
release?"

11:58 PM

Testers reluctantly sign off the build and go home.

Project Manager makes the release, deliberately omitting mention of the
known issues.

Next Day

Guess what!

Morale-O-Meter of a Software Tester


A Software Tester's morale is as stable as a wave in an ocean.

Tilt your head and see the happy person.


A sample day in the life of a software tester.

8:00 AM: Reach office; Smile at everyone; Finish morning office chores

9:00 AM: Test

10:00 AM: Draw cartoons during project meeting

11:00 AM: Test; Argue with them; Randomly click the mouse button multiple
times

12:00 PM: Conduct Bland Taste Test at the canteen

1:00 PM: Test; Document; Test

2:00 PM: Doubt people's commitment to quality; Get upset

3:00 PM: Test; Rest; Test

4:00 PM: Receive the so-called "final" build; Curse the delay; Conduct Sulk
Test

5:00 PM: Test; Protest against the decision to make a release despite such
poor quality

6:00 PM: Detest

7:00 PM: Leave office; Go home/bar

8:00 PM: Kill thousands in WoW/Get drunk

Sorry if we scared the newbies. Take it easy, folks.


Software Testing Jobs: They are not so easy!
So you have decided to enter into or build a career in the field of software testing?
Great! We provide you with some answers/links to your basic queries.
What qualities do I need to possess to enter into / survive in the software
testing field?
Entering any field is easier than surviving in it. In addition to all the good qualities
required of an employee, you will need to be intelligent, detail-oriented, organized,
skeptical and tough. The competition is tight and you have to have an edge all the
time. The playing field is rough and you have to have enough courage to stay
smiling. If this has already scared you, a software testing job is not your forte; your
place is somewhere else.
What will my work be like?
This field is surprisingly vast and your specific job responsibilities will differ
across positions and companies. However, the crux of your job will mainly
involve testing software (Surprised?).
You can view the following sample job descriptions:

Software Tester Sample Job Description

Senior Software Tester Sample Job Description

Software Test Manager Sample Job Description

There are many other specialized positions across Manual Testing, Automated
Testing and Test Management.
How much will I earn?
Yes, money does matter and this field does pay well (not extremely well, but well
enough). The following sites provide you with a rough idea. However, remember
that money is secondary to self-satisfaction.

ITJobsWatch.co.uk

SoftwareTestingInstitute.com

Indeed.com

Payscale.com

HotJobs.Yahoo.com

Where can I find software testing job openings?


There are so many sites out there that the list here is minimal. You can always do an
internet search for an exhaustive list. Remember that one need not always wait for
job vacancies to submit an application; you can always submit an expression of
interest to a company and hope that you will be kept in a pool for future fulfillment.
Jobs, like fortunes, seldom fall into your lap from the sky; you need to be proactive.

QAJobs.net

SoftwareTestingJobs.net

SoftwareTestingJobs.com

TestingJobz.com

Indeed.com

JobCentral.com

Monster.com

HotJobs.Yahoo.com

Jobs.com

ITJobs.com

ITJobs.net

ComputerJobs.com

I am a fresher, what should I do?


Study the basics of software testing. Read some articles on the basic concepts of
software testing on this site, or browse through Wikipedia or some good internet
resource or book. Do not go to the interview without knowing the differences
between quality assurance and quality control. Make some conscious effort to
understand the fundamentals of the software testing domain.
I have some IT experience, what should I do?
Great. All that you need to do now is what we have mentioned above for a fresher, if
you haven't done so already. Then, in your CV, highlight your technical expertise.
Also, a few achievements on quality will be to your advantage. However, do not
claim something you do not have a right to. Bluffing will only take you as far as the
first interview appointment.

What should I watch out for?

If the management of the company is not very much concerned about quality
or does not know much about it, then you and your team might have a tough
time.

If you and your team will have to rely on developers for everything, then
there is something wrong. Instead, the developers should be relying on you
for product knowledge, requirements understanding etc.

If the company does not have some sort of quality assurance in place, your
efforts on quality control will be either worthless or extremely painful.

If you or the company thinks that software testing is for the non-performers,
then all of you are doomed.

Who should NOT apply for a software testing job?

If you have poor communication skills (you will need to do a lot more
writing and arguing than you can imagine)

If you do not like minute work (you will have to investigate things through a
magnifying glass while keeping a bird's-eye view at the same time)

If you are impatient and wish to see results every minute or so (you will have
to do a lot of waiting and cursing in this field)

If you did not read this article word by word (If this sentence caught your eye
while you were skimming, then your satisfaction score in a software testing
job will be just 2.5 out of 100)

If you are, in fact, interested in being a developer but wish to apply as a
software tester (thinking that entry will be easier that way)

If you are chickenhearted (no offense to chickens)

Who best fits in a software testing role?

If you like fishing.

If you like dishing.

If you like thinking.

How to prepare a CV / Resume?


Okay, so you have done your initial research, you are convinced and you have
decided to apply. You will most probably be required to submit a CV. Though there is
no universally accepted format or way of writing a CV, the following points will help
you with the basics:

Purpose: The purpose of your CV is not to get you the job but an interview.
When competition is high, the quality of your CV determines your destiny.

Perspective: You are not writing a CV for yourself but for the reader. So, write
what the intended reader needs and wishes to see in your CV.

Honesty: Do not claim something you do not possess. You might be offered an
interview and then the job as well but you will not survive for long. Moreover,
you will live and die guilty.

Photo: Not recommended, especially if you are not photogenic.

Personal Details: Name, date of birth, gender, nationality, address, phone,
email. No need to mention your fax number, father's name, mother's name,
marital status, sexual orientation etc.

Achievements: Make them short statements and relate them to the job you
are applying for if you can and if true.

Employment Details: Begin with your current or most recent job and work
backwards.

Qualifications: List the recent qualifications (educational/professional) first.

Skills: Highlight your professional/technical/language/other skills.

Hobbies: No need to mention this, especially if you are into stamp collection.

Length: About 2 pages is fine. People do not have the time and patience for
tomes these days.

Margins: Neither too deep nor too narrow. About 1 inch is fine.

Fonts: Do not use too many. Preferably, use a single font. Two at the max.

Font Type: Do not use fancy/uncommon fonts. Use common fonts such as
Arial or Times New Roman.

Text Size: Do not use too large or too small size. 10pt to 12pt is fine.

Text Styles: Use Bold, Italic and Underline very sparingly.

Color: Use a single color, preferably black. Or else, two colors at the max. Do
not use shades; they do not print well.

Language: Do not make any spelling errors, grammatical errors or typos.
Make use of the spelling/grammar tools present in word processing
applications. Avoid flowery or superfluous language; make it simple and
direct.

Paragraphs: Avoid long paragraphs (and sentences). Use bulleted points if
necessary.

Objectivity: Avoid subjective references like "I am smart, hardworking,
confident", etc. No need to mention something which is expected, by default,
of a candidate.

Let your CV shine and you are already a step closer to your software testing job.
Good Luck!
