
What is a Test Case?

A test case is a set of conditions, variables, and inputs developed to achieve a
particular goal or objective on an application in order to judge its capabilities or
features.
It might take more than one test case to determine the true functionality of the
application being tested. Every requirement or objective to be achieved needs at
least one test case. Some software development methodologies, like the Rational
Unified Process (RUP), recommend creating at least two test cases for each
requirement or objective: one for testing from a positive perspective and the other
from a negative perspective.
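For illustration, here is a minimal pytest-style sketch of one requirement covered by a positive and a negative test case; the authenticate() function and its credentials are hypothetical stand-ins for the application under test.

```python
def authenticate(username, password):
    # Stand-in for the application under test (hypothetical).
    return username == "alice" and password == "s3cret"

def test_login_positive():
    # Positive perspective: valid input should be accepted.
    assert authenticate("alice", "s3cret") is True

def test_login_negative():
    # Negative perspective: invalid input should be rejected.
    assert authenticate("alice", "wrong-password") is False
```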
SDLC
Software development comprises various phases, which are collectively called the
Software Development Life Cycle (SDLC). At every step, the software evolves into a
more advanced product. Software testing happens in the later stages of the SDLC.
During this phase, the product is checked for errors and verified against the user
requirements. A software bug is defined as any error, flaw, fault, or mistake that
prevents the software from producing the expected output. A bug is often the
result of human error, either in the source code or in the design itself. From its
discovery to its resolution, a bug passes through various stages, commonly called
the 'bug life cycle'.
Life Cycle of a Bug
Given below are the stages of a bug's life span. Test reports describe in detail the
behavior of the bug at each stage.
New
This is the first stage of the bug life cycle, in which the tester reports a bug. The
presence of the bug becomes evident when the tester tries to run the newly
developed application and it does not respond in the expected manner. The bug is
then sent to the testing lead for approval.
Open
When the bug is reported to the testing lead, he examines the bug by retesting the
product. If he finds that the bug is genuine, he approves it and changes its status to
'open'.
Assign
Once the bug has been approved and found genuine by the testing lead, it is then
sent to the concerned software development team for resolution. It can be
assigned to the team that created the software, or it may be assigned to some
specialized team. After the bug is assigned to the software team, its status is
changed to 'assign'.
Test
The team to which the bug has been assigned works on removing it. Once they
have finished fixing the bug, it is sent back to the testing team for a retest.
However, before the bug is sent back to the testing team, its status is changed to
'test' in the report.
Deferred
If the development team changes the status of the bug to 'deferred', it means that
the bug will be fixed in a later release of the software. There can be myriad
reasons why the software team may not consider fixing the bug urgently, including
lack of time, low impact of the bug, or negligible potential of the bug to disturb the
normal functioning of the software.
Rejected
Although the testing lead may have approved the bug as genuine, the software
development team may not always agree. Ultimately, it is the prerogative of the
development team to decide whether the bug is really genuine. If they doubt the
presence or impact of the bug, they may change its status to 'rejected'.
Duplicate
If the development team finds that the same bug has been reported twice, or that
two bug reports point to the same issue, then the status of one bug is changed to
'duplicate'. In this case, fixing one bug automatically takes care of the other.
Verified
When the software development team sends the fixed bug back for retesting, the
bug undergoes a rigorous testing procedure again. If, at the end of the test, the
bug is no longer found, its status is changed to 'verified'.
Reopened
If the bug still exists, then its status is changed to 'reopened'. The bug then
traverses its entire life cycle once again.
Closed
If no occurrence of the bug is reported and the software functions normally, then
the bug is 'closed'. This is the final stage, in which the bug has been fixed, tested
and approved.
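As a rough illustration of the stages described above, the following Python sketch models the bug life cycle as an enumeration with one plausible set of allowed transitions; the transition table is an assumption for illustration, not a rule taken from any particular defect-tracking tool.

```python
from enum import Enum

class BugStatus(Enum):
    NEW = "new"
    OPEN = "open"
    ASSIGN = "assign"
    TEST = "test"
    DEFERRED = "deferred"
    REJECTED = "rejected"
    DUPLICATE = "duplicate"
    VERIFIED = "verified"
    REOPENED = "reopened"
    CLOSED = "closed"

# One plausible set of transitions between the stages described above (assumed).
ALLOWED_TRANSITIONS = {
    BugStatus.NEW: {BugStatus.OPEN, BugStatus.REJECTED, BugStatus.DUPLICATE},
    BugStatus.OPEN: {BugStatus.ASSIGN, BugStatus.REJECTED, BugStatus.DUPLICATE, BugStatus.DEFERRED},
    BugStatus.ASSIGN: {BugStatus.TEST, BugStatus.DEFERRED, BugStatus.REJECTED},
    BugStatus.TEST: {BugStatus.VERIFIED, BugStatus.REOPENED},
    BugStatus.VERIFIED: {BugStatus.CLOSED},
    BugStatus.REOPENED: {BugStatus.ASSIGN},
}

def change_status(current, new):
    """Raise if a status change does not follow the assumed life cycle."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current.name} -> {new.name}")
    return new

# Example: a bug moves from NEW to OPEN to ASSIGN.
status = change_status(BugStatus.NEW, BugStatus.OPEN)
status = change_status(status, BugStatus.ASSIGN)
```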

Software Testing Life Cycle Phases


Software testing has its own life cycle that meets every stage of the SDLC. The software testing
life cycle diagram can help one visualize the various software testing life cycle phases. They are
1. Requirement Stage
2. Test Planning
3. Test Analysis
4. Test Design
5. Test Verification and Construction
6. Test Execution
7. Result Analysis
8. Bug Tracking
9. Reporting and Rework
10. Final Testing and Implementation
11. Post Implementation

Requirement Stage
This is the initial stage of the life cycle process in which the developers take part in analyzing the
requirements for designing a product. Testers can also involve themselves as they can think from
the users' point of view which the developers may not. Thus a panel of developers, testers and
users can be formed. Formal meetings of the panel can be held in order to document the
requirements discussed which can be further used as software requirements specifications or
SRS.
Test Planning
Test planning is predetermining a plan well in advance in order to reduce later risks. Without a good
plan, no work can lead to success, be it software-related or routine work. A test plan document

plays an important role in achieving a process-oriented approach. Once the requirements of the
project are confirmed, a test plan is documented. The test plan structure is as follows:
1. Introduction: This describes the objective of the test plan.
2. Test Items: The items that are referred to while preparing this document, such as the SRS
and the project plan, are listed here.
3. Features to be tested: This describes the coverage area of the test plan, i.e., the list of
features that are to be tested, based on the implicit and explicit requirements from the
customer.
4. Features not to be tested: Features that can be skipped during the testing phase are listed
here. Features that are out of the scope of testing, such as incomplete modules or
low-severity items, e.g., GUI features that do not hamper further processing, can be
included in the list.
5. Approach: This is the test strategy, which should be appropriate to the level of the plan. It
should be consistent with the higher and lower levels of the plan.
6. Item pass/fail criteria: This relates to show-stopper issues; the criteria used must explain
when a test item has passed or failed.
7. Suspension criteria and resumption requirements: The suspension criterion specifies the
criterion that is to be used to suspend all or a portion of the testing activities, whereas
resumption criterion specifies when testing can resume with the suspended portion.
8. Test deliverable: This includes a list of documents, reports, charts that are required to be
presented to the stakeholders on a regular basis during testing and when testing is
completed.
9. Testing tasks: Listing the testing tasks is needed to avoid confusion about whether defects
should be reported against future functionality. It also helps users and testers avoid
incomplete functions and prevents waste of resources.
10. Environmental needs: The special requirements of that test plan depending on the
environment in which that application has to be designed are listed here.
11. Responsibilities: This phase assigns responsibilities to the person who can be held
responsible in case of a risk.
12. Staffing and training needs: Training on the application/system and training on the testing
tools to be used needs to be given to the staff members who are responsible for the
application.

13. Risks and contingencies: This emphasizes the probable risks and the various events that
can occur, and what can be done in such situations.
14. Approval: This decides who can approve the process as complete and allow the project to
proceed to the next level that depends on the level of the plan.
Test Analysis
Once the test plan documentation is done, the next stage is to analyze what types of software
testing should be carried out at the various stages of SDLC.
Test Design
Test design is done based on the requirements of the project documented in the SRS. This phase
decides whether manual or automated testing is to be done. In automation testing, the different
paths for testing are identified first and scripts are written if required. This creates the need for
an end-to-end checklist that covers all the features of the project.
Test Verification and Construction
In this phase, the test plans, the test design, and the automated test scripts are completed. Stress
and performance testing plans are also completed at this stage. When the development team is
done with a unit of code, the testing team is required to help them test that unit and report any
bug found. Integration testing and bug reporting are done in this phase of the software testing
life cycle.
Test Execution
Planning and execution of the various test cases are done in this phase. Once unit testing is
completed, functional testing is carried out. At first, top-level testing is done to find top-level
failures, and bugs are reported immediately to the development team to get the required
workaround. Test reports have to be documented properly and the bugs have to be reported to
the development team.
Result Analysis
Once the bug is fixed by the development team, i.e after the successful execution of the test case,
the testing team has to retest it to compare the expected values with the actual values, and declare
the result as pass/fail.
Bug Tracking
This is one of the important stages as the Defect Profile Document (DPD) has to be updated for
letting the developers know about the defect. Defect Profile Document contains the following
1. Defect Id: Unique identification of the Defect.
2. Test Case Id: Test case identification for that defect.
3. Description: Detailed description of the bug.

4. Summary: This field contains some keyword information about the bug, which can help
in minimizing the number of records to be searched.
5. Defect Submitted By: Name of the tester who detected/reported the bug.
6. Date of Submission: Date at which the bug was detected and reported.
7. Build No.: The build number of the application in which the defect was detected.
8. Version No.: The version information of the software application in which the bug was
detected and fixed.
9. Assigned To: Name of the developer who is supposed to fix the bug.
10. Severity: Degree of severity of the defect.
11. Priority: Priority of fixing the bug.
12. Status: This field displays current status of the bug.
The contents of a bug report capture all of the information listed above; a minimal sketch of such a record is shown below.
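A minimal sketch of such a defect record, assuming the twelve fields listed above, might look like this in Python; the field types are illustrative only.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DefectRecord:
    defect_id: str            # 1. Unique identification of the defect
    test_case_id: str         # 2. Test case that exposed the defect
    description: str          # 3. Detailed description of the bug
    summary: str              # 4. Keyword summary used for searching
    submitted_by: str         # 5. Tester who detected/reported the bug
    date_of_submission: date  # 6. When the bug was detected and reported
    build_no: str             # 7. Build in which the bug was detected
    version_no: str           # 8. Version in which the bug was detected/fixed
    assigned_to: str          # 9. Developer who is supposed to fix the bug
    severity: str             # 10. Degree of severity of the defect
    priority: str             # 11. Priority of fixing the bug
    status: str               # 12. Current status in the bug life cycle
```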
Reporting and Rework
Testing is an iterative process. Once a bug is reported and the development team fixes it, the
software has to undergo the testing process again to ensure that the bug is resolved. Regression
testing has to be done. Once the Quality Analyst assures that the product is ready, the software is
released for production. Before release, the software has to undergo one more round of top-level
testing. Thus, testing is an ongoing process.
Final Testing and Implementation
This phase focuses on the remaining levels of testing, such as acceptance, load, stress,
performance and recovery testing. The application needs to be verified under specified
conditions with respect to the SRS. Various documents are updated and different matrices for
testing are completed at this stage of the software testing life cycle.
Post Implementation
Once the tests are evaluated, the errors that occurred during the various levels of the software
testing life cycle are recorded. Creating plans for improvement and enhancement is an ongoing
process. This helps to prevent similar problems from occurring in future projects. In short,
planning the improvement of the testing process for future applications is done in this phase.

What is regression testing?


Regression testing is the testing of a particular component of the software, or of the
entire software, after modifications have been made to it. The aim of regression
testing is to ensure that new defects have not been introduced into the component
or software, especially in the areas where no changes have been made. In short,
regression testing ensures that nothing that should not have changed has changed
as a result of the modifications.
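As a small illustration, the sketch below shows a regression check in pytest style: after a change to the (hypothetical) apply_discount() function, the whole existing suite is re-run, including a test that covers an area the change did not touch.

```python
# Hypothetical module under test.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# Existing suite, re-run after every change.
def test_basic_discount():
    assert apply_discount(100.0, 10) == 90.0

def test_no_discount():
    # This case covers an area NOT touched by the latest change;
    # re-running it is the regression check.
    assert apply_discount(50.0, 0) == 50.0
```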
Stress Testing in IT Industry
Stress testing in the IT industry (hardware as well as software sectors) means testing
software or hardware for its effectiveness in giving consistent or satisfactory
performance under extreme and unfavorable conditions, such as heavy network
traffic, heavy process load, under- or over-clocking of the underlying hardware, or
operating under maximum requests for resource utilization of peripherals or of the
system.

What is a Review?
A review is an evaluation of a product or project status to ascertain any
discrepancies from the planned results and to recommend improvements to the
product. Common examples of reviews are the informal or peer review, technical
review, inspection, walkthrough, and management review.

What is Software Testing?


Software testing is the process of verifying and validating that a program performs
correctly with no bugs. It is the process of analyzing or operating software for the purpose of
finding bugs. It also helps to identify the defects, flaws, or errors that may appear in the
application code, which need to be fixed. Testing means not only finding bugs in the code, but
also checking whether the program behaves according to the given specifications and testing
strategies. There are various types of software testing strategies, such as the white box, black
box, and grey box testing strategies.
Need for Software Testing Types
The type of software testing needed depends on the type of defects being targeted. For example:

Functional testing is done to detect functional defects in a system.

Performance testing is performed to detect defects when the system does not perform
according to the specifications.

Usability testing is done to detect usability defects in the system.

Security Testing is done to detect bugs/defects in the security of the system.

The list goes on as we move on towards different layers of testing.


Types of Software Testing
Various software testing methodologies guide you through the different software testing types.
To determine the true functionality of the application being tested, test cases are designed to
help the developers. Test cases provide you with guidelines for going through the process of
testing the software. Software testing includes two basic types, viz. manual scripted testing and
automated testing.

Manual Scripted Testing: This is considered to be one of the oldest types of software
testing methods, in which test cases are designed and reviewed by the team before
they are executed.

Automated Testing: This software testing type applies automation to testing, which
can be applied to various parts of the software process, such as test case management,
test case execution, defect management, and reporting of bugs/defects. The bug life cycle
helps the tester decide how to log a bug and also guides the developer in deciding the
priority of the bug depending on its severity. Logging a bug documents the contents of
the bug that is to be fixed. This can be done with the help of various bug tracking tools
such as Bugzilla and defect tracking management tools like Test Director.

Other Software Testing Types


The software testing life cycle is the process that explains the flow of tests to be carried out at
each step of testing the product. The V-Model, i.e., the Verification and Validation model, is a
widely used model for improving a software project. This model contains the software
development life cycle on one side and the software testing life cycle on the other side. A
checklist for the software tester sets a baseline that guides the day-to-day activities.
Black Box Testing
It is the process of giving input to the system and checking the output, without
considering how the system generates the output. It is also called Behavioral Testing.
Functional Testing: In this type of testing, the software is tested for the functional requirements.
This checks whether the application is behaving according to the specification.
Performance Testing: This type of testing checks whether the system performs properly,
according to the user's requirements. Performance testing includes Load and Stress Testing,
where load is applied to the system internally or externally.
1. Load Testing: In this type of performance testing, the load on the system is increased
in order to check the performance of the system when higher loads are applied.

2. Stress Testing: In this type of performance testing, the system is tested beyond its
normal expectations or operational capacity.
Usability Testing: This type of testing is also called 'testing for user friendliness'. It checks the
ease of use of an application.
Regression Testing: Regression testing is one of the most important types of testing, which
checks that a small change in any component of the application does not affect the unchanged
components. Testing is done by re-executing previously executed test cases against the modified
application.
Smoke Testing: Smoke testing is used to check the testability of the application. It is also called
'Build Verification Testing' or 'Link Testing'. It checks whether the application is ready for
further major testing, without dealing with the finer details.
Sanity Testing: Sanity testing checks the behavior of the system. This type of software testing is
also called narrow regression testing.
Parallel Testing: Parallel testing is done by comparing results from two different systems like
old vs new or manual vs automated.
Recovery Testing: Recovery testing is necessary to check how fast the system is able to recover
from any hardware failure, catastrophic problem, or other type of system crash.
Installation Testing: This type of software testing identifies the ways in which the installation
procedure can lead to incorrect results.
Compatibility Testing: Compatibility testing determines whether an application performs as
expected under supported configurations, with various combinations of hardware and software
packages.
Configuration Testing: This testing is done to test for compatibility issues. It determines
minimal and optimal configuration of hardware and software, and determines the effect of
adding or modifying resources such as memory, disk drives and CPU.
Compliance Testing: This type of testing checks whether the system was developed in
accordance with standards, procedures and guidelines.
Error-Handling Testing: This software testing type determines the ability of the system to
properly process erroneous transactions.
Manual-Support Testing: This type of software testing covers the interface between people and
the application system.
Inter-Systems Testing: This type of software testing covers the interfaces between two or more
application systems.

Exploratory Testing: Exploratory testing is a type of software testing that is similar to ad hoc testing and is performed to explore the software's features.
Volume Testing: This testing is done when a huge amount of data is processed through the
application.
Scenario Testing: This type of software testing provides a more realistic and meaningful
combination of functions, rather than artificial combinations that are obtained through domain or
combinatorial test design.
User Interface Testing: This type of testing is performed to check how user-friendly the
application is. The user should be able to use the application without any assistance from the
system personnel.
System Testing: System testing is the testing conducted on a complete, integrated system, to
evaluate the system's compliance with the specified requirements. This type of software testing
validates that the system meets its functional and non-functional requirements and is also
intended to test beyond the bounds defined in the software/hardware requirement specifications.
User Acceptance Testing: Acceptance testing is performed to verify that the product is
acceptable to the customer and fulfills the customer's specified requirements. This testing
includes Alpha and Beta testing.
1. Alpha Testing: Alpha testing is performed at the developer's site by the customer in a
closed environment. This testing is done after system testing.
2. Beta Testing: This type of software testing is done at the customer's site by the customer
in an open environment. The presence of the developer while performing these tests is
not mandatory. This is considered to be the last step in the software development life
cycle, as the product is almost ready.
White Box Testing
It is the process of giving input to the system and checking how the system processes that input
to generate the output. It is mandatory for the tester to have knowledge of the source code.
Unit Testing: This type of testing is done at the developer's site to check whether a particular
piece/unit of code is working fine. Unit testing deals with testing the unit as a whole.
Static and Dynamic Analysis: In static analysis, it is required to go through the code in order to
find out any possible defect in the code. Whereas, in dynamic analysis the code is executed and
analyzed for the output.
Statement Coverage: This type of testing assures that the code is executed in such a way that
every statement of the application is executed at least once.

Decision Coverage: This type of testing ensures that every decision in the application is executed
at least once with both a true and a false outcome.
Condition Coverage: In this type of software testing, each individual condition within a decision
is made both true and false at least once.
Path Coverage: Each and every path within the code is executed at least once to achieve full path
coverage, which is one of the important parts of white box testing.
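The following sketch, built around a deliberately tiny hypothetical function, illustrates how the four coverage criteria above differ in the inputs they demand; the comments describe one possible set of test inputs per criterion.

```python
def classify(a, b):
    result = "none"
    if a > 0 and b > 0:   # decision made of two conditions
        result = "both positive"
    if a > b:             # second decision
        result = "a larger"
    return result

# Statement coverage: classify(2, 1) alone executes every statement.
# Decision coverage:  classify(2, 1) and classify(-1, -1) make each 'if'
#                     evaluate to both True and False.
# Condition coverage: adding classify(1, -1) and classify(-1, 1) provides
#                     inputs where 'a > 0' and 'b > 0' each take both values.
# Path coverage:      inputs must cover every combination of the two
#                     decision outcomes (True/True, True/False, ...).
```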
Integration Testing: Integration testing is performed when various modules are integrated with
each other to form a sub-system or a system. It mostly focuses on the design and construction of
the software architecture. Integration testing is further classified into Bottom-Up and Top-Down
Integration testing.
1. Bottom-Up Integration Testing: In this type of integration testing, the lowest-level
components are tested first; 'drivers' stand in for the higher-level modules that will
eventually call them.
2. Top-Down Integration Testing: This is the opposite of the bottom-up approach: the
top-level modules are tested first, and the lower-level modules they call are replaced
step by step by 'stubs' until the real modules are integrated.
Security Testing: Security testing confirms how well a system protects itself against unauthorized
internal or external access and against willful damage to the code. It assures that the program is
accessed by authorized personnel only.
Mutation Testing: In this type of software testing, small changes (mutations) are deliberately
introduced into the code in order to check whether the existing test cases are able to detect them.
Software testing methodologies and different software testing strategies help you get through this
software testing process. These various methods, using the above-mentioned software testing
types, help you check whether the software satisfies the requirements of the customer. Software
testing is indeed a vast subject, and one can make a successful career in this field. Going through
software testing interview questions and tutorials can help you prepare yourself.

Explain in short, sanity testing, adhoc testing and smoke testing.


Sanity testing is a basic test, which is conducted to check whether all the components
of the software compile with each other without any problem. It makes sure that
there are no conflicting or duplicate functions or global variable definitions made by
different developers. It can also be carried out by the developers themselves.

Smoke testing on the other hand is a testing methodology used to cover all the
major functionality of the application without getting into the finer nuances of the
application. It is said to be the main functionality oriented test.
Ad hoc testing is different from smoke and sanity testing. This term is used for
software testing that is performed without any sort of planning and/or
documentation. These tests are intended to run only once. However, if a defect is
found, they can be carried out again. It is also said to be a part of exploratory
testing.
What are stubs and drivers in manual testing?
Both stubs and drivers are a part of incremental testing. There are two approaches,
which are used in incremental testing, namely bottom up and top down approach.
Drivers are used in bottom-up testing. They are modules that invoke the components
to be tested, and they resemble the future real modules that will eventually call
those components.
A stub is a skeletal or special-purpose implementation of a component, used to
develop or test another component that calls or otherwise depends on it. It is a
replacement for the called component.
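A minimal Python sketch of both ideas, using hypothetical billing and tax modules, might look like this: a driver exercises a low-level unit before its real callers exist, while a stub replaces a called component that is not yet ready.

```python
# Bottom-up: a DRIVER exercises a low-level module before the real caller exists.
def tax_for(amount):                 # low-level unit under test (hypothetical 18% tax)
    return round(amount * 0.18, 2)

def driver_for_tax_module():
    # Temporary driver standing in for the not-yet-written billing module.
    assert tax_for(100) == 18.0
    assert tax_for(0) == 0.0

# Top-down: a STUB replaces a called component that is not ready yet.
def tax_for_stub(amount):
    # Skeletal replacement returning a canned value.
    return 18.0

def total_invoice(amount, tax_fn=tax_for_stub):
    # High-level module under test; depends on the (stubbed) tax component.
    return amount + tax_fn(amount)

driver_for_tax_module()
assert total_invoice(100) == 118.0
```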
Explain priority, severity in software testing.
Priority is the level of business importance, which is assigned to a defect found. On
the other hand, severity is the degree of impact, the defect can have on the
development or operation of the component or the system.
Explain the waterfall model in testing.
Waterfall model is a part of software development life cycle, as well as software
testing. It is one of the first models to be used for software testing.
Tell me about V model in manual testing.
The V model is a framework that describes the software development life cycle
activities, right from requirements specification up to the software maintenance
phase. Testing is integrated into each phase of the model. The phases of the model
start with user requirements, followed by system requirements, global design,
detailed design, and implementation, and end with system testing of the entire
system. Each phase of the model has a respective testing activity integrated into it,
carried out in parallel with the development activities. The four test levels used by
this model are component testing, integration testing, system testing and
acceptance testing.
Difference between bug, error and defect.
Bug and defect essentially mean the same thing: a flaw in a component or system
that can cause the component or system to fail to perform its required function. If
a bug or defect is encountered during the execution phase of software
development, it can cause the component or the system to fail. On the other hand,
an error is a human mistake, which gives rise to an incorrect result. You may want
to read about how to log a bug (defect), the contents of a bug, the bug life cycle,
and the statuses used during a bug life cycle, which will help you understand the
terms bug and defect better.
What is compatibility testing?
Compatibility testing is a part of the non-functional tests carried out on a software
component or on the entire software to evaluate the compatibility of the application
with the computing environment. This can be with servers, other software, the
computer operating system, different web browsers, or the hardware.
What is integration testing?
Integration testing is one of the software testing types in which tests are conducted
to check the interfaces between components, the interactions of the different parts
of the system with the operating system, file system, and hardware, and the
interfaces between different software systems. It may be carried out by the
integrator of the system, but should ideally be carried out by a specific integration
tester or test team.

Software Testing Methods


There are different types of testing methods or techniques used as part of the software testing
process. A few of them are listed below.

White box testing

Black box testing

Gray box testing

Unit testing

Integration testing

Regression testing

Usability testing

Performance testing

Scalability testing

Software stress testing

Recovery testing

Security testing

Conformance testing

Smoke testing

Compatibility testing

System testing

Alpha testing

Beta testing

The above software testing methods can be implemented in two ways - manually or by
automation. Manual software testing is done by human software testers who manually i.e.
physically check, test and report errors or bugs in the product or piece of code. In case of
automated software testing, the same process is performed by a computer by means of an
automated testing software such as WinRunner, LoadRunner, Test Director, etc.
Software Testing Methodologies
These are some commonly used software testing methodologies:

Waterfall model

V model

Spiral model

RUP

Agile model

RAD

Let us have a look at each one of these methodologies one by one.


Waterfall Model
The waterfall model adopts a 'top down' approach regardless of whether it is being used for
software development or testing. The basic steps involved in this software testing methodology
are:

1. Requirement analysis
2. Test case design
3. Test case implementation
4. Testing, debugging and validating the code or product
5. Deployment and maintenance
In this methodology, you move on to the next step only after you have completed the present
step. There is no scope for jumping backward or forward or performing two steps
simultaneously. Also, this model follows a non-iterative approach. The main benefit of this
methodology is its simple, systematic and orthodox approach. However, it has many
shortcomings, since bugs and errors in the code are not discovered until the testing stage is
reached. This can often lead to wastage of time, money and valuable resources.
V Model
The V model gets its name from the fact that the graphical representation of the different test
process activities involved in this methodology resembles the letter 'V'. The basic steps involved
in this methodology are more or less the same as those in the waterfall model. However, this
model follows both a 'top-down' as well as a 'bottom-up' approach (you can visualize them
forming the letter 'V'). The benefit of this methodology is that in this case, both the development
and testing activities go hand in hand. For example, as the development team goes about its
requirement analysis activities, the testing team simultaneously begins planning its acceptance
testing activities. By following this approach, time delays are minimized and optimum utilization
of resources is assured.
Spiral Model
As the name implies, the spiral model follows an approach in which there are a number of cycles
(or spirals) of all the sequential steps of the waterfall model. Once the initial cycle is completed,
a thorough analysis and review of the achieved product or output is performed. If it is not as per
the specified requirements or expected standards, a second cycle follows, and so on. This
methodology follows an iterative approach and is generally suited for very large projects having
complex and constantly changing requirements.
Rational Unified Process (RUP)
The RUP methodology is also similar to the spiral model in the sense that the entire testing
procedure is broken up into multiple cycles or processes. Each cycle consists of four phases
namely; inception, elaboration, construction and transition. At the end of each cycle, the product
or the output is reviewed and a further cycle (made up of the same four phases) follows if
necessary. Today, you will find certain organizations and companies adopting a slightly modified

version of the RUP, which goes by the name of Enterprise Unified Process (EUP).
Agile Model
This methodology follows neither a purely sequential approach nor does it follow a purely
iterative approach. It is a selective mix of both of these approaches in addition to quite a few new
developmental methods. Fast and incremental development is one of the key principles of this
methodology. The focus is on obtaining quick, practical and visible outputs and results, rather
than merely following theoretical processes. Continuous customer interaction and participation is
an integral part of the entire development process.
Rapid Application Development (RAD)
The name says it all. In this case, the methodology adopts a rapid development approach by
using the principle of component-based construction. After understanding the various
requirements, a rapid prototype is prepared and is then compared with the expected set of output
conditions and standards. Necessary changes and modifications are made after joint discussions
with the customer or the development team (in the context of software testing). Though this
approach does have its share of advantages, it can be unsuitable if the project is large, complex
and happens to be of an extremely dynamic nature, wherein the requirements are constantly
changing.
Explain performance testing.
Performance testing is one of the non-functional types of software testing. The
performance of software is the degree to which a system or a component of a
system accomplishes its designated functions within given constraints on
processing time and throughput rate. Performance testing, therefore, is the process
of determining the performance of a piece of software.
Explain the testcase life cycle.
On average, a test case goes through the following phases. The first phase of the
test case life cycle is identifying the test scenarios, either from the specifications or
from the use cases designed to develop the system. Once the scenarios have been
identified, the test cases appropriate for the scenarios have to be developed. Then
the test cases are reviewed, and approval for those test cases has to be obtained
from the concerned authority. After the test cases have been approved, they are
executed. When the execution of the test cases starts, the results of the tests have
to be recorded. The test cases that pass are marked accordingly. If a test case fails,
a defect has to be raised. When the defect is fixed, the failed test case has to be
executed again.
What is Equivalence Partitioning?
In equivalence partitioning, the tester identifies various equivalence classes into
which the inputs can be segregated, and these form the basis of the test cases. In
this method, the input possibilities are sorted into classes known as equivalence
classes, where each member of a class causes the same processing and produces
the same output. A class is a bunch of inputs that are likely to be processed in the
same manner by the software.
Equivalence partitioning can also be defined as a testing technique to minimize the
number of permutations and combinations of input data. It can be assumed that
the behavior of the program will remain the same for any value of data from the
same class. That means it is enough to choose one test case from every segment to
inspect the behavior of the program. Even if you test all the test cases of a
partition, a new fault will hardly ever be revealed in the program. Thus, the values
in one partition can be safely taken to be equivalent. This reduces the effort of the
tester by minimizing the number of test cases to be tested. Applying this technique
also assists you in finding the "dirty" test cases.
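As a small illustration, assume a hypothetical age field that accepts values from 18 to 60 inclusive; three equivalence classes fall out of that rule, and one representative value per class is enough to exercise each behavior.

```python
def is_valid_age(age):
    # Hypothetical validation rule: the field accepts 18..60 inclusive.
    return 18 <= age <= 60

partitions = {
    "below valid range": 10,   # any value < 18 behaves the same way
    "inside valid range": 35,  # any value in 18..60 behaves the same way
    "above valid range": 70,   # any value > 60 behaves the same way
}

for name, representative in partitions.items():
    print(name, "->", is_valid_age(representative))
```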
Black Box Vs White Box
Black box testing is a way in which a software program is tested at its outer
interface, without considering its internal architecture. Equivalence partitioning is
often compared with black box testing. However, it has similarities with white box
testing too. Some software may give different results for different ranges of input
values, which will not be noticed by black box testing, as it deals only with the
outer interface. In white box testing, all the possible processes are examined. To
ensure this, additional segregations are considered in equivalence partitioning,
which is not done in black box testing.

Explain statement coverage.


It is a structure-based, or white box, technique. Test coverage measures in a specific way the
amount of testing performed by a set of tests. One type of test coverage is statement coverage:
the percentage of executable statements that have been exercised by a particular test suite. The
formula used for statement coverage is:

Statement coverage = (number of statements exercised / total number of statements) * 100%
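For example, if a test suite exercises 90 of 120 executable statements, the statement coverage is 75%; the tiny snippet below just works that arithmetic through.

```python
statements_exercised = 90
total_statements = 120
coverage = statements_exercised / total_statements * 100
print(f"Statement coverage: {coverage:.1f}%")   # prints 75.0%
```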

What is acceptance testing


Acceptance testing (also known as user acceptance testing) is a type of testing
carried out in order to verify that the product has been developed as per the
standards and specified criteria and meets all the requirements specified by the
customer. This type of testing is generally carried out by a user or customer when
the product has been developed externally by another party.
What is Compatibility Testing

Compatibility testing is a type of testing used to ensure compatibility of the
system, application, or website built with various other objects, such as other web
browsers, hardware platforms, users (in the case of a very specific requirement,
such as a user who speaks and reads only a particular language), operating
systems, etc. This type of testing helps find out how well a system performs in a
particular environment, which includes hardware, network, operating system and
other software.
What is meant by functional defects and usability defects in general? Give
appropriate example.
We will take the example of a login window to understand functionality and usability
defects. A functionality defect occurs when a user enters a valid user name but an
invalid password, clicks the login button, and the application accepts the
credentials and displays the main window, where an error should have been
displayed instead. On the other hand, a usability defect occurs when the user
enters a valid user name but an invalid password, clicks the login button, and the
application throws up an error message saying "Please enter valid user name"
when the error message should have been "Please enter valid password."
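A minimal pytest-style sketch of the two checks, using a hypothetical login() function that returns a success flag and a message, could look like this.

```python
def login(username, password):
    # Hypothetical application behavior under test.
    if username != "alice":
        return False, "Please enter valid user name"
    if password != "s3cret":
        return False, "Please enter valid password"
    return True, "Welcome"

def test_invalid_password_is_rejected():
    # Functional check: the main window must NOT open on a bad password.
    ok, _ = login("alice", "wrong")
    assert ok is False

def test_invalid_password_message_is_meaningful():
    # Usability check: the error must point at the password, not the user name.
    _, message = login("alice", "wrong")
    assert "password" in message.lower()
```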
Usability Testing:
As the term suggests, usability means how well something can be used for the
purpose it was created for. Usability testing is a way to measure how easy,
moderate, or hard the intended end users find it to interact with and use the
system, keeping its purpose in mind. It is a standard statement that "usability
testing measures the usability of the system".
What is an Exploratory Testing?
Bach's definition: Any testing to the extent that the tester actively controls the
design of the tests as those tests are performed, and uses information gained while
testing to design new and better tests.
This can simply be put as: a type of testing where we explore the software and
write and execute the test scripts simultaneously.
Exploratory testing is a type of testing where the tester does not have specifically
planned test cases, but instead does the testing with a view to exploring the
software's features and trying to break it in order to find unknown bugs.
A tester who does exploratory testing does it with the aim of understanding the
software better and appreciating its features. During this process, he or she also
tries to think of all possible scenarios in which the software may fail and a bug may
be revealed.
Security Testing

Security testing of any developed system (or a system under development) is all
about finding the potential loopholes and weaknesses of the system that might
result in loss or theft of highly sensitive information, or destruction of the system
by an intruder or outsider. Security testing helps find all the possible vulnerabilities
of the system and helps developers fix those problems.
What is a White Box Testing Strategy?
The white box testing strategy deals with the internal logic and structure of the code.
White box testing is also called glass box, structural, open box or clear box testing.
The tests written based on the white box testing strategy cover the written code,
its branches, paths, statements and internal logic.
In order to carry out white box testing, the tester has to deal with the code and
hence needs to possess knowledge of coding and logic, i.e., the internal working of
the code. White box testing also requires the tester to look into the code and find
out which unit, statement or chunk of the code is malfunctioning.
Advantages of White box testing are:
i) As knowledge of the internal coding structure is a prerequisite, it becomes very
easy to find out which types of input/data can help in testing the application
effectively.
ii) Another advantage of white box testing is that it helps in optimizing the code.
iii) It helps in removing the extra lines of code, which can bring in hidden defects.

Disadvantages of white box testing are:


i) As knowledge of the code and internal structure is a prerequisite, a skilled tester
is needed to carry out this type of testing, which increases the cost.
ii) It is nearly impossible to look into every bit of code to find out hidden errors,
which may create problems, resulting in failure of the application.
What is the difference between volume testing and load testing?
Volume testing checks whether the system can actually cope with a large amount
of data, for example a large number of fields in a particular record or numerous
records in a file. On the other hand, load testing measures the behavior of a
component or a system under increased load. The increase in load can be in terms
of the number of parallel users and/or parallel transactions. This helps to determine
the amount of load that can be handled by the component or the software system.
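A rough Python sketch of the contrast, using a hypothetical process() function, might look like this: volume testing pushes one very large payload through, while load testing pushes many ordinary payloads through in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

def process(record):
    # Stand-in for the component under test.
    return len(record)

# Volume testing: a single request carrying a very large amount of data.
huge_record = "x" * 10_000_000
assert process(huge_record) == 10_000_000

# Load testing: many parallel requests of ordinary size.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(process, ["payload"] * 1_000))
assert all(r == 7 for r in results)
```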
What is pilot testing?
It is a test of a component of a software system, or of the entire system, under real
operating conditions. The real environment helps to find defects in the system and
prevents costly bugs from being detected later on. Normally a group of users uses
the system before its complete deployment and gives feedback about the system.
What is exact difference between debugging & testing?
When a test is run and a defect has been identified, it is the duty of the developer
to first locate the defect in the code and then fix it. This process is known as
debugging. In other words, debugging is the process of finding, analyzing and
removing the causes of failures in the software. Testing, on the other hand,
consists of both static and dynamic life cycle activities. It helps to determine
whether the software satisfies the specified requirements and is fit for purpose.
What is a Black Box Testing Strategy?
Black box testing is not so much a type of testing as a testing strategy, which does
not need any knowledge of the internal design or code. As the name "black box"
suggests, no knowledge of the internal logic or code structure is required. The
types of testing under this strategy are totally based on and focused on testing
against the requirements and functionality of the work product or software
application. Black box testing is sometimes also called "opaque testing",
"functional/behavioral testing" or "closed box testing".
What is Verification?
The standard definition of verification is: "Are we building the product
RIGHT?" That is, verification is a process that makes sure the software product is
developed the right way. The software should conform to its predefined
specifications; as product development goes through the different stages, an
analysis is done to ensure that all required specifications are met.

What is Validation?
Validation is the process of finding out whether the product being built is the right
one, i.e., whatever software product is being developed, it should do what the user
expects it to do. The software product should functionally do what it is supposed to
and should satisfy all the functional requirements set by the user. Validation is done
during, or at the end of, the development process in order to determine whether
the product satisfies the specified requirements.
Software Validation Testing
While verification is a quality control process, validation testing is the quality
assurance process carried out before the software is ready for release. The goal of
validation testing is to validate the software product or system and to be confident
that it fulfills the requirements given by the customer. Acceptance of the software
by the end customer is also a part of validation testing.
Validation testing answers the question, "Are we building the right software
system?" Another question which the entire process of validation testing answers
is, "Is the deliverable fit for purpose?" In other words, does the software system
provide the right solution to the problem? Therefore, testing activities are often
introduced early in the software development life cycle. The two major points at
which validation testing should take place are the early stages of software
development and towards the end, when the product is ready for release. In other
words, acceptance testing is a part of validation testing.
Waterfall Model Life-cycle
A lot of research and development goes into the various growth stages of any
particular piece of software. The idea of a standardized model of testing (like the
waterfall model in testing) is to ensure that a software engineer follows the correct
sequence of process development and does not get too far ahead too soon. Each
line of the program needs to be checked and double-checked, and each stage of
the waterfall model is required to follow a standard protocol. The various waterfall
model phases are as follows.

Requirements Gathering
The first and most obvious step in software development is gathering all the
requirements of the customer. The primary purpose of the final program is to serve
the user, so all of the user's needs and requirements need to be known in detail
before the development process actually begins. A basic specifications and
requirements chart is made after careful consultation with the user, and this is
incorporated into the development process. The waterfall model in testing begins
primarily with the gathering of all pertinent and necessary data from the customer.

Requirements Analysis
Next up, these requirements are studied and analyzed closely, and the developer
takes a decision regarding which platform, which computer language, and what kind
of databases are necessary for the development process. A feasibility study is then
carried out to ensure that all resources are available and the actual programming of
the software is possible. A projected blueprint of sorts is created of the software.
Designing and Coding
This is where the real work begins, and the algorithms and the flowcharts of the
software are devised. Based on the data collected and the feasibility study carried
out, the actual coding of the program commences. Without the information
gathered in the previous two stages, the design of the program would be
impossible. This is the most important stage of the model, and the use of the
waterfall model in testing would be impossible without something to actually test. It
goes without saying that the final design has to meet all the necessary
requirements of the customer.
Testing
Now comes the litmus test of the code developed. This stage defines the actual
transition of the program from a mere hypothesis to a real usable software. Without
testing the functionality of the code, all the possible bugs cannot be detected.
Moreover, use of waterfall model in testing also ensures that all the requirements of
the customer are satisfactorily met, and there are no loose ends anywhere in the
code developed. If any flaws or bugs are detected, the software is reverted to the
designing stage and all the deficiencies are fixed.
The designing process is divided into smaller parts known as units, and unit testing
needs to be carried out for each of these divisions individually. Once the units are
declared to be flaw free, they are integrated into the final system and then this
system is tested to ensure proper integration and compatibility between the various
units. Waterfall model in testing can only be done by dividing up the coded program
into various manageable parts. Thus, the importance of the testing phase in
waterfall model is universally known and undoubted.
Final Acceptance
Once the design has been tried and tested by the testing team, the customers are
given a demo version of the final program. Now they must use the program and
indicate whether they are satisfied with the product or not. If they accept that the
software is satisfactory and as per their demands and requirements, the process is
completed. On the other hand, if they are dissatisfied with certain aspects of the
software, or feel that an integral component is missing, the design team proceeds
to solve the problem. The benefit of dividing the work into these various stages is
that everyone knows what they are doing and is specifically trained to carry out
their responsibility.
Waterfall model in testing ensures that a high degree of professionalism is met
within the development process, and that all the parties involved in this
development process are specialists in their respective fields.
Advantages of Waterfall Model in Testing
The primary advantage is that it is a linear model, and follows a proper sequence
and order. This is a crucial factor in determining the model's effectiveness and
suitability. Also, since the process is following a linear sequence, and documentation
is produced at every stage, it is easy to track down mistakes, deficiencies and any
other problems that may arise. The cost of resources at each stage is also
minimized due to the linear sequencing.
Disadvantages of Waterfall Model in Testing
As is the case with all other models, if the customer is ambiguous about his needs,
the design process can go horribly wrong. This factor is further highlighted by the
fact that if some mistake is made in a certain stage and is not detected or tracked,
all the subsequent steps will go wrong. Therefore the need for testing is very
intense. Customers often have a complaint that if they could get a sample of the
software in the early stages, they could find out whether it is suitable or not. Since
they do not receive the program till it is almost completed, it becomes a little more
complicated for them to offer feedback. Thus, a situation of complete trust from the
client is essential.
Beta Testing
Beta testing is testing a prototype of the software before its release as a product.
What if somebody told you, that you will be given an opportunity to play the latest
video games, which no one has played before and get paid for it too? You will say
'come on, this sounds too good to be true!' It is too good and it is indeed true! It is I
guess the dream job for any video gaming freak! If you absolutely love gaming and
can truly appreciate the finer points of what makes a good game, then video game
beta testing is the dream career for you!

What is boundary value analysis?


A boundary value is an input or an output value, which resides on the edge of an equivalence
partition. It can also be the smallest incremental distance on either side of an edge, like the
minimum or a maximum value of an edge. Boundary value analysis is a black box testing
technique, where the tests are based on the boundary values.
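Continuing the hypothetical 18-to-60 age field from the equivalence partitioning example, boundary value analysis would pick values on and immediately around each edge of the valid partition, as sketched below.

```python
def is_valid_age(age):
    # Hypothetical validation rule: the field accepts 18..60 inclusive.
    return 18 <= age <= 60

# Values on and just around each boundary of the valid partition.
boundary_values = [17, 18, 19, 59, 60, 61]
expected = [False, True, True, True, True, False]

for value, want in zip(boundary_values, expected):
    assert is_valid_age(value) is want, value
```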
What is system testing?
System testing is testing carried out on an integrated system to verify that the system meets the
specified requirements. It is concerned with the behavior of the whole system, according to the
scope defined. More often than not, system testing is the final test carried out by the development
team, in order to verify that the system developed meets the specifications and to identify any
defects which may be present.
What is the difference between retest and regression testing?
Retesting, also known as confirmation testing, is the running of test cases that failed the last
time they were run, in order to verify the success of corrective actions taken on the defect found.
On the other hand, regression testing is the testing of a previously tested program after
modifications, to make sure that no new defects have been introduced. In other words, it helps to
uncover defects in the unchanged areas of the software.
What is a test suite?
A test suite is a set of several test cases designed for a component of a software system or for the
system under test, where the postcondition of one test case is normally used as the precondition
for the next test.
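As a small illustration, Python's standard unittest module lets you assemble such a set of test cases explicitly into a TestSuite; the cases below are placeholders, and the suite runs them in the order they were added.

```python
import unittest

class LoginTests(unittest.TestCase):
    def test_login(self):
        # Placeholder case: "logging in" succeeds.
        session = {"logged_in": True}
        self.assertTrue(session["logged_in"])

    def test_logout(self):
        # Placeholder case that conceptually follows the login case.
        session = {"logged_in": False}
        self.assertFalse(session["logged_in"])

suite = unittest.TestSuite()
suite.addTest(LoginTests("test_login"))
suite.addTest(LoginTests("test_logout"))
unittest.TextTestRunner().run(suite)
```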
These are some of the software testing interview questions and answers for freshers and the
experienced. This is not an exhaustive list, but I have tried to include as many software testing
interview questions and answers as I could in this article. I hope the article proves to be of help
when you are preparing for an interview. Here's wishing you luck, and I hope you crack the
interview as well.
