
Software Testing Techniques Part 1

Because of the fallibility of its human designers and its own abstract, complex nature,
software development must be accompanied by quality assurance activities. It is not
unusual for developers to spend 40% of the total project time on testing. For life-critical
software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as
all other activities combined. The destructive nature of testing requires that developers
discard preconceived notions about the correctness of the software they have developed.

Software Testing Fundamentals

Testing objectives include:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered
error.
3. A successful test is one that uncovers an as yet undiscovered error.

Testing should systematically uncover different classes of errors in a minimum amount of
time and with a minimum amount of effort. A secondary benefit of testing is that it
demonstrates that the software appears to be working as stated in the specifications. The
data collected through testing can also provide an indication of the software’s reliability
and quality. But testing cannot show the absence of defects; it can only show that
software defects are present.

White Box Testing

White box testing is a test case design method that uses the control structure of the
procedural design to derive test cases. Test cases can be derived that
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.

The Nature of Software Defects

Logic errors and incorrect assumptions are inversely proportional to the probability that a
program path will be executed. General processing tends to be well understood while
special case processing tends to be prone to errors.

We often believe that a logical path is not likely to be executed when it may be executed
on a regular basis. Our unconscious assumptions about control flow and data lead to
design errors that can only be detected by path testing.

Typographical errors are random; some are likely to lie on obscure logic paths, so path
testing helps uncover them as well.

Basis Path Testing

This method enables the designer to derive a logical complexity measure of a procedural
design and use it as a guide for defining a basis set of execution paths. Test cases that
exercise the basis set are guaranteed to execute every statement in the program at least
once during testing.

Flow Graphs

Flow graphs can be used to represent control flow in a program and can help in the
derivation of the basis set. Each flow graph node represents one or more procedural
statements. The edges between nodes represent flow of control. An edge must terminate
at a node, even if the node does not represent any useful procedural statements. A region
in a flow graph is an area bounded by edges and nodes. Each node that contains a
condition is called a predicate node. Cyclomatic complexity is a metric that provides a
quantitative measure of the logical complexity of a program. It defines the number of
independent paths in the basis set and thus provides an upper bound for the number of
tests that must be performed.

The Basis Set

An independent path is any path through a program that introduces at least one new set of
processing statements (must move along at least one new edge in the path). The basis set
is not unique. Any number of different basis sets can be derived for a given procedural
design. Cyclomatic complexity, V(G), for a flow graph G is equal to
1. The number of regions in the flow graph.
2. V(G) = E - N + 2 where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1 where P is the number of predicate nodes.
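To see that the three formulas agree, consider a hypothetical flow graph for a module containing one if/else and one while loop (the graph itself is invented for illustration, not taken from the text):

```python
# Hypothetical flow graph: nodes are numbered 1..6, edges are (from, to) pairs.
edges = [
    (1, 2), (1, 3),   # node 1: if/else predicate
    (2, 4), (3, 4),   # branches rejoin at node 4
    (4, 5), (5, 4),   # node 4: while-loop predicate with back edge
    (4, 6),           # loop exit
]
nodes = {n for edge in edges for n in edge}

E, N = len(edges), len(nodes)
P = 2  # predicate nodes: node 1 (if) and node 4 (while)

v_edges_nodes = E - N + 2   # V(G) = E - N + 2
v_predicates = P + 1        # V(G) = P + 1

# Both formulas (and a region count on the drawn graph) give V(G) = 3,
# so at most 3 basis-path tests are needed for this module.
assert v_edges_nodes == v_predicates == 3
```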

Deriving Test Cases

1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph.
o Even without a flow graph, V(G) can be determined by counting the number of
conditional statements in the code.
3. Determine a basis set of linearly independent paths.
o Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set.
o Each test case is executed and compared to the expected results.
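As a concrete illustration of steps 1 to 4, here is a hypothetical module (the function and values are invented) together with test cases that exercise one basis set of its paths:

```python
def classify(x):
    """Return a label for x. Two predicate nodes, so V(G) = 2 + 1 = 3."""
    if x < 0:          # predicate 1
        label = "negative"
    elif x == 0:       # predicate 2
        label = "zero"
    else:
        label = "positive"
    return label

# One test case per basis path; each is executed and compared
# to its expected result (step 4).
basis_cases = [(-5, "negative"), (0, "zero"), (7, "positive")]
for value, expected in basis_cases:
    assert classify(value) == expected
```

Three cases suffice here because V(G) = 3 is an upper bound on the number of tests needed to execute every statement at least once.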

Automating Basis Set Derivation

The derivation of the flow graph and the set of basis paths is amenable to automation. A
software tool to do this can be developed using a data structure called a graph matrix. A
graph matrix is a square matrix whose size is equivalent to the number of nodes in the
flow graph. Each row and column correspond to a particular node and the matrix
corresponds to the connections (edges) between nodes. By adding a link weight to each
matrix entry, more information about the control flow can be captured. In its simplest
form, the link weight is 1 if an edge exists and 0 if it does not. But other types of link
weights can be represented:
• the probability that an edge will be executed,
• the processing time expended during link traversal,
• the memory required during link traversal, or
• the resources required during link traversal.

Graph theory algorithms can be applied to these graph matrices to help in the analysis
necessary to produce the basis set.
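A minimal sketch of a graph matrix in Python, assuming the simplest 0/1 link weights (the four-node graph is invented for illustration):

```python
# Hypothetical 4-node flow graph stored as a graph matrix.
# matrix[i][j] = 1 if an edge runs from node i+1 to node j+1.
matrix = [
    [0, 1, 1, 0],  # node 1 branches to nodes 2 and 3 (predicate node)
    [0, 0, 0, 1],  # node 2 -> node 4
    [0, 0, 0, 1],  # node 3 -> node 4
    [0, 0, 0, 0],  # node 4: exit node
]

# A row containing more than one 1 marks a predicate node, so cyclomatic
# complexity can be read straight off the matrix via V(G) = P + 1.
predicates = sum(1 for row in matrix if sum(row) > 1)
assert predicates + 1 == 2  # V(G) = 2 for this graph
```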

Loop Testing

This white box technique focuses exclusively on the validity of loop constructs. Four
different classes of loops can be defined:
1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.

Simple Loops

The following tests should be applied to simple loops where n is the maximum number of
allowable passes through the loop:
1. skip the loop entirely,
2. only pass once through the loop,
3. m passes through the loop where m < n,
4. n - 1, n, n + 1 passes through the loop.
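These four tests can be sketched for a hypothetical loop whose maximum number of allowable passes is bounded by the length of its input (the function and data are invented for illustration):

```python
# Hypothetical loop under test: sums the first `passes` items of data,
# with n = len(data) as the maximum number of allowable passes.
def sum_first(data, passes):
    total = 0
    for i in range(passes):
        total += data[i]
    return total

data = [1, 2, 3, 4, 5]
n = len(data)
m = 3  # some m with 1 < m < n

# Guideline pass counts: skip (0), once (1), m, n - 1, and n passes.
for passes in (0, 1, m, n - 1, n):
    assert sum_first(data, passes) == sum(data[:passes])

# n + 1 passes should fail at the loop's upper bound.
try:
    sum_first(data, n + 1)
    raise AssertionError("expected an IndexError at n + 1 passes")
except IndexError:
    pass
```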

Nested Loops

The testing of nested loops cannot simply extend the technique of simple loops since this
would result in a geometrically increasing number of test cases. One approach for nested
loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at
minimums and other nested loops to typical values.
4. Continue until all loops have been tested.
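The outward-working approach can be sketched as test-configuration generation for two hypothetical nested loops (the bounds and helper below are invented for illustration):

```python
def simple_loop_counts(n):
    """Pass counts from the simple-loop guidelines:
    skip, one pass, some m < n, and n - 1, n, n + 1 passes."""
    m = n // 2
    return [0, 1, m, n - 1, n, n + 1]

inner_n, outer_n = 5, 4  # maximum passes for each hypothetical loop

# Steps 1-2: vary the innermost loop, hold the outer loop at its minimum (0).
inner_tests = [(0, i) for i in simple_loop_counts(inner_n)]

# Step 3: work outward; vary the outer loop, hold the inner loop at a
# typical value instead of its minimum.
typical_inner = inner_n // 2
outer_tests = [(o, typical_inner) for o in simple_loop_counts(outer_n)]

# 12 configurations total, instead of the 6 * 6 = 36 a full cross
# product of simple-loop tests would produce.
assert len(inner_tests) + len(outer_tests) == 12
```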

Concatenated Loops

Concatenated loops can be tested as simple loops if each loop is independent of the
others. If they are not independent (e.g. the loop counter for one is the loop counter for
the other), then the nested approach can be used.

Unstructured Loops

This type of loop should be redesigned not tested!!!


Other White Box Techniques
Other white box testing techniques include:
1. Condition testing
o exercises the logical conditions in a program.
2. Data flow testing
o selects test paths according to the locations of definitions and uses of variables in the
program.

Black Box Testing

Introduction

Black box testing attempts to derive sets of inputs that will fully exercise all the
functional requirements of a system. It is not an alternative to white box testing but a
complementary approach to it. This type of testing attempts to find errors in the
following categories:
1. incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.
Tests are designed to answer the following questions:
1. How is the function’s validity tested?
2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?
White box testing should be performed early in the testing process, while black box
testing tends to be applied during later stages. Test cases should be derived which
1. reduce the number of additional test cases that must be designed to achieve reasonable
testing, and
2. tell us something about the presence or absence of classes of errors, rather than an error
associated only with the specific test at hand.

Equivalence Partitioning

This method divides the input domain of a program into classes of data from which test
cases can be derived. Equivalence partitioning strives to define a test case that uncovers
classes of errors and thereby reduces the number of test cases needed. It is based on an
evaluation of equivalence classes for an input condition. An equivalence class represents
a set of valid or invalid states for input conditions.
Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes
are defined.
2. If an input condition requires a specific value, then one valid and two invalid
equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid
equivalence class are defined.
4. If an input condition is boolean, then one valid and one invalid equivalence class are
defined.
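For example, assuming a hypothetical input condition that a quantity must lie in the range 1..99 (the field and validator are invented for illustration), guideline 1 yields three equivalence classes and thus three representative test cases:

```python
# Guideline 1 for the range 1..99: one valid and two invalid classes,
# each represented by a single test value.
classes = {
    "valid (1 <= q <= 99)": 50,
    "invalid (q < 1)":      0,
    "invalid (q > 99)":     150,
}

def quantity_is_valid(q):
    """Hypothetical module under test: accepts quantities in 1..99."""
    return 1 <= q <= 99

# Three test cases stand in for the whole input domain.
for name, representative in classes.items():
    expected = name.startswith("valid")
    assert quantity_is_valid(representative) == expected
```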

Boundary Value Analysis

This method leads to a selection of test cases that exercise boundary values. It
complements equivalence partitioning since it selects test cases at the edges of a class.
Rather than focusing on input conditions solely, BVA derives test cases from the output
domain also. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include values a and b and just
above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to
exercise the minimum and maximum numbers and values just above and below these
limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to
exercise the data structure at its boundary.
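Continuing with a hypothetical input range bounded by a = 1 and b = 99 (values and validator invented for illustration), guideline 1 yields six boundary test values:

```python
a, b = 1, 99
# Values a and b, plus just below a and just above b.
boundary_values = [a - 1, a, a + 1, b - 1, b, b + 1]

def quantity_is_valid(q):
    """Hypothetical validator for the range a..b."""
    return a <= q <= b

results = [quantity_is_valid(q) for q in boundary_values]

# a - 1 and b + 1 should be rejected; the other four values accepted.
assert results == [False, True, True, True, True, False]
```

Off-by-one mistakes in the validator (writing `<` for `<=`, say) would flip one of these results, which is exactly the class of defect BVA targets.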

Cause-Effect Graphing Techniques

Cause-effect graphing is a technique that provides a concise representation of logical
conditions and corresponding actions. There are four steps:
1. Causes (input conditions) and effects (actions) are listed for a module and an identifier
is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
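A minimal sketch of steps 3 and 4, assuming a hypothetical module in which a discount (the effect) depends on two causes, membership and order size (all names and values are invented):

```python
# Decision table for causes C1 (member), C2 (total > 100) and
# effect E1 (discount applied). Each rule becomes one test case.
decision_table = [
    # (C1, C2) -> E1
    ((True,  True),  True),
    ((True,  False), False),
    ((False, True),  False),
    ((False, False), False),
]

def discount_applied(member, total):
    """Hypothetical module under test."""
    return member and total > 100

# Step 4: convert each rule to a concrete test case and run it.
for (member, big_order), effect in decision_table:
    total = 150 if big_order else 50   # concrete input realizing cause C2
    assert discount_applied(member, total) == effect
```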

Manual Testing Interview Questions-1


What makes a good test engineer?

A good test engineer has a ‘test to break’ attitude, an ability to take the point of view of
the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy
are useful in maintaining a cooperative relationship with developers, and an ability to
communicate with both technical (developers) and non-technical (customers,
management) people is useful. Previous software development experience can be helpful
as it provides a deeper understanding of the software development process, gives the
tester an appreciation for the developers’ point of view, and reduces the learning curve in
automated test tool programming. Judgement skills are needed to assess high-risk areas
of an application on which to focus testing efforts when time is limited.

What makes a good Software QA engineer?

The same qualities a good tester has are useful for a QA engineer. Additionally, they must
be able to understand the entire software development process and how it can fit into the
business approach and goals of the organization. Communication skills and the ability to
understand various sides of issues are important. In organizations in the early stages of
implementing QA processes, patience and diplomacy are especially needed. An ability to
find problems as well as to see ‘what’s missing’ is important for inspections and reviews.

What makes a good QA or Test manager?

A good QA, test, or QA/Test (combined) manager should:
• be familiar with the software development process
• be able to maintain enthusiasm of their team and promote a positive atmosphere, despite
what is a somewhat ‘negative’ process (e.g., looking for or preventing problems)
• be able to promote teamwork to increase productivity
• be able to promote cooperation between software, test, and QA engineers
• have the diplomatic skills needed to promote improvements in QA processes
• have the ability to withstand pressures and say ‘no’ to other managers when quality is
insufficient or QA processes are not being adhered to
• have people judgement skills for hiring and keeping skilled personnel
• be able to communicate with technical and non-technical people, engineers, managers,
and customers.
• be able to run meetings and keep them focused

What’s the role of documentation in QA?

Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices
should be documented such that they are repeatable. Specifications, designs, business
rules, inspection reports, configurations, code changes, test plans, test cases, bug reports,
user manuals, etc. should all be documented. There should ideally be a system for easily
finding and obtaining documents and determining what documentation will have a
particular piece of information. Change management for documentation should be used if
possible.

What’s the big deal about ‘requirements’?

One of the most reliable methods of ensuring problems, or failure, in a complex software
project is to have poorly documented requirements specifications. Requirements are the
details describing an application’s externally-perceived functionality and properties.
Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and
testable. A non-testable requirement would be, for example, ‘user-friendly’ (too
subjective). A testable requirement would be something like ‘the user must enter their
previously-assigned password to access the application’. Determining and organizing
requirements details in a useful and efficient way can be a difficult effort; different
methods are available depending on the particular project. Many books are available that
describe various approaches to this task. (See the Bookstore section’s ‘Software
Requirements Engineering’ category for books on Software Requirements.)

Care should be taken to involve ALL of a project’s significant ‘customers’ in the
requirements process. ‘Customers’ could be in-house personnel or out, and could include
end-users, customer acceptance testers, customer contract officers, customer
management, future software maintenance engineers, salespeople, etc. Anyone who could
later derail the project if their expectations aren’t met should be included if possible.

Organizations vary considerably in their handling of requirements specifications. Ideally,
the requirements are spelled out in a document with statements such as ‘The product
shall…..’. ‘Design’ specifications should not be confused with ‘requirements’; design
specifications should be traceable back to the requirements.

In some organizations requirements may end up in high level project plans, functional
specification documents, in design documents, or in other documents at various levels of
detail. No matter what they are called, some type of documentation with detailed
requirements will be needed by testers in order to properly plan and execute tests.
Without such documentation, there will be no clear-cut way to determine if a software
application is performing correctly.
‘Agile’ methods such as XP use methods requiring close interaction and cooperation
between programmers and customers/end-users to iteratively develop requirements. The
programmer uses ‘Test first’ development to first create automated unit testing code,
which essentially embodies the requirements.

What steps are needed to develop and run software tests?

The following are some of the steps to consider:

• Obtain requirements, functional design, and internal design specifications and other
necessary documents
• Obtain budget and schedule requirements
• Determine project-related personnel and their responsibilities, reporting requirements,
required standards and processes (such as release processes, change processes, etc.)
• Identify application’s higher-risk aspects, set priorities, and determine scope and
limitations of tests
• Determine test approaches and methods - unit, integration, functional, system, load,
usability tests, etc.
• Determine test environment requirements (hardware, software, communications, etc.)
• Determine testware requirements (record/playback tools, coverage analyzers, test
tracking, problem/bug tracking, etc.)
• Determine test input data requirements
• Identify tasks, those responsible for tasks, and labor requirements

• Set schedule estimates, timelines, milestones
• Determine input equivalence classes, boundary value analyses, error classes
• Prepare test plan document and have needed reviews/approvals
• Write test cases
• Have needed reviews/inspections/approvals of test cases
• Prepare test environment and testware, obtain needed user manuals/reference
documents/configuration guides/installation guides, set up test tracking processes, set up
logging and archiving processes, set up or obtain test input data
• Obtain and install software releases
• Perform tests
• Evaluate and report results
• Track problems/bugs and fixes
• Retest as needed
• Maintain and update test plans, test cases, test environment, and testware through life
cycle

What’s a ‘test plan’?

A software project test plan is a document that describes the objectives, scope, approach,
and focus of a software testing effort. The process of preparing a test plan is a useful way
to think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the ‘why’ and
‘how’ of product validation. It should be thorough enough to be useful but not so
thorough that no one outside the test group will read it. The following are some of the
items that might be included in a test plan, depending on the particular project:

• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test
plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature, functionality,
process, system, module, etc. as applicable

• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment - hardware, operating systems, other required software, data
configurations, interfaces to other systems
• Test environment validity analysis - differences between the test and production systems
and their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as screen
capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to
help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities,
deliverables, contact persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.
(See the Bookstore section’s ‘Software Testing’ and ‘Software QA’ categories for useful
books with more information.)

What’s a ‘test case’?

• A test case is a document that describes an input, action, or event and an expected
response, to determine if a feature of an application is working correctly. A test case
should contain particulars such as test case identifier, test case name, objective, test
conditions/setup, input data requirements, steps, and expected results.
• Note that the process of developing test cases can help find problems in the
requirements or design of an application, since it requires completely thinking through
the operation of the application. For this reason, it’s useful to prepare test cases early in
the development cycle if possible.
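One possible way to structure such a test case as a record (the field names follow the list above but are illustrative, not a standard):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    identifier: str          # test case identifier
    name: str                # test case name
    objective: str
    setup: str               # test conditions/setup
    input_data: dict         # input data requirements
    steps: list
    expected_result: str

# A hypothetical test case instance for a login feature.
tc = TestCase(
    identifier="TC-042",
    name="Login with valid credentials",
    objective="Verify that a registered user can log in",
    setup="User 'alice' exists with a known password",
    input_data={"username": "alice", "password": "correct-horse"},
    steps=["Open the login page", "Enter the credentials", "Press Login"],
    expected_result="User lands on the dashboard, logged in as alice",
)
print(tc.identifier, "-", tc.name)
```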

What should be done after a bug is found?

The bug needs to be communicated and assigned to developers that can fix it. After the
problem is resolved, fixes should be re-tested, and determinations made regarding
requirements for regression testing to check that fixes didn’t create problems elsewhere.
If a problem-tracking system is in place, it should encapsulate these processes. A variety
of commercial problem-tracking/management software tools are available (see the ‘Tools’
section for web resources with listings of such tools). The following are items to consider
in the tracking process:

• Complete information such that developers can understand the bug, get an idea of its
severity, and reproduce it if necessary.
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., ‘Released for Retest’, ‘New’, etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a test case or if the
developer doesn’t have easy access to the test case/test script/test tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be
helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or ‘critical’ to ‘low’ is common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at
various stages. For instance, testers need to know when retesting is needed, developers
need to know when bugs are found and how to get the needed information, and
reporting/summary capabilities are needed for managers.

What is ‘configuration management’?

Configuration management covers the processes used to control, coordinate, and track:
code, requirements, documentation, problems, change requests, designs,
tools/compilers/libraries/patches, changes made to them, and who makes the changes.
(See the ‘Tools’ section for web resources with listings of configuration management
tools. Also see the Bookstore section’s ‘Configuration Management’ category for useful
books with more information.)

What if the software is so buggy it can’t really be tested at all?

The best bet in this situation is for the testers to go through the process of reporting
whatever bugs or blocking-type problems initially show up, with the focus being on
critical bugs. Since this type of problem can severely affect schedules, and indicates
deeper problems in the software development process (such as insufficient unit testing or
insufficient integration testing, poor design, improper build or release procedures, etc.)
managers should be notified, and provided with some documentation as evidence of the
problem.

How can it be known when to stop testing?

This can be difficult to determine. Many modern software applications are so complex,
and run in such an interdependent environment, that complete testing can never be done.
Common factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends

What if there isn’t enough time for thorough testing?

Use risk analysis to determine where testing should be focused.


Since it’s rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk
analysis is appropriate to most software development projects. This requires judgement
skills, common sense, and experience. (If warranted, formal methods are also available.)

Considerations can include:

• Which functionality is most important to the project’s intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex, and thus most subject to errors?

• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?

What if the project isn’t big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project. However, if extensive
testing is still not justified, risk analysis is again needed and the same considerations as
described previously in ‘What if there isn’t enough time for thorough testing?’ apply. The
tester might then do ad hoc testing, or write up a limited test plan based on the risk
analysis.

What can be done if requirements are changing continuously?

A common problem and a major headache.


• Work with the project’s stakeholders early on to understand how requirements might
change so that alternate test plans and strategies can be worked out in advance, if
possible.
• It’s helpful if the application’s initial design allows for some adaptability so that later
changes do not require redoing the application from scratch.
• If the code is well-commented and well-documented this makes changes easier for the
developers.
• Use rapid prototyping whenever possible to help customers feel sure of their
requirements and minimize changes.
• The project’s initial schedule should allow for some extra time commensurate with the
possibility of changes.
• Try to move new requirements to a ‘Phase 2’ version of an application, while using the
original requirements for the ‘Phase 1’ version.
• Negotiate to allow only easily-implemented new requirements into the project, while
moving more difficult new requirements into future versions of the application.
• Be sure that customers and management understand the scheduling impacts, inherent
risks, and costs of significant requirements changes. Then let management or the
customers (not the developers or testers) decide if the changes are warranted - after all,
that’s their job.
• Balance the effort put into setting up automated testing with the expected effort required
to re-do them to deal with changes.
• Try to design some flexibility into automated test scripts.
• Focus initial automated testing on application aspects that are most likely to remain
unchanged.
• Devote appropriate effort to risk analysis of changes to minimize regression testing
needs.
• Design some flexibility into test cases (this is not easily done; the best bet might be to
minimize the detail in the test cases, or set up only higher-level generic-type test plans)
• Focus less on detailed test plans and test cases and more on ad hoc testing (with an
understanding of the added risk that this entails).

What if the application has functionality that wasn’t in the requirements?

It may take serious effort to determine if an application has significant unexpected or
hidden functionality, and it would indicate deeper problems in the software development
process. If the functionality isn’t necessary to the purpose of the application, it should be
removed, as it may have unknown impacts or dependencies that were not taken into
account by the designer or the customer. If not removed, design information will be
needed to determine added testing needs or regression testing needs. Management should
be made aware of any significant added risks as a result of the unexpected functionality.
If the functionality only affects areas such as minor improvements in the user interface,
for example, it may not be a significant risk.

How can Software QA processes be implemented without stifling productivity?

By implementing QA processes slowly over time, using consensus to reach agreement on
processes, and adjusting and experimenting as an organization grows and matures,
productivity will be improved instead of stifled. Problem prevention will lessen the need
for problem detection, panics and burn-out will decrease, and there will be improved
focus and less wasted effort. At the same time, attempts should be made to keep processes
simple and efficient, minimize paperwork, promote computer-based processes and
automated tracking and reporting, minimize time required in meetings, and promote
training as part of the QA process. However, no one - especially talented technical types -
likes rules or bureaucracy, and in the short run things may slow down a bit. A typical
scenario would be that more days of planning and development will be needed, but less
time will be required for late-night bug-fixing and calming of irate customers.

What if an organization is growing so fast that fixed QA processes are impossible?

This is a common problem in the software industry, especially in new technology areas.
There is no easy solution in this situation, other than:
• Hire good people
• Management should ‘ruthlessly prioritize’ quality issues and maintain focus on the
customer
• Everyone in the organization should be clear on what ‘quality’ means to the customer

How does a client/server environment affect testing?

Client/server applications can be quite complex due to the multiple dependencies among
clients, data communications, hardware, and servers. Thus testing requirements can be
extensive. When time is limited (as it usually is) the focus should be on integration and
system testing. Additionally, load/stress/performance testing may be useful in
determining client/server application limitations and capabilities. There are commercial
tools to assist with such testing. (See the ‘Tools’ section for web resources with listings
that include these kinds of test tools.)
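
As a rough sketch of what load/stress testing looks like, the Python fragment below fires concurrent requests and summarizes latencies. The handle_request function is a hypothetical stand-in for a real client/server call; a commercial tool does far more, but the overall shape is similar.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    """Hypothetical stand-in for a real client/server round trip."""
    time.sleep(0.001)  # simulate network latency plus server processing
    return {"status": 200, "payload": payload}

def run_load_test(num_clients, requests_per_client):
    """Run concurrent simulated clients and collect per-request latencies."""
    def one_client(client_id):
        times = []
        for seq in range(requests_per_client):
            start = time.perf_counter()
            response = handle_request({"client": client_id, "seq": seq})
            assert response["status"] == 200  # a failed request would surface here
            times.append(time.perf_counter() - start)
        return times

    latencies = []
    with ThreadPoolExecutor(max_workers=num_clients) as pool:
        for client_times in pool.map(one_client, range(num_clients)):
            latencies.extend(client_times)
    return {
        "requests": len(latencies),
        "mean_s": statistics.mean(latencies),
        "max_s": max(latencies),
    }

report = run_load_test(num_clients=5, requests_per_client=10)
print(f"{report['requests']} requests, mean {report['mean_s']:.4f}s, max {report['max_s']:.4f}s")
```

Ramping num_clients upward until response times degrade is the essence of finding the point at which a client/server system's response time degrades or fails.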

How can World Wide Web sites be tested?

Web sites are essentially client/server applications - with web servers and ‘browser’
clients. Consideration should be given to the interactions between html pages, TCP/IP
communications, Internet connections, firewalls, applications that run in web pages (such
as applets, javascript, plug-in applications), and applications that run on the server side
(such as cgi scripts, database interfaces, logging applications, dynamic page generators,
asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions
of each, small but sometimes significant differences between them, variations in
connection speeds, rapidly changing technologies, and multiple standards and protocols.
The end result is that testing for web sites can become a major ongoing effort. Other
considerations might include:

• What are the expected loads on the server (e.g., number of hits per unit time), and what
kind of performance is required under such loads (such as web server response time,
database query response times)? What kinds of tools will be needed for performance
testing (such as web load testing tools, other tools already in house that can be adapted,
web robot downloading tools, etc.)?
• Who is the target audience? What kind of browsers will they be using? What kind of
connection speeds will they be using? Are they intra-organization (thus with likely high
connection speeds and similar browsers) or Internet-wide (thus with a wide variety of
connection speeds and browser types)?
• What kind of performance is expected on the client side (e.g., how fast should pages
appear, how fast should animations, applets, etc. load and run)?
• Will downtime for server and content maintenance/upgrades be allowed? How much?
• What kinds of security (firewalls, encryptions, passwords, etc.) will be required and
what is it expected to do? How can it be tested?
• How reliable are the site’s Internet connections required to be? And how does that affect
backup system or redundant connection requirements and testing?
• What processes will be required to manage updates to the web site’s content, and what
are the requirements for maintaining, tracking, and controlling page content, graphics,
links, etc.?
• Which HTML specification will be adhered to? How strictly? What variations will be
allowed for targeted browsers?
• Will there be any standards or requirements for page appearance and/or graphics
throughout a site or parts of a site?
• How will internal and external links be validated and updated? How often?
• Can testing be done on the production system, or will a separate test system be
required? How are browser caching, variations in browser option settings, dial-up
connection variabilities, and real-world internet ‘traffic congestion’ problems to be
accounted for in testing?
• How extensive or customized are the server logging and reporting requirements; are
they considered an integral part of the system and do they require testing?
• How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained,
tracked, controlled, and tested?
Some sources of site security information include the Usenet newsgroup
‘comp.security.announce’ and links concerning web site security in the ‘Other Resources’
section.
Some usability guidelines to consider - these are subjective and may or may not apply to
a given situation (Note: more information on usability testing issues can be found in
articles about web site usability in the ‘Other Resources’ section):
• Pages should be 3-5 screens max unless content is tightly focused on a single topic. If
larger, provide internal links within the page.
• The page layouts and design elements should be consistent throughout a site, so that it’s
clear to the user that they’re still within a site.
• Pages should be as browser-independent as possible, or pages should be provided or
generated based on the browser-type.
• All pages should have links external to the page; there should be no dead-end pages.
• The page owner, revision date, and a link to a contact person or organization should be
included on each page.
Many new web site test tools have appeared in recent years and more than 280 of
them are listed in the ‘Web Test Tools’ section.
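
Parts of the checklist above, link validation in particular, can be automated. The Python sketch below (hypothetical page and host names, standard library only) collects anchor targets from a page and flags internal links that point at no known page; external links are set aside, since validating them requires live requests.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Gather href targets from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_html, known_internal_pages, site_host):
    """Return (broken internal links, external links needing a live check)."""
    collector = LinkCollector()
    collector.feed(page_html)
    broken, external = [], []
    for link in collector.links:
        host = urlparse(link).netloc
        if host and host != site_host:
            external.append(link)      # verify later with a real request
        elif link not in known_internal_pages:
            broken.append(link)        # internal link with no matching page
    return broken, external

page = ('<a href="/home.html">Home</a>'
        '<a href="/gone.html">Old page</a>'
        '<a href="http://example.org/x">Elsewhere</a>')
broken, external = check_links(page, {"/home.html"}, "mysite.example.com")
```

Run on a schedule against the site map, a script along these lines gives a concrete answer to how often internal and external links get validated.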

Manual Testing Interview Questions-2


How is testing affected by object-oriented designs?

Well-engineered object-oriented design can make it easier to trace from code to internal
design to functional design to requirements. While there will be little effect on black box
testing (where an understanding of the internal design of the application is unnecessary),
white-box testing can be oriented to the application’s objects. If the application was well-
designed, this can simplify test design.

What is Extreme Programming and what’s it got to do with testing?

Extreme Programming (XP) is a software development approach for small teams on risk-
prone projects with unstable requirements. It was created by Kent Beck who described
the approach in his book ‘Extreme Programming Explained’ (See the Softwareqatest.com
Books page.). Testing (‘extreme testing’) is a core aspect of Extreme Programming.
Programmers are expected to write unit and functional test code first - before the
application is developed. Test code is under source control along with the rest of the code.
Customers are expected to be an integral part of the project team and to help develop
scenarios for acceptance/black box testing. Acceptance tests are preferably automated,
and are modified and rerun for each of the frequent development iterations. QA and test
personnel are also required to be an integral part of the project team. Detailed
requirements documentation is not used, and frequent re-scheduling, re-estimating, and
re-prioritizing is expected. For more info see the XP-related listings in the
Softwareqatest.com ‘Other Resources’ section.
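
The test-first idea can be shown in miniature. For a hypothetical cart_total function, an XP programmer would write the tests below first, then write just enough code to make them pass:

```python
import unittest

def cart_total(prices, discount_percent=0):
    """Sum item prices and apply a percentage discount.

    Hypothetical example code; in XP this is written *after* the tests.
    """
    if not 0 <= discount_percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(sum(prices) * (1 - discount_percent / 100), 2)

class CartTotalTest(unittest.TestCase):
    def test_plain_total(self):
        self.assertEqual(cart_total([10.0, 5.5]), 15.5)

    def test_discount_applied(self):
        self.assertEqual(cart_total([100.0], discount_percent=25), 75.0)

    def test_rejects_bad_discount(self):
        with self.assertRaises(ValueError):
            cart_total([1.0], discount_percent=150)

# Run the suite programmatically (unittest.main() would also work from a script).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CartTotalTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Since the test code lives under source control with the rest of the code, every development iteration can re-run it automatically.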

What is ‘Software Quality Assurance’?

Software QA involves the entire software development PROCESS - monitoring and
improving the process, making sure that any agreed-upon standards and procedures are
followed, and ensuring that problems are found and dealt with. It is oriented to
‘prevention’. (See the Bookstore section’s ‘Software QA’ category for a list of useful
books on Software Quality Assurance.)

What is ‘Software Testing’?

Testing involves operation of a system or application under controlled conditions and
evaluating the results (eg, ‘if the user is in interface A of the application while using
hardware B, and does C, then D should happen’). The controlled conditions should
include both normal and abnormal conditions. Testing should intentionally attempt to
make things go wrong to determine if things happen when they shouldn’t or things don’t
happen when they should. It is oriented to ‘detection’. (See the Bookstore section’s
‘Software Testing’ category for a list of useful books on Software Testing.)
• Organizations vary considerably in how they assign responsibility for QA and testing.
Sometimes they’re the combined responsibility of one group or individual. Also common
are project teams that include a mix of testers and developers who work closely together,
with overall QA processes monitored by project managers. It will depend on what best
fits an organization’s size and business structure.

What are some recent major computer system failures caused by software bugs?

• A major U.S. retailer was reportedly hit with a large government fine in October of 2003
due to web site errors that enabled customers to view one another’s online orders.
• News stories in the fall of 2003 stated that a manufacturing company recalled all their
transportation products in order to fix a software problem causing instability in certain
circumstances. The company found and reported the bug itself and initiated the recall
procedure in which a software upgrade fixed the problems.
• In August of 2003 a U.S. court ruled that a lawsuit against a large online brokerage
company could proceed; the lawsuit reportedly involved claims that the company was not
fixing system problems that sometimes resulted in failed stock trades, based on the
experiences of 4 plaintiffs during an 8-month period. A previous lower court’s ruling that
“…six miscues out of more than 400 trades does not indicate negligence.” was
invalidated.
• In April of 2003 it was announced that the largest student loan company in the U.S.
made a software error in calculating the monthly payments on 800,000 loans. Although
borrowers were to be notified of an increase in their required payments, the company will
still reportedly lose $8 million in interest. The error was uncovered when borrowers
began reporting inconsistencies in their bills.
• News reports in February of 2003 revealed that the U.S. Treasury Department mailed
50,000 Social Security checks without any beneficiary names. A spokesperson indicated
that the missing names were due to an error in a software change. Replacement checks
were subsequently mailed out with the problem corrected, and recipients were then able
to cash their Social Security checks.
• In March of 2002 it was reported that software bugs in Britain’s national tax system
resulted in more than 100,000 erroneous tax overcharges. The problem was partly
attributed to the difficulty of testing the integration of multiple systems.
• A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-
shelf software that had long been used in systems for tracking certain U.S. nuclear
materials. The same software had been recently donated to another country to be used in
tracking their own nuclear materials, and it was not until scientists in that country
discovered the problem, and shared the information, that U.S. officials became aware of
the problems.
• According to newspaper stories in mid-2001, a major systems development contractor
was fired and sued over problems with a large retirement plan management system.
According to the reports, the client claimed that system deliveries were late, the software
had excessive defects, and it caused other systems to crash.
• In January of 2001 newspapers reported that a major European railroad was hit by the
aftereffects of the Y2K bug. The company found that many of their newer trains would
not run due to their inability to recognize the date ‘31/12/2000’; the trains were started by
altering the control system’s date settings.
• News reports in September of 2000 told of a software vendor settling a lawsuit with a
large mortgage lender; the vendor had reportedly delivered an online mortgage
processing system that did not meet specifications, was delivered late, and didn’t work.
• In early 2000, major problems were reported with a new computer system in a large
suburban U.S. public school district with 100,000+ students; problems included 10,000
erroneous report cards and students left stranded by failed class registration systems; the
district’s CIO was fired. The school district decided to reinstate its original 25-year-old
system for at least a year until the bugs were worked out of the new system by the
software vendors.
• In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was
believed to be lost in space due to a simple data conversion error. It was determined that
spacecraft software used certain data in English units that should have been in metric
units. Among other tasks, the orbiter was to serve as a communications relay for the Mars
Polar Lander mission, which failed for unknown reasons in December 1999. Several
investigating panels were convened to determine the process failures that allowed the
error to go undetected.
• Bugs in software supporting a large commercial high-speed data network affected
70,000 business customers over a period of 8 days in August of 1999. Among those
affected was the electronic trading system of the largest U.S. futures exchange, which
was shut down for most of a week as a result of the outages.
• In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military satellite
launch, the costliest unmanned accident in the history of Cape Canaveral launches. The
failure was the latest in a string of launch failures, triggering a complete military and
industry review of U.S. space launch programs, including software integration and testing
processes. Congressional oversight hearings were requested.
• A small town in Illinois in the U.S. received an unusually large monthly electric bill of
$7 million in March of 1999. This was about 700 times larger than its normal bill. It
turned out to be due to bugs in new software that had been purchased by the local power
company to deal with Y2K software issues.
• In early 1999 a major computer game company recalled all copies of a popular new
product due to software problems. The company made a public apology for releasing a
product before it was ready.

Why is it often hard for management to get serious about quality assurance?

Solving problems is a high-visibility process; preventing problems is low-visibility. This
is illustrated by an old parable:

In ancient China there was a family of healers, one of whom was known throughout the
land and employed as a physician to a great lord. The physician was asked which of his
family was the most skillful healer. He replied,

“I tend to the sick and dying with drastic and dramatic treatments, and on occasion
someone is cured and my name gets out among the lords.”

“My elder brother cures sickness when it just begins to take root, and his skills are known
among the local peasants and neighbors.”

“My eldest brother is able to sense the spirit of sickness and eradicate it before it takes
form. His name is unknown outside our home.”

Why does software have bugs?

• miscommunication or no communication - as to specifics of what an application should
or shouldn’t do (the application’s requirements).
• software complexity - the complexity of current software applications can be difficult to
comprehend for anyone without experience in modern-day software development.
Windows-type interfaces, client-server and distributed applications, data
communications, enormous relational databases, and sheer size of applications have all
contributed to the exponential growth in software/system complexity. And the use of
object-oriented techniques can complicate instead of simplify a project unless it is well-
engineered.
• programming errors - programmers, like anyone else, can make mistakes.
• changing requirements (whether documented or undocumented) - the customer may not
understand the effects of changes, or may understand and request them anyway -
redesign, rescheduling of engineers, effects on other projects, work already completed
that may have to be redone or thrown out, hardware requirements that may be affected,
etc. If there are many minor changes or any major changes, known and unknown
dependencies among parts of the project are likely to interact and cause problems, and the
complexity of coordinating changes may result in errors. Enthusiasm of engineering staff
may be affected. In some fast-changing business environments, continuously modified
requirements may be a fact of life. In this case, management must understand the
resulting risks, and QA and test engineers must adapt and plan for continuous extensive
testing to keep the inevitable bugs from running out of control - see ‘What can be done if
requirements are changing continuously?’ in Part 2 of the FAQ.
• time pressures - scheduling of software projects is difficult at best, often requiring a lot
of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
• egos - people prefer to say things like:
‘no problem’
‘piece of cake’
‘I can whip that out in a few hours’
‘it should be easy to update that old code’
instead of:
‘that adds a lot of complexity and we could end up making a lot of mistakes’
‘we have no idea if we can do that; we’ll wing it’
‘I can’t estimate how long it will take, until I take a close look at it’
‘we can’t figure out what that old spaghetti code did in the first place’

If there are too many unrealistic ‘no problem’s’, the result is bugs.

• poorly documented code - it’s tough to maintain and modify code that is badly written
or poorly documented; the result is bugs. In many organizations management provides no
incentive for programmers to document their code or write clear, understandable,
maintainable code. In fact, it’s usually the opposite: they get points mostly for quickly
turning out code, and there’s job security if nobody else can understand it (‘if it was hard
to write, it should be hard to read’).
• software development tools - visual tools, class libraries, compilers, scripting tools, etc.
often introduce their own bugs or are poorly documented, resulting in added bugs.

How can new Software QA processes be introduced in an existing organization?

• A lot depends on the size of the organization and the risks involved. For large
organizations with high-risk (in terms of lives or property) projects, serious management
buy-in is required and a formalized QA process is necessary.
• Where the risk is lower, management and organizational buy-in and QA implementation
may be a slower, step-at-a-time process. QA processes should be balanced with
productivity so as to keep bureaucracy from getting out of hand.
• For small groups or projects, a more ad-hoc process may be appropriate, depending on
the type of customers and projects. A lot will depend on team leads or managers,
feedback to developers, and ensuring adequate communications among customers,
managers, developers, and testers.
• The most value for effort will be in (a) requirements management processes, with a goal
of clear, complete, testable requirement specifications embodied in requirements or
design documentation and (b) design inspections and code inspections.

What is verification? validation?

Verification typically involves reviews and meetings to evaluate documents, plans, code,
requirements, and specifications. This can be done with checklists, issues lists,
walkthroughs, and inspection meetings. Validation typically involves actual testing and
takes place after verifications are completed. The term ‘IV & V’ refers to Independent
Verification and Validation.

What is a ‘walkthrough’?

A ‘walkthrough’ is an informal meeting for evaluation or informational purposes. Little
or no preparation is usually required.

What’s an ‘inspection’?

An inspection is more formalized than a ‘walkthrough’, typically with 3-8 people
including a moderator, reader, and a recorder to take notes. The subject of the inspection
is typically a document such as a requirements spec or a test plan, and the purpose is to
find problems and see what’s missing, not to fix anything. Attendees should prepare for
this type of meeting by reading through the document; most problems will be found during
this preparation. The result of the inspection meeting should be a written report.
Thorough preparation for inspections is difficult, painstaking work, but is one of the most
cost effective methods of ensuring quality. Employees who are most skilled at inspections
are like the ‘eldest brother’ in the parable in ‘Why is it often hard for management to get
serious about quality assurance?’. Their skill may have low visibility but they are
extremely valuable to any software development organization, since bug prevention is far
more cost-effective than bug detection.

What kinds of testing should be considered?

• Black box testing - not based on any knowledge of internal design or code. Tests are
based on requirements and functionality.
• White box testing - based on knowledge of the internal logic of an application’s code.
Tests are based on coverage of code statements, branches, paths, conditions.
• unit testing - the most ‘micro’ scale of testing; to test particular functions or code
modules. Typically done by the programmer and not by testers, as it requires detailed
knowledge of the internal program design and code. Not always easily done unless the
application has a well-designed architecture with tight code; may require developing test
driver modules or test harnesses.
• incremental integration testing - continuous testing of an application as new
functionality is added; requires that various aspects of an application’s functionality be
independent enough to work separately before all parts of the program are completed, or
that test drivers be developed as needed; done by programmers or by testers.
• integration testing - testing of combined parts of an application to determine if they
function together correctly. The ‘parts’ can be code modules, individual applications,
client and server applications on a network, etc. This type of testing is especially relevant
to client/server and distributed systems.
• functional testing - black-box type testing geared to functional requirements of an
application; this type of testing should be done by testers. This doesn’t mean that the
programmers shouldn’t check that their code works before releasing it (which of course
applies to any stage of testing.)
• system testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.
• end-to-end testing - similar to system testing; the ‘macro’ end of the test scale; involves
testing of a complete application environment in a situation that mimics real-world use,
such as interacting with a database, using network communications, or interacting with
other hardware, applications, or systems if appropriate.
• sanity testing or smoke testing - typically an initial testing effort to determine if a new
software version is performing well enough to accept it for a major testing effort. For
example, if the new software is crashing systems every 5 minutes, bogging down systems
to a crawl, or corrupting databases, the software may not be in a ‘sane’ enough condition
to warrant further testing in its current state.
• regression testing - re-testing after fixes or modifications of the software or its
environment. It can be difficult to determine how much re-testing is needed, especially
near the end of the development cycle. Automated testing tools can be especially useful
for this type of testing.
• acceptance testing - final testing based on specifications of the end-user or customer, or
based on use by end-users/customers over some limited period of time.
• load testing - testing an application under heavy loads, such as testing of a web site
under a range of loads to determine at what point the system’s response time degrades or
fails.
• stress testing - term often used interchangeably with ‘load’ and ‘performance’ testing.
Also used to describe such tests as system functional testing while under unusually heavy
loads, heavy repetition of certain actions or inputs, input of large numerical values, large
complex queries to a database system, etc.
• performance testing - term often used interchangeably with ‘stress’ and ‘load’ testing.
Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in requirements
documentation or QA or Test Plans.
• usability testing - testing for ‘user-friendliness’. Clearly this is subjective, and will
depend on the targeted end-user or customer. User interviews, surveys, video recording of
user sessions, and other techniques can be used. Programmers and testers are usually not
appropriate as usability testers.
• install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
• recovery testing - testing how well a system recovers from crashes, hardware failures,
or other catastrophic problems.
• security testing - testing how well the system protects against unauthorized internal or
external access, willful damage, etc; may require sophisticated testing techniques.
• compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.
• exploratory testing - often taken to mean a creative, informal software test that is not
based on formal test plans or test cases; testers may be learning the software as they test
it.
• ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers
have significant understanding of the software before testing it.
• user acceptance testing - determining if software is satisfactory to an end-user or
customer.
• comparison testing - comparing software weaknesses and strengths to competing
products.
• alpha testing - testing of an application when development is nearing completion; minor
design changes may still be made as a result of such testing. Typically done by end-users
or others, not by programmers or testers.
• beta testing - testing when development and testing are essentially completed and final
bugs and problems need to be found before final release. Typically done by end-users or
others, not by programmers or testers.
• mutation testing - a method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes (‘bugs’) and retesting with the original test
data/cases to determine if the ‘bugs’ are detected. Proper implementation requires large
computational resources.
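
As a tiny illustration of the mutation-testing item above: a hypothetical max_of function, a small suite of test cases, and a single hand-made mutant (>= changed to <=). A useful suite ‘kills’ the mutant, i.e. at least one case fails against it. Real mutation tools generate many such mutants automatically, which is where the large computational cost comes from.

```python
def max_of(a, b):
    """Original code under test (hypothetical)."""
    return a if a >= b else b

def max_of_mutant(a, b):
    """The same code with a deliberately injected 'bug': >= became <=."""
    return a if a <= b else b

# (inputs, expected result) pairs that make up the test suite
TEST_CASES = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

def suite_passes(func):
    """True if every test case passes when run against func."""
    return all(func(*args) == expected for args, expected in TEST_CASES)

original_passes = suite_passes(max_of)           # the real code passes the suite
mutant_killed = not suite_passes(max_of_mutant)  # a good suite detects the mutant
```

If the mutant had survived, that would signal a gap in the test data, which is exactly what mutation testing is meant to expose.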

What are 5 common problems in the software development process?

• poor requirements - if requirements are unclear, incomplete, too general, or not testable,
there will be problems.
• unrealistic schedule - if too much work is crammed in too little time, problems are
inevitable.
• inadequate testing - no one will know whether or not the program is any good until the
customer complains or systems crash.
• featuritis - requests to pile on new features after development is underway; extremely
common.
• miscommunication - if developers don’t know what’s needed or customers have
erroneous expectations, problems are guaranteed.

What are 5 common solutions to software development problems?

• solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements
that are agreed to by all players. Use prototypes to help nail down requirements.
• realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-
testing, changes, and documentation; personnel should be able to complete the project
without burning out.
• adequate testing - start testing early on, re-test after fixes or changes, plan for adequate
time for testing and bug-fixing.
• stick to initial requirements as much as possible - be prepared to defend against changes
and additions once development has begun, and be prepared to explain consequences. If
changes are necessary, they should be adequately reflected in related schedule changes. If
possible, use rapid prototyping during the design phase so that customers can see what to
expect. This will provide them a higher comfort level with their requirements decisions
and minimize changes later on.
• communication - require walkthroughs and inspections when appropriate; make
extensive use of group communication tools - e-mail, groupware, networked bug-tracking
tools and change management tools, intranet capabilities, etc.; ensure that documentation
is available and up-to-date - preferably electronic, not paper; promote teamwork and
cooperation; use prototypes early on so that customers’ expectations are clarified.

What is software ‘quality’?

Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable. However, quality is obviously a
subjective term. It will depend on who the ‘customer’ is and their overall influence in the
scheme of things. A wide-angle view of the ‘customers’ of a software development
project might include end-users, customer acceptance testers, customer contract officers,
customer management, the development organization’s
management/accountants/testers/salespeople, future software maintenance engineers,
stockholders, magazine columnists, etc. Each type of ‘customer’ will have their own slant
on ‘quality’ - the accounting department might define quality in terms of profits while an
end-user might define quality as user-friendly and bug-free.

What is ‘good code’?

‘Good code’ is code that works, is bug free, and is readable and maintainable. Some
organizations have coding ‘standards’ that all developers are supposed to adhere to, but
everyone has different ideas about what’s best, or what is too many or too few rules.
There are also various theories and metrics, such as McCabe Complexity metrics. It
should be kept in mind that excessive use of standards and rules can stifle productivity
and creativity. ‘Peer reviews’, ‘buddy checks’, code analysis tools, etc. can be used to
check for problems and enforce standards.

For C and C++ coding, here are some typical ideas to consider in setting rules/standards;
these may or may not apply to a particular situation:
• minimize or eliminate use of global variables.
• use descriptive function and method names - use both upper and lower case, avoid
abbreviations, use as many characters as necessary to be adequately descriptive (use of
more than 20 characters is not out of line); be consistent in naming conventions.
• use descriptive variable names - use both upper and lower case, avoid abbreviations, use
as many characters as necessary to be adequately descriptive (use of more than 20
characters is not out of line); be consistent in naming conventions.
• function and method sizes should be minimized; less than 100 lines of code is good, less
than 50 lines is preferable.
• function descriptions should be clearly spelled out in comments preceding a function’s
code.
• organize code for readability.
• use whitespace generously - vertically and horizontally
• each line of code should contain 70 characters max.
• one code statement per line.
• coding style should be consistent throughout a program (eg, use of brackets, indentations,
naming conventions, etc.)
• in adding comments, err on the side of too many rather than too few comments; a
common rule of thumb is that there should be at least as many lines of comments
(including header blocks) as lines of code.
• no matter how small, an application should include documentation of the overall
program function and flow (even a few paragraphs is better than nothing); or if possible a
separate flow chart and detailed program documentation.
• make extensive use of error handling procedures and status and error logging.
• for C++, to minimize complexity and increase maintainability, avoid too many levels of
inheritance in class hierarchies (relative to the size and complexity of the application).
Minimize use of multiple inheritance, and minimize use of operator overloading (note
that the Java programming language eliminates multiple inheritance and operator
overloading.)
• for C++, keep class methods small, less than 50 lines of code per method is preferable.
• for C++, make liberal use of exception handlers.
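
The rules above are aimed at C and C++, but most carry over directly to other languages. A hypothetical Python fragment illustrating several of them - descriptive names, a function header comment, argument validation, and error logging:

```python
import logging

logger = logging.getLogger("loan_calculator")

def calculate_monthly_payment(principal, annual_rate_percent, num_months):
    """Return the fixed monthly payment for an amortized loan.

    Hypothetical illustration of the style rules: descriptive names,
    a header comment, explicit error handling, and error logging.
    """
    if principal <= 0 or num_months <= 0:
        logger.error("invalid loan terms: principal=%s, months=%s",
                     principal, num_months)
        raise ValueError("principal and num_months must be positive")
    if annual_rate_percent == 0:
        return round(principal / num_months, 2)
    monthly_rate = annual_rate_percent / 100 / 12
    growth = (1 + monthly_rate) ** num_months
    return round(principal * monthly_rate * growth / (growth - 1), 2)
```

The same function written with one-letter names, no validation, and no comments would be shorter, but far harder to review, test, and maintain.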

What is ‘good design’?

‘Design’ could refer to many things, but often refers to ‘functional design’ or ‘internal
design’. Good internal design is indicated by software code whose overall structure is
clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-
handling and status logging capability; and works correctly when implemented. Good
functional design is indicated by an application whose functionality can be traced back to
customer and end-user requirements. (See further discussion of functional and internal
design in ‘What’s the big deal about requirements?’ in FAQ #2.) For programs that have a
user interface, it’s often a good idea to assume that the end user will have little computer
knowledge and may not read a user manual or even the on-line help; some common
rules-of-thumb include:
• the program should act in a way that least surprises the user
• it should always be evident to the user what can be done next and how to exit
• the program shouldn’t let the users do something stupid without warning them.

What is SEI? CMM? ISO? IEEE? ANSI? Will it help?

• SEI = ‘Software Engineering Institute’ at Carnegie-Mellon University; initiated by the
U.S. Defense Department to help improve software development processes.
• CMM = ‘Capability Maturity Model’, developed by the SEI. It’s a model of 5 levels of
organizational ‘maturity’ that determine effectiveness in delivering quality software. It is
geared to large organizations such as large U.S. Defense Department contractors.
However, many of the QA processes involved are appropriate to any organization, and if
reasonably applied can be helpful. Organizations can receive CMM ratings by
undergoing assessments by qualified auditors.

Level 1 - characterized by chaos, periodic panics, and heroic efforts required by
individuals to successfully complete projects. Few if any processes in place; successes
may not be repeatable.

Level 2 - software project tracking, requirements management, realistic planning, and
configuration management processes are in place; successful practices can be repeated.

Level 3 - standard software development and maintenance processes are integrated
throughout an organization; a Software Engineering Process Group is in place to
oversee software processes, and training programs are used to ensure understanding and
compliance.

Level 4 - metrics are used to track productivity, processes, and products. Project
performance is predictable, and quality is consistently high.

Level 5 - the focus is on continuous process improvement. The impact of new processes
and technologies can be predicted and effectively implemented when required.

Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of
those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings
during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and
0.4% at 5.) The median size of organizations was 100 software engineering/maintenance
personnel; 32% of organizations were U.S. federal contractors or agencies. For those
rated at Level 1, the most problematical key process area was Software Quality Assurance.

• ISO = ‘International Organisation for Standardization’ - The ISO 9001:2000 standard
(which replaces the previous standard of 1994) concerns quality systems that are assessed
by outside auditors, and it applies to many kinds of production and manufacturing
organizations, not just software. It covers documentation, design, development,
production, testing, installation, servicing, and other processes. The full set of standards
consists of: (a)Q9001-2000 - Quality Management Systems: Requirements; (b)Q9000-
2000 - Quality Management Systems: Fundamentals and Vocabulary; (c)Q9004-2000 -
Quality Management Systems: Guidelines for Performance Improvements. To be ISO
9001 certified, a third-party auditor assesses an organization, and certification is typically
good for about 3 years, after which a complete reassessment is required. Note that ISO
certification does not necessarily indicate quality products - it indicates only that
documented processes are followed. Also see
http://www.iso.ch/ for the latest information. In the U.S. the standards can be purchased
via the ASQ web site at http://e-standards.asq.org/

• IEEE = ‘Institute of Electrical and Electronics Engineers’ - among other things, creates
standards such as ‘IEEE Standard for Software Test Documentation’ (IEEE/ANSI
Standard 829), ‘IEEE Standard of Software Unit Testing (IEEE/ANSI Standard 1008),
‘IEEE Standard for Software Quality Assurance Plans’ (IEEE/ANSI Standard 730), and
others.

• ANSI = ‘American National Standards Institute’, the primary industrial standards body
in the U.S.; publishes some software-related standards in conjunction with the IEEE and
ASQ (American Society for Quality).

• Other software development process assessment methods besides CMM and ISO 9000
include SPICE, Trillium, TickIT, and Bootstrap.

What is the ‘software life cycle’?

The life cycle begins when an application is first conceived and ends when it is no longer
in use. It includes aspects such as initial concept, requirements analysis, functional
design, internal design, documentation planning, test planning, coding, document
preparation, integration, testing, maintenance, updates, retesting, phase-out, and other
aspects.

Will automated testing tools make testing easier?

• Possibly. For small projects, the time needed to learn and implement them may not be
worth it. For larger projects, or on-going long-term projects they can be valuable.
• A common type of automated tool is the ‘record/playback’ type. For example, a tester
could click through all combinations of menu choices, dialog box choices, buttons, etc. in
an application GUI and have them ‘recorded’ and the results logged by a tool. The
‘recording’ is typically in the form of text based on a scripting language that is
interpretable by the testing tool. If new buttons are added, or some underlying code in the
application is changed, etc. the application might then be retested by just ‘playing back’
the ‘recorded’ actions, and comparing the logging results to check effects of the changes.
The problem with such tools is that if there are continual changes to the system being
tested, the ‘recordings’ may have to be changed so much that it becomes very time-
consuming to continuously update the scripts. Additionally, interpretation and analysis of
results (screens, data, logs, etc.) can be a difficult task. Note that there are
record/playback tools for text-based interfaces also, and for all types of platforms.
• Other automated tools can include:
code analyzers - monitor code complexity, adherence to standards, etc.
coverage analyzers - these tools check which parts of the code have been exercised by a
test, and may be oriented to code statement coverage, condition coverage, path coverage,
etc.
memory analyzers - such as bounds-checkers and leak detectors.
load/performance test tools - for testing client/server and web applications under
various load levels.
web test tools - to check that links are valid, HTML code usage is correct, client-side and
server-side programs work, and a web site’s interactions are secure.

other tools - for test case management, documentation management, bug reporting, and
configuration management.
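
The ‘record/playback’ idea described above can be sketched roughly as follows. Everything here is a hypothetical stand-in: real tools record GUI events into their own scripting language, while this sketch hard-codes a ‘recorded’ script and replays it against a fake application object.

```python
# Minimal sketch of record/playback testing. FakeApp, the action names,
# and the log format are hypothetical stand-ins for a real GUI tool.

class FakeApp:
    """Stands in for an application under test."""
    def __init__(self):
        self.log = []

    def click(self, widget):
        self.log.append(f"clicked {widget}")

def record():
    """Pretend a tester clicked through the GUI; return the 'script'."""
    return [("click", "File"), ("click", "Open"), ("click", "Cancel")]

def playback(app, script):
    """Replay recorded actions against the application and return its log."""
    for action, target in script:
        getattr(app, action)(target)
    return app.log

baseline = playback(FakeApp(), record())
# After a code change, rerun the same script and compare logs to
# check the effects of the change:
rerun = playback(FakeApp(), record())
assert rerun == baseline  # unchanged behaviour -> logs match
```

As the text notes, the weak point of this approach is that the ‘recording’ must be maintained whenever the application's interface changes.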

Manual Testing Process

A process is a roadmap for developing a project; it consists of a number of sequential
steps.

Software Testing Life Cycle:


• Test Plan
• Test Development
• Test Execution
• Analyse Results
• Defect Tracking
• Summarise Report

Test Plan :

It is a document which describes the testing environment, purpose, scope, objectives, test
strategy, schedules, milestones, testing tools, roles and responsibilities, risks, training,
and staffing; who is going to test the application, what types of tests should be performed,
and how defects will be tracked.

Test Development :

• Preparing test cases
• Preparing test data
• Preparing test procedure
• Preparing test scenario
• Writing test script

Test Execution :

In this phase we execute the test cases and scripts that were prepared in the test
development phase.

Analyse Results :

Once the tests are executed, each will either pass or fail. We need to analyse those results
during this phase.

Defect Tracking :

Whenever we find a defect in the application we need to prepare a bug report and
forward it to the Test Team Lead and the Development Team. The Development Team
will fix the bug. Then we have to test the application again. This cycle repeats until we
get the software without defects.

Summarise Reports :

• Test Reports
• Bug Reports
• Test Documentation

Software Testing Life Cycle


• Identify Test Candidates

• Test Plan
• Design Test Cases
• Execute Tests
• Evaluate Results
• Document Test Results
• Causal Analysis / Preparation of Validation Reports
• Regression Testing / Follow up on reported bugs.

Test Plan Frequently Asked Questions


1. Why you cannot download a Word version of this test plan.

I have received numerous requests for an MS Word version of the test plan.
However, although the web pages were created directly from a Word document, I no
longer have a copy of that original Word document.
Also, having prepared numerous test plans, I know that the content is more important
than the format. See the next point for more info on the content of a test plan.

2. What a test plan should contain

A software project test plan is a document that describes the objectives, scope, approach,
and focus of a software testing effort. The process of preparing a test plan is a useful way
to think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the ‘why’ and
‘how’ of product validation. It should be thorough enough to be useful but not so
thorough that no one outside the test group will read it.

A test plan states what the items to be tested are, at what level they will be tested, what
sequence they are to be tested in, how the test strategy will be applied to the testing of
each item, and describes the test environment.

A test plan should ideally be organisation-wide, being applicable to all of the
organisation’s software developments.

The objective of each test plan is to provide a plan for verifying, by testing the
software, that the software produced fulfils the functional or design statements of the
appropriate software specification. In the case of acceptance testing and system testing,
this generally means the Functional Specification.

The first consideration when preparing the Test Plan is who the intended audience is – i.e.
the audience for a Unit Test Plan would be different, and thus the content would have to
be adjusted accordingly.

You should begin the test plan as soon as possible. Generally it is desirable to begin the
master test plan at the same time the Requirements documents and the Project Plan are
being developed. Test planning can (and should) have an impact on the Project Plan.
Plans that are written early will have to be changed during the course of the development
and testing, but that is valuable, because the plan records the progress of the testing and
helps planners become more proficient.

What to consider for the Test Plan:

1. Test Plan Identifier
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be Tested
7. Features not to be Tested
8. Approach
9. Item Pass/Fail Criteria
10. Suspension Criteria and Resumption Requirements
11. Test Deliverables
12. Remaining Test Tasks
13. Environmental Needs
14. Staffing and Training Needs
15. Responsibilities
16. Schedule
17. Planning Risks and Contingencies
18. Approvals
19. Glossary

3. Standards for Software Test Plans

Several standards suggest what a test plan should contain, including the IEEE.
The standards are:

IEEE standards:
829-1983 IEEE Standard for Software Test Documentation
1008-1987 IEEE Standard for Software Unit Testing
1012-1986 IEEE Standard for Software Verification & Validation Plans
1059-1993 IEEE Guide for Software Verification & Validation Plans

The IEEE website is here: http://www.ieee.org

4. Why I published the test plan

Well, when I first went looking for sample test plans, I could not find anything useful.
Eventually I found several sites, which I included on my links page. However, I was not
satisfied with many of the plans that I found, so I posted it on my website. And, I have to
say that I have been astounded by the level of interest shown, and amazed at the number
of emails I have received about it.

5. Copyright, Ownership & what you can do with the plan

Well, I published with the aim that it be used, so if you are going to use it to create a test
plan for internal use, please feel free to copy from it.
However, if the test plan is to be published externally in any way [web, magazine,
training material etc.], then you must include a reference to me, Bazman, and a link to my
website.

Software Testing Techniques


Testing Techniques

• Black Box Testing
• White Box Testing
• Regression Testing

These principles & techniques can be applied to any type of testing.

Black Box Testing

Testing of a function without knowledge of the internal structure of the program.

White Box Testing

Testing of a function with knowledge of the internal structure of the program.

Regression Testing

To ensure that code changes have not had an adverse effect on other modules or on
existing functions.

Functional Testing

Study the SRS
Identify Unit Functions
For each unit function:
• Take each input
• Identify Equivalence classes
• Form Test cases
• Form Test cases for boundary values
• Form Test cases for Error Guessing
Form a Unit function v/s Test cases Cross-Reference Matrix
Find the coverage
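
The steps above can be sketched in code. The `is_valid_age` function and its 18-60 range are hypothetical stand-ins for a unit function taken from an SRS; the point is how equivalence classes, boundary values, and error guessing each yield their own test cases.

```python
# Hypothetical unit function: accept ages 18..60 inclusive.
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence classes: below range, in range, above range.
equivalence_cases = [(10, False), (30, True), (70, False)]
# Boundary values: on and just outside each edge of the range.
boundary_cases = [(17, False), (18, True), (60, True), (61, False)]
# Error guessing: values a user might plausibly enter by mistake.
error_guess_cases = [(0, False), (-1, False)]

# Execute every formed test case against the unit function.
for value, expected in equivalence_cases + boundary_cases + error_guess_cases:
    assert is_valid_age(value) == expected, f"failed for {value}"
```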

Unit Testing:

The most ‘micro’ scale of testing to test particular functions or code modules. Typically
done by the programmer and not by testers
• Unit - smallest testable piece of software
• A unit can be compiled/ assembled/ linked/ loaded; and put under a test harness
• Unit testing is done to try to show that the unit does not satisfy the functional
specification and/or that its implemented structure does not match the intended design
structure
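
A minimal test-harness sketch using Python's standard `unittest` module; the `discount` function is a hypothetical unit invented for illustration, not taken from the text.

```python
import unittest

# A hypothetical unit: the smallest testable piece of software.
def discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price * (100 - percent) / 100

# A minimal harness exercising the unit against its (assumed) specification.
class DiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(discount(200, 25), 150)

    def test_zero_discount(self):
        self.assertEqual(discount(99, 0), 99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100, 150)

if __name__ == "__main__":
    unittest.main(argv=["unit"], exit=False)
```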

Integration Testing:

Integration is a systematic approach to build the complete software structure specified in
the design from unit-tested modules. Integration testing is performed in two ways, called
Pre-test and Pro-test.

1. Pre-test: the testing performed in the module development area is called Pre-test. The
Pre-test is required only if the development is done in a module development area.
2. Pro-test: the integration testing performed in the baseline is called Pro-test. The
development of a release will be scheduled such that it can be broken down into smaller
internal releases for the customer.

Alpha testing:

Testing of an application when development is nearing completion; minor design changes
may still be made as a result of such testing. Typically done by end-users or others, not by
programmers or testers.

Beta testing:

Testing when development and testing are essentially completed and final bugs and
problems need to be found before final release. Typically done by end-users or others, not
by programmers or testers.

System Testing:

• A system is the complete, integrated application
• System testing is aimed at revealing bugs that cannot be attributed to a single
component as such, but rather to inconsistencies between components or to the
interactions between components
• Concern: issues and behaviors that can only be exposed by testing the entire integrated
system (e.g., performance, security, recovery).

Volume Testing:

The purpose of Volume Testing is to find weaknesses in the system with respect to its
handling of large amounts of data during short time periods. For example, this kind of
testing ensures that the system will process data across physical and logical boundaries
such as across servers and across disk partitions on one server.

Stress testing:

This refers to testing system functionality while the system is under unusually heavy or
peak load; it’s similar to the validation testing mentioned previously but is carried out in a
“high-stress” environment. This requires that you make some predictions about expected
load levels of your Web site.

Usability testing:

Usability means that systems are easy and fast to learn, efficient to use, easy to
remember, cause no operating errors and offer a high degree of satisfaction for the user.
Usability means bringing the usage perspective, the side facing the user, into focus.

Security testing:

If your site requires firewalls, encryption, user authentication, financial transactions, or
access to databases with sensitive data, you may need to test these and also test your
site’s overall protection against unauthorized internal or external access.

Glass Box Testing

Test case selection that is based on an analysis of the internal structure of the component.
Testing by looking only at the code.
Sometimes also called “Code Based Testing”. Obviously you need to be a programmer
and you need to have the source code to do this.

Test Case

A set of inputs, execution preconditions, and expected outcomes developed for a
particular objective, such as to exercise a particular program path or to verify compliance
with a specific requirement.

Operational Testing

Testing conducted to evaluate a system or component in its operational environment.

Validation

Determination of the correctness of the products of software development with respect to
the user needs and requirements.

Verification

The process of evaluating a system or component to determine whether the products of
the given development phase satisfy the conditions imposed at the start of that phase.

Control Flow

An abstract representation of all possible sequences of events in a program’s execution.

CAST
Acronym for computer-aided software testing.

Metrics
Ways to measure: e.g., time, cost, customer satisfaction, quality.

Software Testing Interview Questions


Part 2
1. What is the difference between CMMI and CMM levels?
A: - CMM: this is applicable only to the software industry. KPAs - 18.
CMMI: this is applicable to software, outsourcing and all other industries. KPAs - 25.

2. What is the scalability testing?


1. Scalability is nothing but how many users the application should handle.

2. Scalability is nothing but the maximum number of users that the system can handle.

3. Scalability testing is a subtype of performance test where performance requirements
for response time, throughput, and/or utilization are tested as load on the SUT is
increased over time.

4. As a part of scalability testing we test the expandability of the application. In
scalability we test 1. Application scalability, 2. Performance scalability.

Application scalability: to test the possibility of implementing new features in the system
or updating the existing features of the system. We do this testing with the help of the
design document.

Performance scalability: to test how the software performs when it is subjected to varying
loads, to measure and evaluate the performance behavior and the ability of the software
to continue to function properly under different workloads.

–> To check the comfort level of an application in terms of user load, user experience,
and system tolerance levels
–> The point within an application at which, when subjected to increasing workload, it
begins to degrade in terms of end user experience and system tolerance
–> Metrics observed include response time, execution time, system resource utilization,
and network delays
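
A toy sketch of the performance-scalability idea above: measure response time as the workload grows. `handle_request` is a hypothetical stand-in for the system under test (real tools drive a live application and many concurrent virtual users).

```python
import time

# Hypothetical operation under test; a real scalability test would call
# the actual system under test over the network.
def handle_request(payload_size):
    time.sleep(0.001 * payload_size)  # simulated work proportional to load
    return "ok"

# Measure total response time as the workload increases step by step.
for load in (1, 5, 10):
    start = time.perf_counter()
    for _ in range(load):
        handle_request(1)
    elapsed = time.perf_counter() - start
    print(f"load={load:2d} requests -> {elapsed:.4f}s total")
```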


3. What is the status of a defect when you are performing regression testing?

A: - Fixed status.

4. What is the first test in the software testing process?

A) Monkey testing
B) Unit Testing
C) Static analysis
D) None of the above

A: - Unit testing is the first test in the testing process; though it is done by developers
after the completion of coding, it is the correct answer.

4. When will the testing start? a) Once the requirements are complete b) In the
requirements phase?

A: - Once the requirements are complete.

This is static testing. Here, you are supposed to read the documents (requirements), and it
is quite a common issue in the software industry that many requirements contradict other
requirements. These can also be reported as bugs. However, they will be reviewed before
reporting them as bugs (defects).

5. What is the role of QA and QC in the V model?

A: — The V model is a kind of SDLC. The QC (Quality Control) team tests the developed
product for quality. It deals only with the product, in both static and dynamic testing. The
QA (Quality Assurance) team works on the process and manages for better quality in the
process. It deals with (reviews) everything, right from collecting requirements to delivery.

6. What are the bugs we cannot find in black box testing?

A: — Bugs in the security settings of the pages, or any other internal mistakes made in
the coding, cannot be found in black box testing.

7. What are the Microsoft 6 rules?

A: — As far as I know these rules are used in user interface testing.
They are also called the Microsoft Windows standards. They are:

• GUI objects are aligned in windows
• All defined text is visible on a GUI object
• Labels on GUI objects are capitalized
• Each label includes an underlined letter (mnemonics)
• Each window includes an OK button, a Cancel button, and a System menu

8. What are the steps to test any software through automation tools?
A: — First, segregate the test cases that can be automated. Then, prepare test data as per
the requirements of those test cases. Write reusable functions which are used frequently
in those test cases. Now, prepare the test scripts using those reusable functions, applying
loops and conditions wherever necessary. However, the automation framework that is
followed in the organization should be strictly followed throughout the process.
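
The steps above can be sketched as a reusable function driven by prepared test data. The `login` helper, the credentials, and the `fake_app` dictionary standing in for the application are all hypothetical.

```python
# Sketch of data-driven automation: one reusable function, many test cases.
# The application object and credentials are hypothetical stand-ins.

def login(app, user, password):
    """Reusable function shared by many test scripts."""
    return app.get(user) == password

# Prepared test data: (user, password, expected outcome).
test_data = [
    ("alice",   "secret1", True),   # valid credentials
    ("alice",   "wrong",   False),  # invalid password
    ("mallory", "x",       False),  # unknown user
]

fake_app = {"alice": "secret1"}  # stands in for the system under test

# The loop applies the reusable function to every row of test data.
results = []
for user, password, expected in test_data:
    outcome = login(fake_app, user, password)
    results.append(outcome == expected)

print("passed" if all(results) else "failed")
```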

9. What is Defect Removal Efficiency?

A: - The DRE is the percentage of defects that have been removed during an activity,
computed with the equation below. The DRE can also be computed for each software
development activity and plotted on a bar graph to show the relative defect removal
efficiencies for each activity. Or, the DRE may be computed for a specific task or
technique (e.g. design inspection, code walkthrough, unit test, 6 month operation, etc.)

DRE = (Number of Defects Removed / Number of Defects at Start of Process) * 100

For example, DRE = A / (A + B) = 0.8, where

A = defects found by the testing team
B = defects found by the customer

If DRE >= 0.8 then it is a good product, otherwise not.
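
The DRE calculation above can be sketched in code; the values A = 80 and B = 20 are assumed here purely to reproduce the 0.8 example.

```python
def defect_removal_efficiency(found_by_testing, found_by_customer):
    """DRE = A / (A + B), where A is the number of defects removed by the
    testing team and B is the number that escaped to the customer."""
    return found_by_testing / (found_by_testing + found_by_customer)

# Example values assumed for illustration: 80 defects found in testing,
# 20 found by the customer after release.
dre = defect_removal_efficiency(80, 20)
print(f"DRE = {dre:.2f}")  # 0.80
print("good product" if dre >= 0.8 else "needs process improvement")
```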

10. Example of a bug that is not reproducible?

A: — A difference in environments.

11. During alpha testing why are customer people invited?
A: — Because alpha testing is related to acceptance testing; acceptance testing is done in
front of the client or customer to gain their acceptance.

12. Difference between ad hoc testing and error guessing?

A: — Ad hoc testing: performing testing without test data or any documents.

Error Guessing: this is a test data selection technique. The selection criterion is to pick
values that seem likely to cause errors.

13. Difference between test plan and test strategy?

A: — Test plan: after completion of SRS learning and business requirement gathering,
test management concentrates on test planning; this is done by the Test Lead or Project
Lead.

Test Strategy: based on the corresponding testing policy, the quality analyst finalizes the
Test Responsibility Matrix. This is done by QA. But both are documents.

14. What is the “V&V” Model? Why is it called “V” and why not “U”? Also tell at what
stage testing is best started.
A: — It is called V because it looks like a V. The detailed V model is shown below.

SRS ----------------------- Acceptance Testing
   \                       /
    HLD --------------- System Testing
      \                /
       LLD --------- Integration Testing
         \          /
          \    Unit Testing
           \      /
            Coding

(HLD = High Level Design, LLD = Low Level Design)

There is no such stage for which you wait to start testing.
Testing starts as soon as the SRS document is ready. You can raise defects that are
present in the document; it’s called verification.

15. What is the difference between Operating System 2000 and OS XP?

A: — Windows 2000 and Windows XP are essentially the same operating system (known
internally as Windows NT 5.0 and Windows NT 5.1, respectively.) Here are some
considerations if you’re trying to decide which version to use:

Windows 2000 benefits:

1) Windows 2000 has lower system requirements, and has a simpler interface (no
“Styles” to mess with).
2) Windows 2000 is slightly less expensive, and has no product activation.
3) Windows 2000 has been out for a while, and most of the common problems and
security holes have been uncovered and fixed.
4) Third-party software and hardware products that aren’t yet XP-compatible may be
compatible with Windows 2000; check the manufacturers of your devices and
applications for XP support before you upgrade.

Windows XP benefits:

1) Windows XP is somewhat faster than Windows 2000, assuming you have a fast
processor and tons of memory (although it will run fine with a 300 MHz Pentium II and
128MB of RAM).
2) The new Windows XP interface is more cheerful and colorful than earlier versions,
although the less-cartoonish “Classic” interface can still be used if desired.
3) Windows XP has more bells and whistles, such as the Windows Movie Maker, built-in
CD writer support, the Internet Connection Firewall, and Remote Desktop Connection.
4) Windows XP has better support for games and comes with more games than Windows
2000.
5) Manufacturers of existing hardware and software products are more likely to add
Windows XP compatibility now than Windows 2000 compatibility.

Software Testing Interview Questions


Part 3
16. What is the bug life cycle?
A: — New: when the tester reports a defect.
Open: when the developer accepts that it is a bug; if the developer rejects the defect, the
status is changed to “Rejected”.
Fixed: when the developer makes changes to the code to rectify the bug.
Closed/Reopen: when the tester tests it again. If the expected result shows up, the status is
changed to “Closed”; if the problem persists, it is “Reopen”.
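
The life cycle above can be sketched as a small state machine. The `ALLOWED` transition table is one plausible reading of the cycle as described, not an industry standard.

```python
# The bug life cycle sketched as a state machine: each status maps to the
# statuses it may legally move to.
ALLOWED = {
    "New":      {"Open", "Rejected"},
    "Open":     {"Fixed"},
    "Fixed":    {"Closed", "Reopen"},
    "Reopen":   {"Fixed"},
    "Rejected": set(),   # terminal
    "Closed":   set(),   # terminal
}

def transition(status, new_status):
    """Move a bug to a new status, rejecting illegal jumps."""
    if new_status not in ALLOWED[status]:
        raise ValueError(f"cannot move {status} -> {new_status}")
    return new_status

# A bug that is fixed, fails retest once, then finally passes:
status = "New"
for step in ("Open", "Fixed", "Reopen", "Fixed", "Closed"):
    status = transition(status, step)
print(status)  # Closed
```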

17. What is the deferred status in the defect life cycle?

A: — Deferred status means the developer accepted the bug, but it is scheduled to be
rectified in the next build.

18. What is a smoke test?

A: — Testing whether the application performs its basic functionality properly or not, so
that the test team can go ahead with the application.

19. Do you use any automation tool for smoke testing?
A: - Definitely can use.

20. What is Verification and validation?


A: — Verification is static. No code is executed. Say, analysis of requirements etc.
Validation is dynamic. Code is executed with scenarios present in test cases.

21. What is a test plan and what are its contents?

A: — A test plan is a document which contains the scope for testing the application: what
is to be tested, when it is to be tested, and who will test it.

22. Advantages of automation over manual testing?


A: — Time, resources and money.

23. What is ADhoc testing?


A: — AdHoc means doing something which is not planned.

24. What is meant by release notes?

A: — It’s a document released along with the product which explains about the product.
It also lists the bugs that are in deferred status.

25. Scalability testing comes under which type of testing?

A: — Scalability testing comes under performance testing. Load testing and scalability
testing are similar.

26. What is the difference between a bug and a defect?

A: — Bug: deviation from the expected result. Defect: a problem in an algorithm that
leads to failure.

A mistake in code is called an error.

A mismatch in the application found by test engineers, caused by an error in coding, is
called a defect.

If the defect is accepted by the development team to be solved, it is called a bug.

27. What is hot fix?


A: — A hot fix is a single, cumulative package that includes one or more files that are
used to address a problem in a software product. Typically, hot fixes are made to address
a specific customer situation and may not be distributed outside the customer
organization.

Bug found at the customer place which has high priority.

28. What is the difference between functional test cases and compatibility test cases?
A: — There are no test cases for compatibility testing; in compatibility testing we test an
application on different hardware and software. If this is wrong please let me know.

29. What is ACID testing?

A: — ACID testing is related to testing a transaction:
A - Atomicity
C - Consistency
I - Isolation
D - Durability

Mostly this will be done in database testing.
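
As one way to exercise the Atomicity property in database testing, the sketch below uses Python's standard `sqlite3` module: a transaction that fails mid-way must leave no partial data behind. The `accounts` table and the simulated crash are hypothetical.

```python
import sqlite3

# Set up a throwaway in-memory database with a hypothetical accounts table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('a', 100), ('b', 0)")
conn.commit()

try:
    with conn:  # the 'with' block commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 100 "
                     "WHERE name = 'a'")
        raise RuntimeError("simulated crash mid-transfer")
except RuntimeError:
    pass

# Atomicity holds if the debit was rolled back along with the failure.
balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'a'").fetchone()[0]
assert balance == 100, "partial transaction leaked - atomicity violated"
```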

30. What is the main use of preparing a traceability matrix?

A: — To cross-verify the prepared test cases and test scripts against the user
requirements, and to monitor the changes and enhancements that occur during the
development of the project.

A traceability matrix is prepared in order to cross check the test cases designed against
each requirement, hence giving an opportunity to verify that all the requirements are
covered in testing the application.
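
A traceability matrix can be sketched as a simple requirement-to-test-case mapping; the requirement and test-case identifiers below are invented for illustration.

```python
# A minimal traceability matrix: each requirement cross-referenced to the
# test cases that cover it. IDs here are hypothetical examples.
matrix = {
    "REQ-001 login":          ["TC-01", "TC-02"],
    "REQ-002 reset password": ["TC-03"],
    "REQ-003 audit log":      [],   # no test case yet!
}

# The matrix makes uncovered requirements immediately visible:
uncovered = [req for req, tests in matrix.items() if not tests]
print("uncovered requirements:", uncovered)
```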

Software Testing Interview Questions


Part 4
31. If we have no SRS or BRS but we have test cases, do you execute the test cases
blindly or do you follow any other process?
A: — A test case has detailed steps of what the application is supposed to do. So:
1) The functionality of the application is known.

2) In addition you can refer to the backend, I mean look into the database, to gain more
knowledge of the application.

32. How do you execute test cases?

A: — There are two ways:
1. A manual runner tool for manual execution and updating of test status.
2. Automated test case execution by specifying the host name and other automation-
pertaining details.

33. Difference between retesting and regression testing?

A: — Retesting:

Re-execution of test cases on the same application build with different input values is
retesting.

Regression Testing:

Re-execution of test cases on a modified build is called regression testing.

34. What is the difference between a bug log and defect tracking?
A: — A bug log is a document which maintains the information of the bug, whereas bug
tracking is the process.

35. Who will change the bug status to Deferred?

A: — A bug will be in Open status while the developer is working on it, and Fixed after
the developer completes the work; if it is not fixed properly the tester puts it in Reopen;
after the bug is fixed properly it is in the Closed state.

The Deferred status is set by the developer.

36. What is smoke testing and user interface testing?

A: — Smoke testing:
Smoke testing is non-exhaustive software testing, ascertaining that the most crucial
functions of a program work, but not bothering with finer details. The term comes to
software testing from a similarly basic type of hardware testing.

User interface testing:
I did a bit of R & D on this; some say it’s nothing but usability testing: testing to
determine the ease with which a user can learn to operate, provide input to, and interpret
the outputs of a system or component.

Smoke testing is nothing but checking whether the basic functionality of the build is
stable or not, i.e. if it possesses 70% of the functionality we say the build is stable.
User interface testing: we check all the fields, whether they exist or not as per the format;
we check spelling, graphics, font sizes - everything present in the window.

37. What are a bug, defect, issue, and error?

A: — Bug: a bug is identified by the tester.

Defect: whenever the project is received for the analysis phase, some requirements may
be missed or misunderstood; much of the time the defect itself comes with the project
(when it arrives).
Issue: a client-site error, most of the time.
Error: when anything wrong happens in the project on the development side it is called an
error; most of the time this is known by the developer.

Bug: a fault or defect in a system or machine.

Defect: an imperfection in a device or machine.

Issue: an issue is a major problem that will impede the progress of the project and cannot
be resolved by the project manager and project team without outside help.

Error: the deviation of a measurement, observation, or calculation from the truth.

38. What is the difference between functional testing and integration testing?
A: — Functional testing is testing the whole functionality of the system or the
application, checking whether it meets the functional specifications.

Integration testing means testing the functionality of an integrated module when two
individual modules are integrated; for this we use the top-down approach and the
bottom-up approach.

39. What types of testing do you perform in your organization while doing
system testing? State them clearly.

A: — Functional testing
User interface testing
Usability testing
Compatibility testing
Model based testing
Error exit testing
User help testing
Security testing
Capacity testing
Performance testing
Sanity testing
Regression testing
Reliability testing
Recovery testing
Installation testing
Maintenance testing
Accessibility testing, including compliance with:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web
Consortium (W3C)

40. What is the main use of preparing a traceability matrix? Explain its
real-time usage.

A: — A traceability matrix is created by associating requirements with the work products
that satisfy them. Tests are associated with the requirements on which they are based and
the product tested to meet the requirement.

A traceability matrix is a report from the requirements database or repository.
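The association described above can be sketched as a small Python structure;
the requirement and test-case IDs are hypothetical:

```python
# Sketch of a traceability matrix: requirements mapped to the test
# cases that cover them. All IDs are hypothetical.
requirements = ["REQ-1", "REQ-2", "REQ-3"]
tests = {
    "TC-01": ["REQ-1"],
    "TC-02": ["REQ-1", "REQ-2"],
}

def coverage(requirements, tests):
    """For each requirement, list the tests that trace back to it."""
    matrix = {req: [] for req in requirements}
    for test_id, covered in tests.items():
        for req in covered:
            matrix[req].append(test_id)
    return matrix

matrix = coverage(requirements, tests)
uncovered = [req for req, tcs in matrix.items() if not tcs]
print(uncovered)  # REQ-3 has no test: the coverage gap the matrix exposes
```

This is the real-time usage: the report immediately shows which requirements
have no associated test, and which tests would be affected if a requirement
changes.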

41. How can you do the following: 1) usability testing, 2) scalability testing?

A:–
Usability testing:
Testing the ease with which users can learn and use a product.

Scalability testing:
A web-testing activity that checks how well a site's capacity can grow, i.e.
whether performance holds up as load and resources are increased.

Portability testing:
Testing to determine whether the system/software meets the specified
portability requirements.

42. What do you mean by positive and negative testing, and what is the
difference between them? Can anyone explain with an example?

A: — Positive testing: testing the application functionality with valid inputs
and verifying that the output is correct.
Negative testing: testing the application functionality with invalid inputs and
verifying the output.

The difference lies in how the application behaves when we enter invalid
inputs: if it accepts an invalid input, the application functionality is wrong.

Positive testing aims to show that the software works with valid inputs; it is
also called "test to pass". Negative testing aims to show that the software
does not break with invalid inputs, also known as "test to fail". Boundary
value analysis (BVA) is the best example of negative testing.
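As a rough Python sketch of BVA used for positive and negative testing (the
quantity field specified to accept 1..100 is a hypothetical example):

```python
def accept_quantity(qty):
    """Hypothetical validator: a quantity field specified to accept 1..100."""
    return isinstance(qty, int) and 1 <= qty <= 100

# Positive tests ("test to pass"): valid inputs, including the boundaries
# themselves, must be accepted.
positive_cases = [1, 50, 100]

# Negative tests ("test to fail"): BVA picks values just outside the
# boundaries, where defects tend to cluster.
negative_cases = [0, 101, -1]

assert all(accept_quantity(q) for q in positive_cases)
assert not any(accept_quantity(q) for q in negative_cases)
print("positive and negative suites both behaved as specified")
```

If any negative case were accepted, that acceptance itself would be the defect,
exactly as the answer above describes.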

43. What is a change request, and how do you use it?

A: — A change request is an attribute or part of the defect life cycle.

When you as a tester find a defect and report it to your lead, the lead informs
the development team. If the development team says it is not a defect but an
extra implementation, or not part of the requirements, the customer has to pay
for it.

In that case the status in your defect report would be Change Request.

Alternatively: change requests are controlled by the change control board
(CCB). If the client requires any changes after we start the project, the
request has to come through the CCB, which has to approve it. The CCB has full
rights to accept or reject the change based on the project schedule and cost.

44. What is risk analysis? What type of risk analysis did you do in your
project?

A: — Risk analysis:
A systematic use of available information to determine how often specified and
unspecified events may occur and the magnitude of their likely consequences.

OR

A procedure to identify threats and vulnerabilities, analyze them to ascertain
the exposures, and highlight how the impact can be eliminated or reduced.

Types:

1. Quantitative risk analysis

2. Qualitative risk analysis

45. What is an API?

A:– Application Programming Interface.

LoadRunner interview questions

1. What is load testing?

Load testing is testing whether the application works fine under the load that
results from a large number of simultaneous users and transactions, and
determining whether it can handle peak usage periods.

2. What is Performance testing? - Timing for both read and update transactions should be
gathered to determine whether system functions are being performed in an acceptable
timeframe. This should be done standalone and then in a multi user environment to
determine the effect of multiple transactions on the timing of a single transaction.

3. Did you use LoadRunner? What version? - Yes. Version 7.2.

4. Explain the Load testing process?

Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test
scenarios we develop will accomplish load-testing objectives.

Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by
each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.

Step 3: Creating the scenario. A scenario describes the events that occur during a testing
session. It includes a list of machines, scripts, and Vusers that run during the scenario. We
create scenarios using LoadRunner Controller. We can create manual scenarios as well as
goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load
generator machines, and percentage of Vusers to be assigned to each script. For web tests,
we may create a goal-oriented scenario where we define the goal that our test has to
achieve. LoadRunner automatically builds a scenario for us.

Step 4: Running the scenario.


We emulate load on the server by instructing multiple Vusers to perform tasks
simultaneously. Before the testing, we set the scenario configuration and scheduling. We
can run the entire scenario, Vuser groups, or individual Vusers.

Step 5: Monitoring the scenario.


We monitor scenario execution using the LoadRunner online runtime, transaction, system
resource, Web resource, Web server resource, Web application server resource, database
server resource, network delay, streaming media resource, firewall server resource, ERP
server resource, and Java performance monitors.

Step 6: Analyzing test results. During scenario execution, LoadRunner records
the performance of the application under different loads. We use LoadRunner's
graphs and reports to analyze the application's performance.

5. When do you do load and performance Testing?

We perform load testing once we are done with interface (GUI) testing. Modern
system architectures are large and complex. Whereas single-user testing focuses
primarily on the functionality and user interface of a system component,
application testing focuses on the performance and reliability of an entire
system. For example, a typical application-testing scenario might depict 1000
users logging in simultaneously to a system. This gives rise to issues such as:
what is the response time of the system, does it crash, will it work with
different software applications and platforms, can it hold so many hundreds and
thousands of users, etc. This is when we do load and performance testing.

6. What are the components of LoadRunner?

The components of LoadRunner are The Virtual User Generator, Controller, and the
Agent process, LoadRunner Analysis and Monitoring, LoadRunner Books Online.

7. What Component of LoadRunner would you use to record a Script?

The Virtual User Generator (VuGen) component is used to record a script. It enables you
to develop Vuser scripts for a variety of application types and communication protocols.

8. What Component of LoadRunner would you use to play Back the script in multi user
mode?

The Controller component is used to playback the script in multi-user mode. This is done
during a scenario run where a vuser script is executed by a number of vusers in a group.

9. What is a rendezvous point?

You insert rendezvous points into Vuser scripts to emulate heavy user load on the server.
Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to
arrive at a certain point, in order that they may simultaneously perform a task. For
example, to emulate peak load on the bank server, you can insert a rendezvous point
instructing 100 Vusers to deposit cash into their accounts at the same time.
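The rendezvous idea can be illustrated in plain Python with
`threading.Barrier`, which similarly holds workers until all have arrived
(worker count and the "deposit" task here are illustrative, not LoadRunner
internals):

```python
# Rendezvous sketch: a Barrier makes every worker wait until all have
# arrived, then they proceed simultaneously -- analogous to Vusers
# depositing cash at the same moment to create peak load.
import threading

NUM_USERS = 5
rendezvous = threading.Barrier(NUM_USERS)
results = []
lock = threading.Lock()

def vuser(user_id):
    # per-user setup would happen here, at each user's own pace
    rendezvous.wait()            # block until all NUM_USERS arrive
    with lock:
        results.append(user_id)  # the "deposit", performed under peak load

threads = [threading.Thread(target=vuser, args=(i,)) for i in range(NUM_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # every user performed the task after the rendezvous
```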

10. What is a scenario?

A scenario defines the events that occur during each testing session. For example, a
scenario defines and controls the number of users to emulate, the actions to be performed,
and the machines on which the virtual users run their emulations.

11. Explain the recording mode for web Vuser script?

We use VuGen to develop a Vuser script by recording a user performing typical business
processes on a client application. VuGen creates the script by recording the activity
between the client and the server. For example, in web based applications, VuGen
monitors the client end of the database and traces all the requests sent to, and received
from, the database server. We use VuGen to: Monitor the communication between the
application and the server; Generate the required function calls; and Insert the generated
function calls into a Vuser script.

12. Why do you create parameters?

Parameters are like script variables. They are used to vary input to the server
and to emulate real users. Different sets of data are sent to the server each
time the script is run. This better simulates the usage model for more accurate
testing from the Controller: one script can emulate many different users on the
system.

13. What is correlation? Explain the difference between automatic correlation
and manual correlation.

Correlation is used to obtain data which are unique for each run of the script
and which are generated by nested queries. Correlation provides the value to
avoid errors arising out of duplicate values and also optimizes the code (to
avoid nested queries). Automatic correlation is where we set some rules for
correlation; it can be application-server specific, and values are replaced by
data created by these rules. In manual correlation, we scan for the value we
want to correlate and use "create correlation" to correlate it.
14. How do you find out where correlation is required? Give few examples from your
projects?

Two ways: First we can scan for correlations, and see the list of values which can be
correlated. From this we can pick a value to be correlated. Secondly, we can record two
scripts and compare them. We can look up the difference file to see for the values which
needed to be correlated. In my project, there was a unique id developed for each
customer, it was nothing but Insurance Number, it was generated automatically and it was
sequential and this value was unique. I had to correlate this value in order to
avoid errors while running my script. I did it using scan for correlation.

15. Where do you set automatic correlation options?

Automatic correlation from web point of view can be set in recording options and
correlation tab. Here we can enable correlation for the entire script and choose either
issue online messages or offline actions, where we can define rules for that correlation.
Automatic correlation for database can be done using show output window and scan for
correlation and picking the correlate query tab and choose which query value we want to
correlate. If we know the specific value to be correlated, we just do "create
correlation" for the value and specify how the value is to be created.

16. What is a function to capture dynamic values in the web Vuser script?

The web_reg_save_param function saves dynamic data information to a parameter.
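The same capture idea can be sketched outside LoadRunner with a Python regular
expression over a response body; the response, token name, and boundaries here
are hypothetical:

```python
import re

# The idea behind web_reg_save_param: capture a server-generated dynamic
# value between known left/right boundaries so later requests can reuse it.
response_body = '<input type="hidden" name="session_id" value="A7f93kQ2">'

def save_param(text, left_boundary, right_boundary):
    """Return the text found between the two boundaries, or None."""
    pattern = re.escape(left_boundary) + r"(.*?)" + re.escape(right_boundary)
    match = re.search(pattern, text)
    return match.group(1) if match else None

session_id = save_param(response_body, 'name="session_id" value="', '"')
print(session_id)  # the captured value would be sent with the next request
```

In LoadRunner the registration happens before the request is sent; this sketch
only shows the boundary-based extraction itself.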

17. When do you disable log in Virtual User Generator, When do you choose standard
and extended logs?

Once we debug our script and verify that it is functional, we can enable logging for errors
only. When we add a script to a scenario, logging is automatically disabled. Standard Log
Option: When you select Standard log, it creates a standard log of functions and messages
sent during script execution to use for debugging. Disable this option for large load
testing scenarios. When you copy a script to a scenario, logging is automatically disabled
Extended Log Option: Select extended log to create an extended log, including warnings
and other messages. Disable this option for large load testing scenarios. When you copy a
script to a scenario, logging is automatically disabled. We can specify which additional
information should be added to the extended log using the Extended log options.

18. How do you debug a LoadRunner script?

VuGen contains two options to help debug Vuser scripts-the Run Step by Step command
and breakpoints. The Debug settings in the Options dialog box allow us to determine the
extent of the trace to be performed during scenario execution. The debug information is
written to the Output window. We can manually set the message class within your script
using the lr_set_debug_message function. This is useful if we want to receive debug
information about a small section of the script only.

19. How do you write user defined functions in LR? Give me few functions you wrote in
your previous project?

Before we create user-defined functions we need to create an external library
(DLL) with the function. We add this library to the VuGen bin directory. Once
the library is added, we assign the user-defined function as a parameter. The
function should have the following format:
__declspec(dllexport) char* (char*, char*)
Examples of user-defined functions used in my earlier project are GetVersion,
GetCurrentTime, and GetPlatform.

20. What are the changes you can make in run-time settings?

The Run Time Settings that we make are: a) Pacing - It has iteration count. b) Log -
Under this we have Disable Logging Standard Log and c) Extended Think Time - In
think time we have two options like Ignore think time and Replay think time. d) General -
Under general tab we can set the vusers as process or as multithreading and whether each
step as a transaction.

21. Where do you set Iteration for Vuser testing?

We set Iterations in the Run Time Settings of the VuGen. The navigation for this is Run
time settings, Pacing tab, set number of iterations.

22. How do you perform functional testing under load?

Functionality under load can be tested by running several Vusers concurrently. By


increasing the amount of Vusers, we can determine how much load the server can sustain.

23. What is Ramp up? How do you set this?

This option is used to gradually increase the amount of Vusers/load on the server. An
initial value is set and a value to wait between intervals can be specified. To set Ramp
Up, go to ‘Scenario Scheduling Options’
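A ramp-up schedule of this kind can be sketched in Python (the counts and
interval below are hypothetical, not LoadRunner defaults):

```python
# Ramp-up sketch: start with a few Vusers and add a fixed batch at a
# fixed interval until the target load is reached.
def ramp_up_schedule(initial, batch, interval_s, target):
    """Return (time_offset_seconds, running_vusers) steps up to target."""
    schedule = [(0, initial)]
    t, running = 0, initial
    while running < target:
        t += interval_s
        running = min(running + batch, target)
        schedule.append((t, running))
    return schedule

# e.g. start 10 Vusers, add 10 more every 30 s, up to 50
print(ramp_up_schedule(10, 10, 30, 50))
```

Gradual ramp-up like this is what lets the monitors show at which load level
response times start to degrade.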

24. What is the advantage of running the Vuser as thread?

VuGen provides the facility to use multithreading. This enables more Vusers to be run per
generator. If the Vuser is run as a process, the same driver program is loaded into memory
for each Vuser, thus taking up a large amount of memory. This limits the number of
Vusers that can be run on a single generator. If the Vuser is run as a thread, only one
instance of the driver program is loaded into memory for the given number of Vusers
(say 100). Each thread shares the memory of the parent driver program, thus enabling
more Vusers to be run per generator.

25. If you want to stop the execution of your script on error, how do you do that?

The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop
executing the Actions section, execute the vuser_end section and end the execution. This
function is useful when you need to manually abort a script execution as a result of a
specific error condition. When you end a script using this function, the Vuser
is assigned the status "Stopped". For this to take effect, we have to first
uncheck the "Continue on error" option in Run-Time Settings.

26. What is the relation between Response Time and Throughput?

The Throughput graph shows the amount of data in bytes that the Vusers received from
the server in a second. When we compare this with the transaction response time, we will
notice that as throughput decreased, the response time also decreased. Similarly, the peak
throughput and highest response time would occur approximately at the same time.

27. Explain the Configuration of your systems?

The configuration of our systems refers to that of the client machines on which we run
the Vusers. The configuration of any client machine includes its hardware settings,
memory, operating system, software applications, development tools, etc. This system
component configuration should match with the overall system configuration that would
include the network infrastructure, the web server, the database server, and any other
components that go with this larger system so as to achieve the load testing objectives.

28. How do you identify the performance bottlenecks?

Performance Bottlenecks can be detected by using monitors. These monitors might be


application server monitors, web server monitors, database server monitors and network
monitors. They help in finding out the troubled area in our scenario which causes
increased response time. The measurements made are usually performance response time,
throughput, hits/sec, network delay graphs, etc.

29. If web server, database and Network are all fine where could be the problem?

The problem could be in the system itself or in the application server or in the code
written for the application.

30. How did you find web server related issues?

Using Web resource monitors we can find the performance of web servers. Using
these monitors we can analyze the throughput on the web server, the number of
hits per second that occurred during the scenario, the number of HTTP responses
per second, and the number of downloaded pages per second.

31. How did you find database related issues?

By running the Database monitor with the help of the Data Resource Graph, we
can find database-related issues. E.g., you can specify the resource you want
to measure before running the Controller, and then you can see database-related
issues.

32. Explain all the web recording options?

33. What is the difference between Overlay graph and Correlate graph?

Overlay Graph: It overlays the content of two graphs that share a common
X-axis. The left Y-axis on the merged graph shows the current graph's values,
and the right Y-axis shows the values of the Y-axis of the graph that was
merged. Correlate Graph: It plots the Y-axes of two graphs against each other.
The active graph's Y-axis becomes the X-axis of the merged graph, and the
Y-axis of the graph that was merged becomes the merged graph's Y-axis.

34. How did you plan the Load? What are the Criteria?

Load test is planned to decide the number of users, what kind of machines we are going
to use and from where they are run. It is based on 2 important documents, Task
Distribution Diagram and Transaction profile. Task Distribution Diagram gives us the
information on number of users for a particular transaction and the time of the load. The
peak usage and off-usage are decided from this Diagram. Transaction profile gives us the
information about the transactions name and their priority levels with regard to the
scenario we are deciding.

35. What does vuser_init action contain?

Vuser_init action contains procedures to login to a server.

36. What does vuser_end action contain?

Vuser_end section contains log off procedures.

37. What is think time? How do you change the threshold?

Think time is the time that a real user waits between actions. Example: When a user
receives data from a server, the user may wait several seconds to review the data before
responding. This delay is known as the think time. Changing the Threshold: Threshold
level is the level below which the recorded think time will be ignored. The default value
is five (5) seconds. We can change the think time threshold in the Recording options of
the Vugen.

38. What is the difference between standard log and extended log?

The standard log sends a subset of functions and messages sent during script execution to
a log. The subset depends on the Vuser type Extended log sends a detailed script
execution messages to the output log. This is mainly used during debugging when we
want information about: Parameter substitution. Data returned by the server. Advanced
trace.

39. Explain the following functions: - lr_debug_message - The lr_debug_message
function sends a debug message to the output log when the specified message class is set.

lr_output_message - The lr_output_message function sends notifications to the Controller


Output window and the Vuser log file. lr_error_message - The lr_error_message function
sends an error message to the LoadRunner Output window. lrd_stmt - The lrd_stmt
function associates a character string (usually a SQL statement) with a cursor. This
function sets a SQL statement to be processed. lrd_fetch - The lrd_fetch function fetches
the next row from the result set.

40. Throughput - If the throughput scales upward as time progresses and the number of
Vusers increase, this indicates that the bandwidth is sufficient. If the graph were to
remain relatively flat as the number of Vusers increased, it would be reasonable to
conclude that the bandwidth is constraining the volume of data delivered.
41. Types of Goals in Goal-Oriented Scenario - Load Runner provides you with five
different types of goals in a goal oriented scenario:
* The number of concurrent Vusers
* The number of hits per second
* The number of transactions per second
* The number of pages per minute
* The transaction response time that you want your scenario to achieve

42. Analysis Scenario (Bottlenecks): In the Running Vusers graph correlated
with the response
time graph you can see that as the number of Vusers increases, the average response time
of the check itinerary transaction very gradually increases. In other words, the average
response time steadily increases as the load increases. At 56 Vusers, there is a sudden,
sharp increase in the average response time. We say that the test broke the server. That is
the mean time before failure (MTBF). The response time clearly began to degrade when
there were more than 56 Vusers running simultaneously.

WinRunner Functions

Database Functions

db_check

This function captures and compares data from a database.


Note that the checklist file (arg1) can be created only during recording.
arg1 - checklist file.

db_connect

This function creates a new connection session with a database.


arg1 - the session name (string)
arg2 - a connection string
for example "DSN=SQLServer_Source;UID=SA;PWD=abc123"

db_disconnect

This function disconnects from the database and deletes the session.
arg1 - the session name (string)

db_dj_convert

This function executes a Data Junction conversion export file (djs).


arg1 - the export file name (*.djs)
arg2 - an optional parameter to override the output file name
arg3 - a boolean optional parameter whether to
include the headers (the default is TRUE)
arg4 - an optional parameter to
limit the records number (-1 is no limit and is the default)

db_execute_query

This function executes an SQL statement.


Note that a db_connect for (arg1) should be called before this function
arg1 - the session name (string)
arg2 - an SQL statement
arg3 - an out parameter to return the records number.

db_get_field_value

This function returns the value of a single item of an executed query.


Note that a db_execute_query for (arg1) should be called before this function
arg1 - the session name (string)
arg2 - the row index number (zero based)
arg3 - the column index number (zero based) or the column name.

db_get_headers

This function returns the fields headers and fields number of an executed query.
Note that a db_execute_query for (arg1) should be called before this function
arg1 - the session name (string)
arg2 - an out parameter to return the fields number
arg3 - an out parameter to return the concatenation
of the fields headers delimited by TAB.

db_get_last_error

This function returns the last error message of the last ODBC operation.
arg1 - the session name (string)
arg2 - an out parameter to return the last error.

db_get_row

This function returns a whole row of an executed query.


Note that a db_execute_query for (arg1) should be called before this function
arg1 - the session name (string)
arg2 - the row number (zero based)
arg3 - an out parameter to return the concatenation
of the fields values delimited by TAB.

db_record_check

This function checks that the specified record exists in the database.

Note that the checklist file (arg1) can be created
only using the Database Record Verification Wizard.
arg1 - checklist file.
arg2 - success criteria.
arg3 - number of records found.

db_write_records

This function writes the records of an executed query into a file.


Note that a db_execute_query for (arg1) should be called before this function
arg1 - the session name (string)
arg2 - the output file name
arg3 - a boolean optional parameter whether to
include the headers (the default is TRUE)
arg4 - an optional parameter to
limit the records number (-1 is no limit and is the default).

ddt_update_from_db

This function updates the table with data from a database.


arg1 - table name.
arg2 - query or conversion file (*.sql ,*.djs).
arg3 (out) - num of rows actually retrieved.
arg4 (optional) - max num of rows to retrieve (default - no limit).
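The connect/execute/fetch sequence these functions wrap can be sketched with
Python's sqlite3 as an analogy (the table and data are hypothetical; WinRunner
itself works through ODBC sessions, not sqlite3):

```python
import sqlite3

# The db_connect / db_execute_query / db_get_headers / db_get_field_value
# sequence, sketched against an in-memory database.
conn = sqlite3.connect(":memory:")             # analogous to db_connect
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

cursor = conn.execute(                          # analogous to db_execute_query
    "SELECT id, name FROM users ORDER BY id")
rows = cursor.fetchall()
record_count = len(rows)                        # the "out" records-number arg

headers = [col[0] for col in cursor.description]  # analogous to db_get_headers
field = rows[1][1]                # db_get_field_value, row 1 / column 1 (0-based)

conn.close()                                    # analogous to db_disconnect
print(record_count, headers, field)
```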

GUI-Functions

GUI_add

This function adds an object to a buffer


arg1 is the buffer in which the object will be entered
arg2 is the name of the window containing the object
arg3 is the name of the object
arg4 is the description of the object

GUI_buf_get_desc

This function returns the description of an object


arg1 is the buffer in which the object exists
arg2 is the name of the window containing the object
arg3 is the name of the object
arg4 is the returned description

GUI_buf_get_desc_attr

This function returns the value of an object property


arg1 is the buffer in which the object exists
arg2 is the name of the window
arg3 is the name of the object
arg4 is the property
arg5 is the returned value

GUI_buf_get_logical_name

This function returns the logical name of an object


arg1 is the buffer in which the object exists
arg2 is the description of the object
arg3 is the name of the window containing the object
arg4 is the returned name

GUI_buf_new

This function creates a new GUI buffer


arg1 is the buffer name

GUI_buf_set_desc_attr

This function sets the value of an object property


arg1 is the buffer in which the object exists
arg2 is the name of the window
arg3 is the name of the object
arg4 is the property
arg5 is the value

GUI_close

This function closes a GUI buffer


arg1 is the file name.

GUI_close_all

This function closes all the open GUI buffers.

GUI_delete

This function deletes an object from a buffer


arg1 is the buffer in which the object exists
arg2 is the name of the window containing the object
arg3 is the name of the object (if empty, the window will be deleted)

GUI_desc_compare

This function compares two physical descriptions (returns 0 if they are the same)


arg1 is the first description
arg2 is the second description

Interview Questions in QTP-I


1. This Quick Test feature allows you to select the appropriate add-ins to load with your test.

Add-in Manager

2. Name the six columns of the Keyword view.

Item, Operation, Value, Documentation, Assignment, Comment

3. List the steps to change the logical name of an object named “Three Panel” into
“Status Bar” in the Object Repository.

Select Tools>Object Repository. In the Action1 object repository list of objects, select an
object, right click and select Rename from the pop-up menu.

4. Name the two run modes available in Quick Test Professional.

Normal and Fast

5. When Quick Test Professional is connected to Quality Center, all automated assets
(e.g. tests, values) are stored in Quality Center.

True

6. What information do you need when you select the Record and Run Setting – Record
and run on these applications (opened when a session begins)?

Application details (name, location, any program arguments)

7. Name and discuss the three elements that make up a recorded step.

Item – the object recorded, Operation – the action performed on the object, Value – the
value selected, typed or set for the recorded object

8. There are two locations to store run results. What are these two locations and discuss
each.

New run results folder – permanently stores a copy of the run results in a separate
location under the automated test folder.

Temporary run results folder – Run results can be overwritten every time the test is
played back.

9. True or False: The object class Quick Test uses to identify an object is a property of the
object.

False

10. True or False: You can modify the list of pre-selected properties that Quick Test uses
to identify an object.

True

11. True or False: A synchronization step instructs Quick Test to wait for a
state of a property of an object before proceeding to the next recorded step.
Synchronization steps are activated only during recording.

False (synchronization steps take effect during the test run, not during
recording)

12. Manually verifying that an order number was generated by an application and
displayed on the GUI is automated in Quick Test using what feature?

a Checkpoint

13. True or False: Quick Test can automate verifications which are not visible
on the application under test interface.

True

14. What is a checkpoint timeout value?

A checkpoint timeout value specifies the time interval (in seconds) during which Quick
Test attempts to perform the checkpoint successfully. Quick Test continues to perform the
checkpoint until it passes or until the timeout occurs. If the checkpoint does not pass
before the timeout occurs, the checkpoint fails.
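The retry-until-timeout behavior can be sketched in Python (the polling
interval and the checked condition are illustrative, not QuickTest internals):

```python
import time

# Checkpoint-timeout sketch: keep retrying the check until it passes or
# the timeout interval elapses; only then is the checkpoint failed.
def checkpoint(check, timeout_s, poll_s=0.01):
    """Retry check() until it returns True or timeout_s elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True          # checkpoint passed within the interval
        time.sleep(poll_s)
    return check()               # one final attempt; False means it failed

ready_at = time.monotonic() + 0.05   # value becomes correct after ~50 ms
passed = checkpoint(lambda: time.monotonic() >= ready_at, timeout_s=1.0)
print(passed)
```

The point of the timeout is exactly this loop: a slow application gets several
chances to reach the expected state before the checkpoint is marked failed.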

15. True or False: You can modify the name of a checkpoint for better readability.

Ans: False

16. Define a regular expression for the following range of values:

a. Verify that the values begin with Afx3 followed by 3 random digits
(Afx3459, Afx3712, Afx3165):
Afx3\d{3}

b. Verify that a five-digit value is included in the string
"Status Code 78923 approved":
Status Code \d{5} approved
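Both answers can be checked quickly with Python's `re` module (the sample
strings follow the question):

```python
import re

# The two patterns from the answer, checked against sample strings.
pat_a = re.compile(r"Afx3\d{3}")                   # Afx3 then exactly 3 digits
pat_b = re.compile(r"Status Code \d{5} approved")  # a five-digit code inside

assert all(pat_a.fullmatch(s) for s in ["Afx3459", "Afx3712", "Afx3165"])
assert pat_a.fullmatch("Afx341") is None           # too few digits
assert pat_b.search("Status Code 78923 approved") is not None
print("patterns behave as specified")
```

QTP's regular-expression syntax for checkpoints uses the same `\d{n}` digit
quantifiers, so the patterns carry over directly.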

17. Write the letter of the type of parameter that best corresponds to the requirement:
a. An order number has to be retrieved from the window and saved into a file for each test
run.
b. A value between 12 and 22 is entered in no particular order for every test run.
c. Every iteration of a test should select a new city from a list item.

A. Environment Parameter
B. Input Parameter
C. Component Parameter
D. Output Parameter
E. Random Parameter

D, E, B

18. This is the data table that contains values retrieved from the application
under test. You can view the captured values after the test run, from the Test
Results. What do you call this data table?

Run-Time Data Table

19. Name and describe each of the four types of trigger events.

Ans:
Pop-up window: a pop-up window appears in an opened application during the test run.
Object state: a property of an object changes its state or value.
Test run error: a step in the test does not run successfully.
Application crash: an open application fails during the test run.

20. Explain initial and end conditions.

Ans: Initial and end conditions are the starting and ending points of a test; they allow the
test to iterate from the same location, with the same setup, every time (e.g. all fields are
blank, the test starts at the main menu page).

21. What record and run setting did you select so that the test iterates starting at the home
page?

Ans: Record and run test on any open Web browser.

22. What Quick Test features did you use to automate step 4 in the test case? What
property did you select?

Ans: Standard checkpoint on the list item Number of Tickets


Properties: items count, inner text, all items

23. Select Tools> Object Repository. In the Action1 object repository list of objects,
select an object, right click and select Rename from the pop-up menu.

Ans: Input Parameter

24. What planning considerations did you have to perform in order to meet the above
listed requirements?

Ans: Register at least three new users prior to creating the automated test, in order to
have seed data in the database.

25. What Quick Test feature did you use to meet the requirement:
“The test should iterate at least three times using different user names and passwords”

Ans: Random parameter, range 1 to 4

26. Discuss how you automated the requirement:
“Each name used during sign-in should be the first name used when booking the ticket at
the Book a Flight page.”

Ans: The username is already an input parameter. Parameterize the step ‘passFirst0’
under BookAFlight and use the parameter for username.

27. Challenge: What Quick Test feature did you use to meet the requirement:
“All passwords should be encrypted”

Ans: Challenge: From the Data table, select the cell and perform a right mouse click. A
pop up window appears. Select Data > Encrypt.

Interview Questions on QTP


1. What are the Features & Benefits of Quick Test Pro (QTP)?

1. Keyword-driven testing
2. Suitable for both client-server and web-based applications
3. VBScript as the scripting language
4. Better error-handling mechanism
5. Excellent data-driven testing features

2. Where can I get Quick Test Pro (QTP) software? (For information purposes only.)

Introduction to Quick Test Professional 8.0, Computer Based Training: the vendor's site
provides a step-by-step tutorial for Quick Test Professional 8.0 and an evaluation copy of
the software. The full CBT is 162 MB. You will have to create an account to be able to
download evaluation copies of the CBT and the software.

3. How to handle exceptions using the Recovery Scenario Manager in QTP?

You can instruct QTP to recover from unexpected events or errors that occur in your
testing environment during a test run. The Recovery Scenario Manager provides a wizard
that guides you through defining a recovery scenario. A recovery scenario has three parts:

1. Trigger events
2. Recovery operations
3. Post-recovery test-run options

4. What is the use of a Text output value in QTP?

Output values enable you to view the values that the application takes during run time.
When parameterized, the values change for each iteration. Thus, by creating output
values, we can capture the values that the application takes for each run and output them
to the data table.
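
As a sketch (the object hierarchy and names here are hypothetical), the same effect can be achieved in script by reading a run-time property and writing it to the Data Table:

```vbscript
' Hypothetical sketch: capture a value at run time and store it in the Data Table
Dim orderNo
orderNo = Browser("Flights").Page("Confirmation") _
            .WebElement("OrderNo").GetROProperty("innertext")
DataTable("OrderNumber", dtGlobalSheet) = orderNo  ' visible later in the Run-Time Data Table
```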

5. How to use the Object Spy in QTP 8.0?

There are two ways to invoke the Object Spy in QTP:

1) Through the File toolbar:

—In the File toolbar, click on the last toolbar button (an icon showing a person with a hat).

2) Through the Object Repository dialog:

—In the Object Repository dialog, click on the “Object Spy…” button.

In the Object Spy dialog, click on the button showing a hand symbol. The pointer now
changes into a hand, and we point it at the object to spy on the state of the object.

If the object is not visible, or its window is minimized, hold the Ctrl key, activate the
required window, and then release the Ctrl key.

6. What is the file extension of the code file & object repository file in QTP?

File extensions:
– Per-test object repository: filename.mtr
– Shared object repository: filename.tsr
The code file extension is script.mts

7. Explain the concept of the object repository & how QTP recognizes objects.

Object Repository: displays a tree of all objects in the current component, the current
action, or the entire test (depending on the object repository mode you selected).
We can view or modify the test object description of any test object in the repository, or
add new objects to the repository.

Quick Test learns the default property values and determines which test object class the
object fits. If that is not enough, it adds assistive properties, one by one, to the description
until it has compiled a unique description. If no assistive properties are available, it adds a
special ordinal identifier, such as the object's location on the page or in the source code.

8. What are the properties you would use for identifying a browser & page when using
descriptive programming?

“name” would be another property, apart from “title”, that we can use.
OR
We can also use the property “micClass”.
Ex: Browser("micClass:=browser").Page("micClass:=page")…

9. What are the different scripting languages you could use when working with QTP?

QTP test scripts are written in VBScript. (QTP can test applications built with many
technologies, such as HTML/web and Java, but the scripting language itself is VBScript.)

10. Give me an example where you have used a COM interface in your QTP project?

11. A few basic questions on commonly used Excel VBA functions.

Common tasks are:
Coloring a cell
Auto-fitting a cell
Setting navigation from a link in one cell to another
Saving the workbook

12. Explain the keyword CreateObject with an example.

CreateObject creates and returns a reference to an Automation (COM) object.

Syntax: CreateObject(servername.typename [, location])

Arguments:
servername: Required. The name of the application providing the object.
typename: Required. The type or class of the object to create.
location: Optional. The name of the network server where the object is to be created.
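
A minimal sketch using the standard Scripting.FileSystemObject class (here “Scripting” is the servername and “FileSystemObject” the typename; the output path is an assumption):

```vbscript
' Sketch: CreateObject("servername.typename") with the scripting runtime
Dim fso, f
Set fso = CreateObject("Scripting.FileSystemObject")
Set f = fso.CreateTextFile("C:\temp\orders.txt", True)  ' assumed path
f.WriteLine "order number captured during the test run"
f.Close
Set fso = Nothing
```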

13. Explain in brief about the QTP Automation Object Model.

Essentially all configuration and run functionality provided via the Quick Test interface is
in some way represented in the Quick Test automation object model via objects, methods,
and properties. Although a one-to-one comparison cannot always be made, most dialog
boxes in Quick Test have a corresponding automation object, most options in dialog
boxes can be set and/or retrieved using the corresponding object property, and most menu
commands and other operations have corresponding automation methods. You can use
the objects, methods, and properties exposed by the Quick Test automation object model,
along with standard programming elements such as loops and conditional statements to
design your program.

14. How to handle dynamic objects in QTP?

QTP has a unique feature called Smart Object Identification/recognition. QTP generally
identifies an object by matching its test object and run-time object properties. QTP may
fail to recognize dynamic objects whose properties change during run time. Hence it has
an option of enabling Smart Identification, wherein it can identify objects even if their
properties change during run time.
Check this out:

If Quick Test is unable to find any object that matches the recorded object description, or
if it finds more than one object that fits the description, then Quick Test ignores the
recorded description and uses the Smart Identification mechanism to try to identify the
object.

While the Smart Identification mechanism is more complex, it is more flexible, and thus,
if configured logically, a Smart Identification definition can probably help Quick Test
identify an object, if it is present, even when the recorded description fails.

The Smart Identification mechanism uses two types of properties:

Base filter properties: the most fundamental properties of a particular test object class;
those whose values cannot be changed without changing the essence of the original
object. For example, if a Web link's tag was changed from <A> to any other value, you
could no longer call it the same object.

Optional filter properties: other properties that can help identify objects of a particular
class, as they are unlikely to change on a regular basis, but which can be ignored if they
are no longer applicable.

15. What is a Run-Time Data Table? Where can I find and view this table?

In QTP there is a data table that is used at run time.

- In QTP, select View > Data Table.
- It is basically an Excel file, stored in the folder of the test created; its name is
Default.xls by default.

16. How do Parameterization and Data-Driving relate to each other in QTP?

To data-drive, we have to parameterize, i.e. we have to turn the constant value into a
parameter, so that in each iteration (cycle) it takes a value supplied in the run-time data
table. Only through parameterization can we drive a transaction (action) with different
sets of data. Running the script with the same set of data several times is not advisable,
and it is also of no use.
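
In script form, a parameterized step simply reads its value from the Data Table on each iteration; the object names below are hypothetical:

```vbscript
' Hypothetical sketch: a data-driven sign-in step
Browser("Flights").Page("SignIn").WebEdit("username") _
    .Set DataTable("UserName", dtGlobalSheet)
Browser("Flights").Page("SignIn").WebEdit("password") _
    .SetSecure DataTable("Password", dtGlobalSheet)  ' SetSecure expects an encrypted value
```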

17. What is the difference between Call to Action and Copy Action?

Call to Action: changes made in the called action are reflected in the original action (from
where the script is called). Whereas in Copy Action, changes made in the copied script do
not affect the original script (action).

18. Discuss QTP Environment.

The Quick Test Pro environment uses the graphical interface and Active Screen
technologies: a testing process for creating test scripts, relating manual test requirements
to automated verification features, and data-driving to run one test script against several
sets of data.

19. Explain the concept of how QTP identifies objects.

During recording, QTP looks at an object and stores it as a test object. For each test object,
QTP learns a set of default properties called mandatory properties, and checks the rest of
the objects to see whether these properties are enough to uniquely identify the object.
During a test run, QTP searches for the run-time objects that match the test objects it
learned while recording.

20. Differentiate the two Object Repository types in QTP.

The object repository is used to store all the objects in the application being tested. There
are two types of object repository: per-action and shared. With a shared repository, one
centralized repository serves all the tests, whereas with per-action, a separate repository
is created for each action.

21. What are the differences, and the best practical application of each?

Per-Action: for each action, one object repository is created.

Shared: one object repository is used by the entire application.

22. Explain the difference between a Shared Repository and a Per-Action Repository.

Shared Repository: the entire application uses one object repository, similar to the Global
GUI Map file in WinRunner.
Per-Action: for each action, one object repository is created, like the per-test GUI map
file in WinRunner.

23. Have you ever written a compiled module? If yes, tell me about some of the functions
that you wrote.

I wrote functions for capturing dynamic data during run time, e.g. functions for capturing
the desktop, the browser, and pages.

24. What projects have you used WinRunner on? Tell me about some of the challenges
that arose and how you handled them.

Problems: WinRunner fails to identify an object in the GUI. If there is a non-standard
window object, WinRunner cannot recognize it; we use GUI Spy to handle such situations.

25. Can you do more than just capture and playback?

I have dynamically captured objects during run time, with no recording, no playback, and
no use of a repository at all.

- It was done through Windows scripting, using the DOM (Document Object Model) of
the windows.

26. How long have you used the product?

27. How do you do the scripting? Are there any in-built functions in QTP as in QTP-S?
What's the difference between them? How do you handle script issues?

Yes, there is an in-built feature called “Step Generator” (Insert > Step > Step Generator,
or F7), which will generate the script steps as you enter the appropriate steps.

28. What is the difference between a checkpoint and an output value?

A checkpoint compares a captured value against an expected result, whereas an output
value is a value captured during the test run and written to a specified location.

Ex: a location in the Data Table [Global sheet / local sheet]

29. If we use batch testing, the result is shown for the last action only. In such cases, how
can I get the result for every action?

You can click on the icon in the tree view to view the result of every action.

30. How can exception handling be done using QTP?

It can be done using the Recovery Scenario Manager, which provides a wizard that
guides you through the process of defining a recovery scenario. FYI, the wizard can be
accessed in QTP via Tools > Recovery Scenario Manager.

31. How do you test a Siebel application using QTP?

32. How many types of Actions are there in QTP?

There are three kinds of actions:

Non-reusable action—an action that can be called only in the test with which it is stored,
and can be called only once.
Reusable action—an action that can be called multiple times by the test with which it is
stored (the local test) as well as by other tests.

External action—a reusable action stored with another test. External actions are read-only
in the calling test, but you can choose to use a local, editable copy of the Data Table
information for the external action.

33. How do you data drive an external spreadsheet?

34. I want to open a Notepad window without recording a test, and I do not want to use
the SystemUtil.Run command. How do I do this?

You can still make Notepad open without using recording or the SystemUtil script, just
by mentioning the path of Notepad (i.e., where notepad.exe is stored on the system) in
the “Windows Applications” tab of the “Record and Run Settings” window.
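
If neither recording nor SystemUtil.Run is acceptable, plain VBScript can also launch the application through the WScript.Shell COM object:

```vbscript
' Sketch: open Notepad without SystemUtil.Run
Dim sh
Set sh = CreateObject("WScript.Shell")
sh.Run "notepad.exe"   ' assumes notepad.exe is on the system PATH
Set sh = Nothing
```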
