
Manual Testing Interview Questions

What is Regression testing?


Regression testing is re-testing after fixes or modifications to the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

Regression testing is done in the following cases:


1. If the bugs reported in the previous build are fixed
2. If a new functionality is added
3. If the environment changes
Regression testing is done to ensure that the functionality which was working in the previous build was not disturbed by the modifications in the new build.
It is done to check that the code changes did not introduce any new bugs or disturb the existing
functionality.
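As a minimal sketch (the `discount` function and its cases are invented for illustration), a regression suite is simply the accumulated set of previously passing checks, rerun in full after every fix or change:

```python
# Hypothetical function under test: price discounting logic.
def discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be 0-100")
    return round(price * (100 - percent) / 100, 2)

# Regression suite: every previously passing check is rerun after each
# change, so a fix or a new feature cannot silently break old behaviour.
def run_regression_suite():
    cases = {
        "no_discount": lambda: discount(100.0, 0) == 100.0,
        "half_price":  lambda: discount(100.0, 50) == 50.0,
        "rounding":    lambda: discount(9.99, 10) == 8.99,
    }
    return {name: check() for name, check in cases.items()}
```

When a bug is fixed, a new case reproducing that bug is added to the suite, and the whole suite is rerun on every subsequent build.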

When do you start developing your automation tests?


First, the application has to be tested manually. Automation development starts once manual testing is complete and a baseline of stable functionality has been established.

What is a successful product?


A bug-free product that meets the expectations of the user makes the product successful.

What will you do during your first day on the job?


Get acquainted with my team and the application.

Who should test your code?


QA Tester

How do we do regression testing?


Various automated testing tools, such as WinRunner, Rational Robot, and Silk Test, can be used to
perform regression testing.

Why do we do regression testing?


When new functionality is added to an application, the application has to be tested to see
whether the added functionality has affected the existing functionality. Instead of
retesting all the existing functionality manually, the baseline scripts created for it can be rerun.

In a calculator built specifically for an accountant, what major functionality are you going to
test? Assume that all basic functions such as addition and subtraction are supported.

•Check the maximum number of digits it supports
•Check the memory functions
•Check the accuracy, allowing for truncation errors
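These checks can be sketched as code. The `Calculator` class below is hypothetical, including its `MAX_DIGITS` limit; the point is that an accountant's calculator must use exact decimal arithmetic rather than binary floats, and must guard its display capacity.

```python
from decimal import Decimal, getcontext

# Hypothetical accountant's calculator; MAX_DIGITS is an assumed display limit.
class Calculator:
    MAX_DIGITS = 12

    def __init__(self):
        getcontext().prec = 28          # ample precision for money
        self.memory = Decimal("0")

    def add(self, a, b):
        result = Decimal(str(a)) + Decimal(str(b))
        # Digit-capacity check: count digits, ignoring sign and decimal point.
        if len(str(result).replace(".", "").lstrip("-")) > self.MAX_DIGITS:
            raise OverflowError("exceeds display capacity")
        return result

    def memory_store(self, value):      # memory-function check target
        self.memory = Decimal(str(value))

calc = Calculator()
# Accuracy check: 0.1 + 0.2 must be exactly 0.3 (binary floats would not be).
assert calc.add("0.1", "0.2") == Decimal("0.3")
```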

Difference between Load Testing & Stress Testing?
Load Testing: the application is tested within its normal limits to identify the load that the system
can withstand. In load testing, the number of users varies.
Stress Testing: stress tests are designed to confront programs with abnormal situations. Stress
testing executes a system in a manner that demands resources in abnormal quantity, frequency, or
volume.
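The distinction can be illustrated with a toy capacity model (all numbers and names are invented): load testing varies users within normal limits, while stress testing pushes far beyond them.

```python
# Illustrative only: a fake server that can handle CAPACITY concurrent
# requests; requests beyond capacity are rejected.
CAPACITY = 50

def run_with_users(users):
    served = min(users, CAPACITY)
    return {"users": users, "served": served, "rejected": users - served}

# Load test: vary the number of users within the expected range.
load_results = [run_with_users(n) for n in (10, 25, 50)]

# Stress test: abnormal quantity/volume, well beyond capacity.
stress_result = run_with_users(500)
```

Under load the server serves everyone; under stress the rejection count reveals how (and how gracefully) it degrades.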

If you have a shortage of time, how would you prioritize your testing?
1) Use risk analysis to determine where testing should be focused. Since it's rarely possible to test
every possible aspect of an application, every possible combination of events, every dependency,
or everything that could go wrong, risk analysis is appropriate to most software development
projects. Considerations can include:

•Which functionality is most important to the project's intended purpose?


•Which functionality is most visible to the user?
•Which functionality has the largest safety impact?
•Which functionality has the largest financial impact on users?
•Which aspects of the application are most important to the customer?
•Which aspects of the application can be tested early in the development cycle?
•Which parts of the code are most complex, and thus most subject to errors?
•Which parts of the application were developed in rush or panic mode?
•Which aspects of similar/related previous projects caused problems?
•Which aspects of similar/related previous projects had large maintenance expenses?
•Which parts of the requirements and design are unclear or poorly thought out?
•What do the developers think are the highest-risk aspects of the application?
•What kinds of problems would cause the worst publicity?
•What kinds of problems would cause the most customer service complaints?
•What kinds of tests could easily cover multiple functionalities?
•Which tests will have the best high-risk-coverage to time-required ratio?

2) We focus on the major functionalities: the functionality most visible to the user, the
functionality most important to the project, the aspects most important to the customer, and the
highest-risk aspects of the application.

Who in the company is responsible for Quality?


Both the development and quality assurance departments are responsible for the final product quality.

2) The quality assurance team, covering both the development and testing sides.

Should we test every possible combination/scenario for a program?


Ideally, yes, we should test every possible scenario, but this may not always be possible. It depends
on many factors, such as deadlines, budget, and the complexity of the software. In such cases, we have
to prioritize and thoroughly test the critical areas of the application.

2) Yes, ideally we should test every possible scenario, but sometimes the same functionality occurs
again and again (for example, a login window), so there is no need to test it repeatedly. There are
some more factors:

Priority of the application.
Time or deadline.
Budget.

How will you describe testing activities?


Test planning, scripting, execution, defect reporting and tracking, and regression testing.

What is the purpose of the testing?


Testing provides information about whether or not a product meets its requirements.

When should testing be stopped?


This can be difficult to determine. Many modern software applications are so complex, and run in
such an interdependent environment, that complete testing can never be done. Common factors in
deciding when to stop are:

-Deadlines (release deadlines, testing deadlines, etc.)

-Test cases completed with certain percentage passed

-Test budget depleted

-Coverage of code/functionality/requirements reaches a specified point

-Bug rate falls below a certain level

-Beta or alpha testing period ends
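These stop criteria can be combined into a simple gate; the thresholds below are illustrative, since real projects set them in the test plan:

```python
# Encode the stop-testing criteria as a simple pass/fail gate.
def ready_to_stop(metrics):
    criteria = {
        "pass_rate": metrics["passed"] / metrics["executed"] >= 0.95,
        "coverage":  metrics["requirement_coverage"] >= 0.90,
        "bug_rate":  metrics["new_bugs_per_week"] <= 2,
        "budget":    metrics["budget_used"] <= 1.0,   # fraction of test budget
    }
    return all(criteria.values()), criteria

ok, detail = ready_to_stop({
    "executed": 200, "passed": 194,
    "requirement_coverage": 0.93,
    "new_bugs_per_week": 1,
    "budget_used": 0.98,
})
```

Returning the per-criterion detail alongside the overall verdict lets the team see exactly which exit condition is still unmet.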

Do you have a favorite QA book? Why?


Effective Methods for Software Testing - Perry, William E.

It covers the whole software lifecycle, starting with testing the project plan and estimates and
ending with testing the effectiveness of the testing process. The book is packed with checklists,
worksheets and N-step procedures for each stage of testing.

What are the roles of glass-box and black-box testing tools?


Glass-box testing, also called white-box testing, refers to testing with detailed knowledge of a
module's internals. These tools therefore concentrate on the algorithms and data structures used in
the module, and tend to test individual modules rather than the whole application. Black-box
testing tools test the interface, functionality, and performance of a system module or of the whole
system.
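The two viewpoints can be shown on one small hypothetical function: black-box cases come from the specification alone, while glass-box cases are chosen to exercise each branch of the code.

```python
# Hypothetical function under test.
def classify_triangle(a, b, c):
    if a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box: derived from the spec alone (inputs -> expected outputs).
black_box_cases = [((3, 3, 3), "equilateral"), ((3, 4, 5), "scalene"),
                   ((1, 1, 5), "invalid")]

# Glass-box (white-box): derived from the code's structure, one case per
# branch, including the early triangle-inequality guard.
glass_box_cases = [((1, 2, 3), "invalid"),    # a + b <= c branch
                   ((2, 2, 3), "isosceles"),  # a == b branch
                   ((2, 3, 2), "isosceles")]  # a == c branch

results = [classify_triangle(*args) == want
           for args, want in black_box_cases + glass_box_cases]
```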


What is the value of a testing group? How do you justify your work and budget?
All software products contain defects/bugs, despite the best efforts of their development teams. It
is important for an outside party (one who is not the developer) to test the product from a viewpoint
that is more objective and representative of the product's user.
The testing group tests the software from the point of view of the requirements, i.e., what is
required by the user. A tester's job is to examine a program and see whether it fails to do what it is
supposed to do, and also whether it does what it is not supposed to do.

At what stage of the SDLC does testing begin in your opinion?


The QA process starts from the second phase of the software development life cycle, i.e., Define
the System. Actual product testing is done in the Test the System phase (phase 5). During this phase
the test team verifies actual results against expected results.

Explain the software development lifecycle.


There are seven stages of the software development lifecycle

1.Initiate the project – The users identify their Business requirements.

2.Define the project – The software development team translates the business requirements into
system specifications and puts them together in a System Specification Document.

3.Design the system – The system architecture team designs the system and writes the Functional
Design Document. During the design phase, general solutions are hypothesized and data and
process structures are organized.

4.Build the system – The system specifications and design documents are given to the
development team, which codes the modules by following the requirements and design documents.

5.Test the system - The test team develops the test plan following the requirements. The software
is built and installed on the test platform after the developers have completed development and unit
testing. The testers test the software by following the test plan.

6.Deploy the system – After the user-acceptance testing and certification of the software, it is
installed on the production platform. Demos and training are given to the users.

7.Support the system - After the software is in production, the maintenance phase of the life cycle
begins. During this phase the development team works with the development documentation staff to
modify and enhance the application, and the test team works with the test documentation staff to
verify and validate the changes and enhancements to the application software.

FREQUENTLY ASKED QUESTIONS

1) What are your roles and responsibilities as a tester?

2) Explain Software development life cycle

3) What is master test plan? What it contains? Who is responsible for writing it?

4) What is test plan? Who is responsible for writing it? What it contains?

5) What different type of test cases you wrote in the test plan?

6) Why is the test plan a controlled document?

7) What information do you need to formulate a test plan?

8) What template did you use to write the test plan?

9) What is an MR?

10) Why do you write an MR?

11) What information does it contain?

12) Give me few examples of the MRs you wrote.

13) What is White Box/Unit testing?

14) What is integration testing?

15) What is black box testing?

16) What knowledge do you require to do white box, integration, and black box testing?

17) How many testers were in the test team?

18) What was the test team hierarchy?

19) Which MR tool did you use to write MRs?

20) What is regression testing?

21) Why do we do regression testing?

22) How do we do regression testing?

23) What are the different automation tools you know?

24) What is difference between regression automation tool and performance automation tool?

25) What is client server architecture?

26) What is three tier and multi-tier architecture?

27) What is Internet?

28) What is intranet?

29) What is extranet?

30) How is an intranet different from client-server?

31) What is different about web testing compared to client-server testing?

32) What is a byte code file?

33) What is an Applet?

34) How is an applet different from an application?

35) What is Java Virtual Machine?

36) What is ISO-9000?

37) What is QMO?

38) What are the different phases of software development cycle?

39) How do you help developers track faults in the software?

40) What are positive scenarios?

41) What are negative scenarios?

42) What are individual test cases?

43) What are workflow test cases?

44) If we have executed individual test cases, why do we do workflow scenarios?

45) What is the object-oriented model?

46) What is the procedural model?

47) What is an object?

48) What is a class?

49) What is encapsulation? Give one example

50) What is inheritance? Give example.

51) What is Polymorphism? Give example.

52) What are the different types of MRs?

53) What are test metrics?

54) What is the use of metrics?

55) How do we decide which automation tool to use for regression testing?

56) If you have a shortage of time, how would you prioritize your testing?

57) What is the impact of the environment on the actual results of performance testing?

58) What are stress testing, performance testing, security testing, recovery testing, and volume
testing?

59) What criteria will you follow to assign severity and a due date to an MR?

60) What is user acceptance testing?

61) What is manual testing and what is automated testing?

62) What are a build, a version, and a release?

63) What are the entrance and exit criteria in the system test?

64) What are the roles of a Test Team Leader?

65) What are the roles of a Sr. Test Engineer?

66) What are the roles of a QA Analyst/QA Tester?

67) How do you decide what functionalities of the application are to be tested?

68) If there are no requirements, how will you write your test plan?

69) What is smoke testing?

70) What is soak testing?

71) What is pre-condition data?

72) What are the different documents in QA?

73) How do you rate yourself in software testing?

74) With all the skills, do you prefer to be a developer or a tester? And Why?

75) What are the best web sites that you frequently visit to upgrade your QA skills?

76) Do the words “Prevention” and “Detection” sound familiar? Explain.

77) Is defect resolution a technical skill or an interpersonal skill from a QA viewpoint?

78) Can you automate all the test scripts? Explain.

79) What is end-to-end business logic testing?

80) Explain to me about a most critical defect you found in your last project.

What is integration testing, and how will you execute it?


A. Integrated System Testing (IST) is a systematic technique for validating the construction of the
overall software structure while at the same time conducting tests to uncover errors associated
with interfacing. The objective is to take unit-tested modules and test the overall software structure
that has been dictated by the design. IST can be done either as top-down integration (using stubs) or
bottom-up integration (using drivers).
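The stub/driver distinction can be sketched in a few lines; the checkout and tax functions below are invented for illustration, not taken from any real system.

```python
# Top-down integration: test a high-level module early by replacing a
# not-yet-built dependency with a stub that returns a canned answer.
def tax_stub(amount):
    return 0.0                      # stub: no real tax logic yet

def checkout_total(items, tax_fn):  # high-level module under test
    subtotal = sum(items)
    return subtotal + tax_fn(subtotal)

top_down_result = checkout_total([10.0, 5.0], tax_fn=tax_stub)

# Bottom-up integration: exercise a low-level unit through a throwaway
# driver before the modules above it exist.
def real_tax(amount):               # the low-level unit, once built
    return round(amount * 0.08, 2)

def tax_driver():                   # driver that feeds inputs to real_tax
    return [real_tax(a) for a in (0, 100.0, 12.5)]

bottom_up_result = tax_driver()
```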

Suppose there are 1000 bugs and only 10 days to go before the product release.
The developers say they cannot all be fixed within this period. What will you do?
A. In this case, the most critical bugs should be fixed first, such as Severity 1 and 2 bugs, and the
rest of the bugs can be fixed in the next release. Ultimately, it depends on the business stakeholders.

In stored procedure (SP) testing, aren't you really doing unit testing?


A. From the developer's point of view, yes, it is a kind of unit testing. But a tester tests the stored
procedure in more detail than a developer does.

What is regression testing, and how does it start and end? Assume that in one module you
found a bug and sent it to the developers to fix. After the bug is fixed, how will you do the
regression testing, and how will you end it?

A. Regression testing is re-testing unchanged segments of the application system. It normally
involves re-running tests that have been previously executed, to ensure that the same results can be
achieved currently as were achieved when the segment was last tested. For example, suppose the
tester finds a bug in a module and sends it to the developers to fix. After the bug is fixed, the
module comes back to the tester, who re-tests it to confirm that the bugs are really fixed. The tester
then has to check whether fixing those bugs inadvertently introduced new bugs in other modules,
so the affected modules have to be regression tested as well.

Understandability
The more information we have, the smarter we will test.

•The design is well understood


•Dependencies between internal, external, and shared components are well understood.
•Changes to the design are communicated.
•Technical documentation is instantly accessible
•Technical documentation is well organized
•Technical documentation is specific and detailed
•Technical documentation is accurate

Stability
The fewer the changes, the fewer the disruptions to testing

•Changes to the software are infrequent


•Changes to the software are controlled
•Changes to the software do not invalidate existing tests
•The software recovers well from failures

Simplicity
The less there is to test, the more quickly it can be tested

•Functional simplicity
•Structural simplicity
•Code simplicity

Decomposability
By controlling the scope of testing, problems can be isolated quickly, and smarter testing can
be performed.

•The software system is built from independent modules


•Software modules can be tested independently

Controllability
The better the software is controlled, the more the testing can be automated and optimized.

•All possible outputs can be generated through some combination of input
•All code is executable through some combination of input
•Software and hardware states can be controlled directly by testing
•Input and output formats are consistent and structured
•Tests can be conveniently specified, automated, and reproduced.

Observability
What is seen is what is tested

•Distinct output is generated for each input


•System states and variables are visible or queriable during execution
•Past system states and variables are visible or queriable ( e.g., transaction logs)
•All factors affecting the output are visible
•Incorrect output is easily identified
•Incorrect input is easily identified
•Internal errors are automatically detected through self-testing mechanism
•Internal errors are automatically reported
•Source code is accessible

Software Testing Requirements

Software testing is not an activity to take up only when the product is ready. Effective testing begins
with a proper plan at the user requirements stage itself. Software testability is the ease with
which a computer program can be tested. Metrics can be used to measure the testability of a product.
The requirements for effective testing are given in the following sub-sections.

Testing Principles
The basic principles for effective software testing are as follows:

•A good test case is one that has a high probability of finding an as-yet undiscovered error.
•A successful test is one that uncovers an as-yet-undiscovered error.
•All tests should be traceable to the customer requirements
•Tests should be planned long before testing begins
•Testing should begin “in the small” and progress toward testing “in the large”
•Exhaustive testing is not possible

Testing Objectives
Testing is a process of executing a program with the intent of finding an error.
Software testing is a critical element of software quality assurance and represents the ultimate
review of system specification, design, and coding. Testing is the last chance to uncover errors /
defects in the software and facilitates delivery of a quality system.

Who will attend the User Acceptance Tests?


The MIS Development Unit is working with relevant Practitioner Groups and managers to identify
the people who can best contribute to system testing. Most of those involved in testing will also
have been involved in earlier discussions and decision making about the system set-up. All users
will receive basic training to enable them to contribute effectively to the test.

What are the objectives of a User Acceptance Test?


Objectives of the User Acceptance Test are for a group of key users to:
•Validate system set-up for transactions and user access
•Confirm use of the system in performing business processes
•Verify performance on business-critical functions
•Confirm integrity of converted and additional data, for example values that appear in a look-up table
•Assess and sign off go-live readiness

What does the User Acceptance Test cover?


The scope of each User Acceptance Test will vary depending on which business process we are
testing. In general, however, all tests will cover the following broad areas:
•A number of defined test cases using quality data to validate end-to-end business processes
•A comparison of actual test results against expected results
•A meeting/discussion forum to evaluate the process and facilitate issue resolution

What is a User Acceptance Test?


A User Acceptance Test is:
•A chance to completely test business processes and software
•A scaled-down or condensed version of the system
•The final UAT for each module will be the last chance to perform the above in a test situation

What if the application has functionality that wasn't in the requirements?


It may take serious effort to determine if an application has significant unexpected or hidden
functionality, and it would indicate deeper problems in the software development process. If the
functionality isn't necessary to the purpose of the application, it should be removed, as it may have
unknown impacts or dependencies that were not taken into account by the designer or the
customer. If not removed, design information will be needed to determine added testing needs or
regression testing needs. Management should be made aware of any significant added risks as a
result of the unexpected functionality. If the functionality only affects areas such as minor
improvements in the user interface, for example, it may not be a significant risk.

What can be done if requirements are changing continuously?


A common problem and a major headache.

•Work with the project's stakeholders early on to understand how requirements might change so
that alternate test plans and strategies can be worked out in advance, if possible.

•It's helpful if the application's initial design allows for some adaptability so that later changes do
not require redoing the application from scratch.

•If the code is well-commented and well-documented this makes changes easier for the developers.

•Use rapid prototyping whenever possible to help customers feel sure of their requirements and
minimize changes.

•The project's initial schedule should allow for some extra time commensurate with the possibility
of changes.

•Try to move new requirements to a 'Phase 2' version of an application, while using the original
requirements for the 'Phase 1' version.

•Negotiate to allow only easily-implemented new requirements into the project, while moving
more difficult new requirements into future versions of the application.

•Be sure that customers and management understand the scheduling impacts, inherent risks, and
costs of significant requirements changes. Then let management or the customers (not the
developers or testers) decide if the changes are warranted - after all, that's their job.

•Balance the effort put into setting up automated testing with the expected effort required to re-do
them to deal with changes.

•Try to design some flexibility into automated test scripts.

•Focus initial automated testing on application aspects that are most likely to remain unchanged.

•Devote appropriate effort to risk analysis of changes to minimize regression testing needs.

•Design some flexibility into test cases (this is not easily done; the best bet might be to minimize
the detail in the test cases, or set up only higher-level generic-type test plans)

•Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding
of the added risk that this entails).

What if there isn't enough time for thorough testing?


Use risk analysis to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every possible combination
of events, every dependency, or everything that could go wrong, risk analysis is appropriate to
most software development projects. This requires judgment skills, common sense, and
experience. (If warranted, formal methods are also available.)

Considerations can include:

•Which functionality is most important to the project's intended purpose?

•Which functionality is most visible to the user?

•Which functionality has the largest safety impact?

•Which functionality has the largest financial impact on users?

•Which aspects of the application are most important to the customer?

•Which aspects of the application can be tested early in the development cycle?

•Which parts of the code are most complex, and thus most subject to errors?

•Which parts of the application were developed in rush or panic mode?

•Which aspects of similar/related previous projects caused problems?

•Which aspects of similar/related previous projects had large maintenance expenses?

•Which parts of the requirements and design are unclear or poorly thought out?

•What do the developers think are the highest-risk aspects of the application?

•What kinds of problems would cause the worst publicity?

•What kinds of problems would cause the most customer service complaints?

•What kinds of tests could easily cover multiple functionalities?

•Which tests will have the best high-risk-coverage to time-required ratio?

What steps are needed to develop and run software tests?


The following are some of the steps to consider:

•Obtain requirements, functional design, and internal design specifications and other necessary
documents.

•Obtain budget and schedule requirements

•Determine project-related personnel and their responsibilities, reporting requirements, required
standards and processes (such as release processes, change processes, etc.)

•Identify application's higher-risk aspects, set priorities, and determine scope and limitations of
tests

•Determine test approaches and methods - unit, integration, functional, system, load, usability
tests, etc.

•Determine test environment requirements (hardware, software, communications, etc.)

•Determine testware requirements (record/playback tools, coverage analyzers, test tracking,
problem/bug tracking, etc.)

•Determine test input data requirements

•Identify tasks, those responsible for tasks, and labor requirements

•Set schedule estimates, timelines, milestones

•Determine input equivalence classes, boundary value analyses, error classes

•Prepare test plan document and have needed reviews/approvals

•Write test cases

•Have needed reviews/inspections/approvals of test cases

•Prepare test environment and testware, obtain needed user manuals/reference
documents/configuration guides/installation guides, set up test tracking processes, set up logging
and archiving processes, set up or obtain test input data

•Obtain and install software releases

•Perform tests

•Evaluate and report results

•Track problems/bugs and fixes

•Retest as needed

•Maintain and update test plans, test cases, test environment, and testware through the life cycle.
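The "input equivalence classes, boundary value analyses" step above can be illustrated for a hypothetical field that accepts ages 18 to 65 inclusive:

```python
# Hypothetical validation rule under test.
LOW, HIGH = 18, 65

def is_valid_age(age):
    return LOW <= age <= HIGH

# Equivalence classes: one representative each for below, inside, above.
equivalence_cases = {17: False, 40: True, 70: False}

# Boundary values: each edge and its immediate neighbours.
boundary_cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}

checks = {age: is_valid_age(age) == expected
          for age, expected in {**equivalence_cases, **boundary_cases}.items()}
```

Boundary cases catch the classic off-by-one errors (e.g. `<` written where `<=` was intended) that mid-range equivalence-class values would miss.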

What's the role of documentation in QA?


Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices should
be documented such that they are repeatable. Specifications, designs, business rules, inspection
reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should
all be documented. There should ideally be a system for easily finding and obtaining documents
and determining what documentation will have a particular piece of information. Change
management for documentation should be used if possible.

What makes a good QA or Test manager?


A good QA, test, or QA/Test(combined) manager should:

•be familiar with the software development process.

•be able to maintain enthusiasm of their team and promote a positive atmosphere, despite what is a
somewhat 'negative' process (e.g., looking for or preventing problems).

•be able to promote teamwork to increase productivity.

•be able to promote cooperation between software, test, and QA engineers.

•have the diplomatic skills needed to promote improvements in QA processes.

•have the ability to withstand pressures and say 'no' to other managers when quality is insufficient
or QA processes are not being adhered to.

•have people judgment skills for hiring and keeping skilled personnel.

•be able to communicate with technical and non-technical people, engineers, managers, and
customers.

•be able to run meetings and keep them focused.

What makes a good Software QA engineer?


The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able
to understand the entire software development process and how it can fit into the business
approach and goals of the organization. Communication skills and the ability to understand various
sides of issues are important. In organizations in the early stages of implementing QA processes,
patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's
missing' is important for inspections and reviews.

What makes a good test engineer?


A good test engineer has a 'test to break' attitude, an ability to take the point of view of the
customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in
maintaining a cooperative relationship with developers, and an ability to communicate with both
technical (developers) and non-technical (customers, management) people is useful. Previous
software development experience can be helpful as it provides a deeper understanding of the
software development process, gives the tester an appreciation for the developers' point of view,
and reduces the learning curve in automated test tool programming. Judgment skills are needed to
assess high-risk areas of an application on which to focus testing efforts when time is limited.

Why do we need to Test?


Defects can exist in software because it is developed by humans, who can make mistakes during
development. However, it is the primary duty of a software vendor to ensure that the software
delivered does not have defects and that the customer's day-to-day operations are not
affected. This can be achieved by rigorously testing the software. The most common origins of
software bugs are:

• Poor understanding and incomplete requirements


• Unrealistic schedule

• Fast changes in requirements
• Too many assumptions and complacency

What are the test cases like for product testing? Provide an example of a test plan.
For product testing, the test plan includes more rigorous testing, since most of these products are
off-the-shelf CD purchases or internet downloads.

Some of the common parameters in testing must include:
1) Testing on different operating systems
2) Installations done from CD-ROM drives with different machine configurations
3) Installations done from CD-ROM drives with different machine configurations and different
versions of browsers and software service packs
4) License key functionality
5) Evaluation version checks and full version checks, with reference to the evaluation keys that
need to be processed

1. What do we normally check for in database testing?


In DB testing we need to check:
1. Field size validation
2. Check constraints
3. Whether indexes are created (for performance-related issues)
4. Stored procedures
5. Whether the field size defined in the application matches that in the DB
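The first three checks can be sketched against an in-memory SQLite table. The `employee` schema is invented for illustration; note that SQLite ignores VARCHAR length limits, so the field-size rule is expressed here as a CHECK constraint.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee (
        emp_id INTEGER PRIMARY KEY,
        name   TEXT NOT NULL CHECK (length(name) <= 30), -- field size (1, 5)
        salary REAL CHECK (salary >= 0)                  -- check constraint (2)
    )""")
conn.execute("CREATE INDEX idx_emp_name ON employee(name)")  # index (3)

def rejected(sql, params):
    """True if the insert violates a constraint."""
    try:
        conn.execute(sql, params)
        return False
    except sqlite3.IntegrityError:
        return True

# Field-size validation: a 31-character name must be rejected.
size_ok = rejected("INSERT INTO employee VALUES (1, ?, 50000)", ("x" * 31,))
# Check constraint: a negative salary must be rejected.
check_ok = rejected("INSERT INTO employee VALUES (2, 'Ann', ?)", (-1,))
# Index presence: confirm the index actually exists.
index_names = [row[1] for row in conn.execute("PRAGMA index_list('employee')")]
```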

2. What is Database testing?


Database testing basically includes the following:
1) Data validity testing
2) Data integrity testing
3) Performance related to the database
4) Testing of procedures, triggers, and functions

For data validity testing you should be good at SQL queries.
For data integrity testing you should know about referential integrity and the different constraints.
For performance-related testing you should have an idea about the table structure and design.
For testing procedures, triggers, and functions you should be able to understand them.

3. How do you test a database manually? Explain with an example.


Observing that operations, which are operated on front-end is effected on back-end or not.
The approach is as follows :
While adding a record through front-end check back-end that addition of record is effected or not.
So same for delete, update.
Ex: Enter employee record in database through front-end and check if the record is added or not to
the back-end(manually).
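This front-end/back-end check can be sketched with Python's built-in sqlite3 module; the `add_employee` function here stands in for a hypothetical front-end operation:

```python
import sqlite3

# In-memory database stands in for the application's back-end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, name TEXT)")

def add_employee(emp_id, name):
    """Stand-in for the front-end 'add record' operation."""
    conn.execute("INSERT INTO employee (emp_id, name) VALUES (?, ?)", (emp_id, name))
    conn.commit()

# Front-end action: add a record.
add_employee(101, "Alice")

# Back-end check: query the table directly to confirm the insert took effect.
row = conn.execute("SELECT name FROM employee WHERE emp_id = 101").fetchone()
assert row == ("Alice",)

# Same idea for delete: remove the record, then confirm its absence in the back-end.
conn.execute("DELETE FROM employee WHERE emp_id = 101")
conn.commit()
assert conn.execute("SELECT * FROM employee WHERE emp_id = 101").fetchone() is None
```

The same pattern extends to update: change a field through the front-end, then select the row directly and compare values.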

1. What criteria would you use to select Web transactions for load testing?

This again comes from the voice of the customer: which transactions of the application are most
commonly used. We cannot load test all transactions, so we need to identify the business-critical
transactions, either by talking to the business users or by analyzing usage data.

2. For what purpose are virtual users created?


Virtual users are created to emulate real users.

3. Why is it recommended to add verification checks to all your scenarios?


Verification checks are added to the scenarios to verify the functional flow.

4. In what situation would you want to parameterize a text verification check?


Verification is done after the test results are sent to the developer and the developer fixes the
reported defects; the tester then needs to verify the bugs returned to him. A text verification
check is parameterized when the expected text changes with the input data.

5. Why do you need to parameterize fields in your virtual user script?


The need for parameterization arises, for example, when testing the insertion of a record into a
table that has a primary key field. The recorded vuser script tries to enter the same record into
the table for every virtual user, and the inserts fail due to the integrity constraint. In that
situation we definitely need parameterization.
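A minimal sketch of this failure mode using sqlite3 (the table and script names are illustrative): replaying the same recorded value violates the primary key, while parameterized data does not.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, item TEXT)")

def vuser_script(order_id):
    """Stand-in for one virtual user's recorded insert step."""
    conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, "book"))

# Unparameterized replay: every vuser submits the same recorded value,
# so the second insert violates the primary-key constraint.
vuser_script(1)
failed = False
try:
    vuser_script(1)
except sqlite3.IntegrityError:
    failed = True
assert failed

# Parameterized replay: each vuser draws a unique value from a data pool.
for order_id in (2, 3, 4):
    vuser_script(order_id)
assert conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 4
```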

6. What are the reasons why parameterization is necessary when load testing the Web server
and the database server?
Parameterization is done to check how your application performs the same operation with different
data. In LoadRunner it is necessary when making a single user request the same page several times;
the same applies to the database server.

7. How can data caching have a negative effect on load testing results?
Data caching can have a negative effect on load testing results: cached responses are served
without real processing, making measured response times unrealistically fast. Caching can be
altered according to the requirements of the scenario in the run-time settings.

8. What usually indicates that your virtual user script has dynamic data that is dependent on
your parameterized fields?
Use the extended logging option of reporting.

9. What are the benefits of creating multiple actions within any virtual user script?
Reusability, repeatability, reliability.

10. Load Testing - What should be analyzed?


To determine the performance of the system, the following metrics should be measured:
1) Response time: the time in which the system responds to a transaction, i.e., the interval
between submission of a request and receipt of the response.
2) Think time: the time a real user pauses between transactions; it is emulated to make the load realistic.

11. What is the difference between Load testing and Performance Testing?
Performance testing verifies load, volume and response time as defined by requirements, while
load testing is testing an application under heavy loads to determine at what point the system
response time degrades.

What makes a good Software QA engineer?


The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able
to understand the entire software development process and how it can fit into the business
approach and goals of the organization. Communication skills and the ability to understand various
sides of issues are important. In organizations in the early stages of implementing QA processes,
patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's
missing' is important for inspections and reviews.


Test Cases

Write test cases for testing an ATM machine, a coffee blending machine and a telephone handset.

Here the test cases should be written in an organized way.

Coffee Machine Test Cases

1. Verify the coffee machine works properly by switching ON the power supply.
2. Verify the coffee machine when the power supply is improper.
3. Verify that all buttons on the machine are visible.
4. Verify the indicator light shows the machine is turned ON after switching on the power supply.
5. Verify the machine when there is no water.
6. Verify the machine when there is no coffee powder.
7. Verify the machine when there is no milk.
8. Verify the machine when there is no sugar.
9. Verify the machine operation when it is empty.
10. Verify the machine operation when all the ingredients are up to the capacity level.
11. Verify the machine operation when the water quantity is less than its limit.
12. Verify the machine operation when the milk quantity is less than its capacity limit.
13. Verify the machine operation when the coffee powder is less than its capacity limit.
14. Verify the machine operation when the sugar available is less than its capacity limit.
15. Verify the machine operation when a metal piece is stuck inside the machine.
16. Verify the machine by pressing the coffee button and check that it pours coffee with the appropriate mixture and taste.
17. Verify the machine by pressing the tea button and check that it pours tea with the appropriate mixture and taste.
18. Verify that it fills the coffee cup to the appropriate quantity.
19. Verify the coffee machine operation within seconds after pouring milk, sugar, water etc.; it should display a message.
20. Verify the operation of all the buttons.
21. Verify the machine operation by pressing the buttons one after the other.
22. Verify the machine operation by pressing two buttons at a time.
23. Verify the machine operation at the time of power fluctuations.
24. Verify the machine operation when all the ingredients are overloaded.
25. Verify the machine operation when one of the ingredients is overloaded and the others are up to the limit.
26. Verify the machine operation when one or some of the parts inside the machine are damaged.

What are negative scenarios?


Testing to see whether the application does not do what it is not supposed to do.

What are positive scenarios?


Testing to see whether the application does what it is supposed to do.

In a web page, if there are two text boxes (one for a Name field, another for a Telephone No.
field), supported by "Save" & "Cancel" buttons, derive some test cases.
What more information do you need?
Here is a sample list of questions that you can ask:
Field validation, i.e., alphanumeric for Name and numeric for Telephone No.
Enabled/disabled state
Focus
Boundary conditions (i.e., what is the max length for Name and Telephone No.?)
Field size
GUI standards for the controls

Some test cases can be as follows (they should be organized):


Whether it accepts a valid name entry.
Whether it accepts a valid telephone no. entry.
Whether it accepts an over-long telephone no., etc.

What are individual test cases and workflow test cases? Why do we do workflow scenarios?
An individual test case is one that is for a single feature or requirement. However, it is important that
related sequences of features be tested as well, as these correspond to the units of work that users will
typically perform. It is important for the system tester to become familiar with what users
intend to do with the product and how they intend to do it. Such testing can reveal errors that might
not ordinarily be caught otherwise. For example, while each operation in a series might produce
the correct results, it is possible that intermediate results get lost or corrupted between operations.

How do you determine what to test?


Based upon the user requirement document.

Have you ever written test cases or did you just execute those written by others?
Yes, I was involved in preparing and executing test cases in all the projects.

What are the properties of a good requirement?
Understandable, Clear, Concise, Total Coverage of the application
What type of document do you need for QA, QC and testing?
Following is the list of documents required by QA and QC teams
Business requirements
SRS
Use cases
Test plan
Test cases

What is a good test case?


Accurate - tests what it’s designed to test
Economical - no unnecessary steps
Repeatable, reusable - keeps on going
Traceable - to a requirement
Appropriate - for test environment, testers
Self standing - independent of the writer
Self cleaning - picks up after itself

How to Write Better Test Cases


Test cases and software quality
Anatomy of a test case
Improving testability
Improving productivity
The seven most common mistakes
Case study

What's a 'test case'?


•A test case is a document that describes an input, action, or event and an expected response, to
determine if a feature of an application is working correctly. A test case should contain particulars
such as test case identifier, test case name, objective, test conditions/setup, input data requirements,
steps, and expected results.

•Note that the process of developing test cases can help find problems in the requirements or
design of an application, since it requires completely thinking through the operation of the
application. For this reason, it's useful to prepare test cases early in the development cycle if
possible.

How will you check that your test cases covered all the requirements?
By using a traceability matrix.
A traceability matrix is the matrix showing the relationship between the requirements and the test cases.

For a triangle (the sum of any two sides must be greater than the third side), what is the minimal
number of test cases required?
The answer is 3:

1. Measure all sides of the triangle.
2. Add the two smaller sides of the triangle and store the result as Res.
3. Compare Res with the largest side of the triangle.
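The steps above can be sketched as a validity check with one test case per outcome (using the strict form of the triangle inequality, so the degenerate case where the sum equals the third side counts as invalid):

```python
def is_triangle(a, b, c):
    """Valid triangle: the sum of every pair of sides exceeds the third side."""
    return a + b > c and a + c > b and b + c > a

# Three minimal cases covering the three outcomes of the comparison:
assert is_triangle(3, 4, 5)        # valid: 3 + 4 > 5
assert not is_triangle(1, 2, 3)    # degenerate: 1 + 2 == 3
assert not is_triangle(1, 2, 10)   # invalid: 1 + 2 < 10
```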

Test Plan

What is UML and how it is used for testing?


The Unified Modeling Language (UML) is the industry-standard language for specifying,
visualizing, constructing, and documenting the artifacts of software systems. It simplifies the
complex process of software design, making a "blueprint" for construction. UML state charts
provide a solid basis for test generation in a form that can be easily manipulated. This technique
includes coverage criteria that enable highly effective tests to be developed. A tool has been
developed that uses UML state charts produced by Rational Software Corporation's Rational Rose
tool to generate test data.

What is good code?


These are some important qualities of good code
Cleanliness: Clean code is easy to read; this lets people read it with minimum effort so that they
can understand it easily.

Consistency: Consistent code makes it easy for people to understand how a program works; when
reading consistent code; one subconsciously forms a number of assumptions and expectations
about how the code works, so it is easier and safer to make modifications to it.

Extensibility: General-purpose code is easier to reuse and modify than very specific code with lots
of hard coded assumptions. When someone wants to add a new feature to a program, it will
obviously be easier to do so if the code was designed to be extensible from the beginning.

Correctness: Finally, code that is designed to be correct lets people spend less time worrying
about bugs and more time enhancing the features of a program.

What are the entrance and exit criteria in the system test?
Entrance and exit criteria of each testing phase is written in the master test plan.

Entrance Criteria:
-Integration exit criteria have been successfully met.

-All installation documents are completed.

-All shippable software has been successfully built

-System test plan is baselined by completing the walkthrough of the test plan.

-Test environment should be setup.

-All severity 1 MR’s of integration test phase should be closed.

Exit Criteria:
-All the test cases in the test plan should be executed.

-All MR’s/defects are either closed or deferred.

-Regression testing cycle should be executed after closing the MR’s.

-All documents are reviewed, finalized and signed-off.

What is master test plan? What it contains? Who is responsible for writing it?
OR
What is a test plan? Who is responsible for writing it? What it contains.
OR
What's a 'test plan'? What did you include in a test plan?
A software project test plan is a document that describes the objectives, scope, approach, and focus
of a software testing effort. The process of preparing a test plan is a useful way to think through the
efforts needed to validate the acceptability of a software product. The completed document will
help people outside the test group understand the 'why' and 'how' of product validation. It should
be thorough enough to be useful but not so thorough that no one outside the test group will read it.
The following are some of the items that might be included in a test plan, depending on the
particular project:

•Title

•Identification of software including version/release numbers

•Revision history of document including authors, dates, approvals

•Table of Contents

•Purpose of document, intended audience

•Objective of testing effort

•Software product overview

•Relevant related document list, such as requirements, design documents, other test plans, etc.

•Relevant standards or legal requirements

•Traceability requirements

•Relevant naming conventions and identifier conventions

•Overall software project organization and personnel/contact-info/responsibilities

•Test organization and personnel/contact-info/responsibilities

•Assumptions and dependencies

•Project risk analysis

•Testing priorities and focus

•Scope and limitations of testing

•Test outline - a decomposition of the test approach by test type, feature, functionality, process,
system, module, etc. as applicable

•Outline of data input equivalence classes, boundary value analysis, error classes

•Test environment - hardware, operating systems, other required software, data configurations,
interfaces to other systems

•Test environment validity analysis - differences between the test and production systems and their
impact on test validity.

•Test environment setup and configuration issues

•Software migration processes

•Software CM processes

•Test data setup requirements

•Database setup requirements

•Outline of system-logging/error-logging/other capabilities, and tools such as screen capture
software, that will be used to help describe and report bugs

•Discussion of any specialized software or hardware tools that will be used by testers to help track
the cause or source of bugs

•Test automation - justification and overview

•Test tools to be used, including versions, patches, etc.

•Test script/test code maintenance processes and version control

•Problem tracking and resolution - tools and processes

•Project test metrics to be used

•Reporting requirements and testing deliverables

•Software entrance and exit criteria

•Initial sanity testing period and criteria

•Test suspension and restart criteria

•Personnel allocation

•Personnel pre-training needs

•Test site/location

•Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact
persons, and coordination issues

•Relevant proprietary, classified, security, and licensing issues.

•Open issues

•Appendix - glossary, acronyms, etc.

The team-lead or a Sr. QA Analyst is responsible to write this document.

Why is test plan a controlled document?


Because it controls the entire testing process. Testers have to follow this test plan during the entire
testing process.


What's the big deal about 'requirements'?


One of the most reliable methods of ensuring problems, or failure, in a complex software project is
to have poorly documented requirements specifications. Requirements are the details describing an
application's externally-perceived functionality and properties. Requirements should be clear,
complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would
be, for example, 'user-friendly' (too subjective). A testable requirement would be something like
'the user must enter their previously-assigned password to access the application'. Determining and
organizing requirements details in a useful and efficient way can be a difficult effort; different
methods are available depending on the particular project. Many books are available that describe
various approaches to this task.

Care should be taken to involve ALL of a project's significant 'customers' in the requirements
process. 'Customers' could be in-house personnel or out, and could include end-users, customer
acceptance testers, customer contract officers, customer management, future software maintenance
engineers, salespeople, etc. Anyone who could later derail the project if their expectations aren't
met should be included if possible.

Organizations vary considerably in their handling of requirements specifications. Ideally, the
requirements are spelled out in a document with statements such as 'The product shall.....'. 'Design'
specifications should not be confused with 'requirements'; design specifications should be traceable
back to the requirements.
In some organizations requirements may end up in high level project plans, functional specification
documents, in design documents, or in other documents at various levels of detail. No matter what
they are called, some type of documentation with detailed requirements will be needed by testers
in order to properly plan and execute tests. Without such documentation, there will be no clear-cut
way to determine if a software application is performing correctly.

5. A GUI contains 2 fields. Field 1 accepts the value of x and Field 2 displays the result of the
formula a+b/c-d, where a=0.4*x, b=1.5*a, c=x, d=2.5*b. How many system test cases would
you write?
The GUI contains 2 fields:

Field 1 to accept the value of x, and

Field 2 to display the result,

so there is only one test case to write.
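A quick sketch of the formula (the function name is illustrative; note that c = x, so an input of x = 0 would cause a division by zero, an edge case worth keeping in mind alongside the single happy-path case):

```python
def field2(x):
    """Compute a + b/c - d for the formula shown in Field 2."""
    a = 0.4 * x
    b = 1.5 * a
    c = x
    d = 2.5 * b
    return a + b / c - d

# For any x != 0, b/c reduces to the constant 0.6, so the result is 0.6 - 1.1 * x.
assert abs(field2(1.0) - (-0.5)) < 1e-9
assert abs(field2(2.0) - (-1.6)) < 1e-9
```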

4. Let's say we have a GUI map and scripts, and some 5 new pages are included in an
application. How do we handle that?
By integration testing.

3. Given a Yahoo application, how many test cases can you write?


First we need the requirements of the Yahoo application.
Test cases are written against given requirements, so for any working web application or
new application, requirements are needed to prepare test cases. The number of test cases depends
on the requirements of the application.

Note to learners: A test engineer must have knowledge of the SDLC. I suggest learners take any
one existing application and start practicing by writing requirements.

2. Complete Testing with Time Constraints: How do you complete the testing
when you have a time constraint?
If I am doing regression testing and I do not have sufficient time, I have to decide which sort
of regression testing to go for:
1) Unit regression testing
2) Regional regression testing
3) Full regression testing

Testing Scenarios: How do you know that all the scenarios for testing are covered?
By using the Requirement Traceability Matrix (RTM) we can ensure that we have covered all the
functionalities in the test coverage.

The RTM is a document that traces user requirements from analysis through implementation. The RTM
can be used as a completeness check to verify that all the requirements are present and that there are
no unnecessary/extra features, and as a maintenance guide for new personnel.

We can use a simple format in an Excel sheet where we map each functionality to a test case ID.
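The Excel-sheet mapping can be sketched as a dictionary from requirement to covering test cases; the completeness check is then a scan for requirements with no coverage (the IDs are illustrative):

```python
# Illustrative RTM: requirement ID -> test case IDs covering it.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],          # no coverage yet
}

# Completeness check: flag any requirement with no covering test case.
uncovered = [req for req, cases in rtm.items() if not cases]
assert uncovered == ["REQ-003"]
```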

Bug Report

What is the role of a bug tracking system?


A bug tracking system captures, manages and communicates changes, issues and tasks, providing
basic process control to ensure coordination and communication within and across development
and content teams at every step.

Why you write MR?


MR is written for reporting problems/errors or suggestions in the software.

What is MR?
MR is a Modification Request also known as Defect Report, a request to modify the program so
that program does what it is supposed to do.

Low priority and high severity.


Suppose you have a bug where the application crashes for a wrong use case, and only 1 in 1000
customers will perform those steps. Here the severity is very high, as the application crashes,
but the priority to fix this bug is very low, as it affects only one customer, and only for a
wrong use case.

High priority and low severity.


Suppose you have a bug where there is a spelling mistake in the name of your project/product.
Here the severity is very low, as it does not affect anything; it is just a spelling mistake. But
the priority to fix this bug is very high, because if anyone catches it, the image of the product
will suffer and the customer will get a bad impression. So this is of high priority.

Severity tells us how bad the defect is. Priority tells us how soon it is desired to fix the problem.

In some companies, the defect reporter sets the severity and the triage team or product
management sets the priority. In a small company, or project (or product), particularly where there
aren't many defects to track, you can expect you don't really need both since a high severity defect
is also a high priority defect. But in a large company, and particularly where there are many
defects, using both is a form of risk management.

In a simple scheme, severity and probability can each be rated on a three-point scale where Major
would be 1 and Trivial would be 3. You can add or multiply the two values together (there is
only a small difference in the outcome) and then use the event's risk value to determine how you
should address the problem. The lower values must be addressed and the higher values can wait.
This is based on a military standard, MIL-STD-882.

The standard uses a four-point severity rating (rather than three): Catastrophic, Critical, Marginal,
Negligible. It then uses a five-point (rather than three-point) probability rating: Frequent, Probable,
Occasional, Remote, Improbable. Then, rather than using a mathematical calculation to determine a
risk level, it uses a predefined chart.
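The predefined-chart idea can be sketched as a lookup table; the cell values below are illustrative, not the standard's actual assignments:

```python
# Illustrative risk chart in the spirit of MIL-STD-882:
# rows are severity categories, columns are probability categories.
PROBABILITY = ["Frequent", "Probable", "Occasional", "Remote", "Improbable"]

CHART = {
    "Catastrophic": ["High", "High", "High", "Serious", "Medium"],
    "Critical":     ["High", "High", "Serious", "Medium", "Low"],
    "Marginal":     ["Serious", "Medium", "Medium", "Low", "Low"],
    "Negligible":   ["Medium", "Low", "Low", "Low", "Low"],
}

def risk_level(severity, probability):
    """Look up the risk level for a (severity, probability) pair."""
    return CHART[severity][PROBABILITY.index(probability)]

assert risk_level("Catastrophic", "Frequent") == "High"
assert risk_level("Negligible", "Improbable") == "Low"
```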

Blocker: This bug prevents developers from testing or developing the software.
Critical: The software crashes, hangs, or causes you to lose data.
Major: A major feature is broken.
Normal: It's a bug that should be fixed.
Minor: Minor loss of function, and there's an easy work around.
Trivial: A cosmetic problem, such as a misspelled word or misaligned text.
Enhancement: Request for new feature or enhancement.

5 Level Error Classification Method

1. Catastrophic:
Defects that could (or did) cause disastrous consequences for the system in question.
E.g.) critical loss of data, critical loss of system availability, critical loss of
security, critical loss of safety, etc.

2. Severe:
Defects that could (or did) cause very serious consequences for the system in question.
E.g.) A function is severely broken, cannot be used and there is no workaround.

3. Major:
Defects that could (or did) cause significant consequences for the system in question - A
defect that needs to be fixed but there is a workaround.
E.g. 1.) losing data from a serial device during heavy loads.
E.g. 2.) Function badly broken but workaround exists

4. Minor:
Defects that could (or did) cause small or negligible consequences for the system in
question. Easy to recover or workaround.
E.g.1) Error messages misleading.
E.g.2) Displaying output in a font or format other than what the customer desired.

5. No Effect:
Trivial defects that can cause no negative consequences for the system in question. Such
defects normally produce no erroneous outputs.
E.g.1) simple typos in documentation.
E.g.2) bad layout or misspelling on screen.

What criteria you will follow to assign severity and due date to the MR?

Defects (MR) are assigned severity as follows:

Critical: show stoppers (the system is unusable)


High: The system is very hard to use and some cases are prone to convert to critical issues if not
taken care of.
Medium: The system functionality has a major bug but is not too critical but needs to be fixed in
order for the AUT to go to production environment.
Low: cosmetic (GUI related)

How do you help the developer to track the faults in the software?


By providing him with details of the defects, which include the environment, test data, steps
followed etc., and helping him to reproduce the defect in his environment.

You find a bug and the developer says "It's not possible". What do you do?
I'll discuss with him under what conditions (working environment) the bug was produced. I'll
provide him with more details and a snapshot of the bug.

What is the difference between exception and validation testing?


Validation testing aims to demonstrate that the software functions in a manner that can be
reasonably expected by the customer. Testing the software in conformance to the Software
Requirements Specifications.

Exception testing deals with handling the exceptions (unexpected events) while the AUT is run.
Basically this testing involves how to change the control flow of the AUT when an exception
arises.

What do you like about Windows?


Interface and User friendliness
Windows is one the best software I ever used. It is user friendly and very easy to learn.


How do you feel about cyclomatic complexity?


Cyclomatic complexity is a measure of the number of linearly independent paths through a
program module, i.e., of the complexity of code related to the number of ways there are to
traverse a piece of code. This determines the minimum number of inputs you need to test all
ways to execute the program.
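As a rough illustration, decision points can be counted from Python source with the standard ast module; McCabe's metric for a single function is then one plus the number of decision points:

```python
import ast

def cyclomatic_complexity(source):
    """Approximate McCabe complexity: one plus the number of decision points."""
    tree = ast.parse(source)
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While, ast.And, ast.Or,
                          ast.ExceptHandler, ast.IfExp))
        for node in ast.walk(tree)
    )
    return decisions + 1

code = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
# Two if-branches -> complexity 3: three linearly independent paths to cover.
assert cyclomatic_complexity(code) == 3
```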

Describe your experience with code analyzers?


Code analyzers generally check for bad syntax, logic, and other language-specific programming
errors at the source level. This level of testing is often referred to as unit testing and server
component testing. I used code analyzers as part of white box testing.

What is ODBC?
Open Database Connectivity (ODBC) is an open standard application-programming interface
(API) for accessing a database. ODBC is based on Structured Query Language (SQL) Call-Level
Interface. It allows programs to use SQL requests that will access databases without having to
know the proprietary interfaces to the databases. ODBC handles the SQL request and converts it
into a request the individual database system understands.


Which MR tool did you use to write MRs?


Test Director
Rational ClearQuest.
PVCS Tracker


What information does MR contain?


OR
Describe to me the basic elements you put in a defect report.
OR
What is the procedure for bug reporting?
The bug needs to be communicated and assigned to developers that can fix it. After the problem is
resolved, fixes should be re-tested, and determinations made regarding requirements for regression
testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in
place, it should encapsulate these processes. A variety of commercial
problem-tracking/management software tools are available.

The following are items to consider in the tracking process:

•Complete information such that developers can understand the bug, get an idea of its severity, and
reproduce it if necessary.

•Bug identifier (number, ID, etc.)

•Current bug status (e.g., 'Released for Retest', 'New', etc.)

•The application name or identifier and version

•The function, module, feature, object, screen, etc. where the bug occurred

•Environment specifics, system, platform, relevant hardware specifics

•Test case name/number/identifier

•One-line bug description

•Full bug description

•Description of steps needed to reproduce the bug if not covered by a test case or if the developer
doesn't have easy access to the test case/test script/test tool

•Names and/or descriptions of file/data/messages/etc. used in test

•File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in
finding the cause of the problem

•Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

•Was the bug reproducible?

•Tester name

•Test date

•Bug reporting date

•Name of developer/group/organization the problem is assigned to

•Description of problem cause

•Description of fix

•Code section/file/module/class/method that was fixed

•Date of fix

•Application version that contains the fix

•Tester responsible for retest

•Retest date

•Retest results

•Regression testing requirements

•Tester responsible for regression tests

•Regression testing results

How is defect tracking used?


It is used to assign bugs to the development team, and it prompts the developer to check the error.

How can it be known when to stop testing?


This can be difficult to determine. Many modern software applications are so complex, and run in
such an interdependent environment, that complete testing can never be done. Common factors in
deciding when to stop are:
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends.
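
As a rough illustration, several of these criteria can be combined into a single exit-criteria check. The function, parameter names, and all thresholds below are hypothetical, not standard values:

```python
# Hypothetical exit-criteria check; every threshold here is illustrative.
def ready_to_stop(pass_rate, coverage, open_critical_bugs, budget_left):
    """Return True when the illustrative stop-testing criteria are all met."""
    return (
        pass_rate >= 0.95            # test cases completed with a target percentage passed
        and coverage >= 0.80         # code/functionality coverage reaches a specified point
        and open_critical_bugs == 0  # bug rate/severity falls below a set level
        and budget_left >= 0         # test budget not yet depleted
    )

print(ready_to_stop(0.97, 0.85, 0, 10))  # True
print(ready_to_stop(0.97, 0.85, 2, 10))  # False
```

In practice, deadlines usually override any such formula, but making the criteria explicit keeps the decision visible to the whole team.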

What should be done after a bug is found?


The bug needs to be communicated and assigned to developers that can fix it. After the problem is
resolved, fixes should be re-tested, and determinations made regarding requirements for regression
testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in
place, it should encapsulate these processes. A variety of commercial problem-
tracking/management software tools are available (see the 'Tools' section for web resources with
listings of such tools). The following are items to consider in the tracking process:

•Complete information such that developers can understand the bug, get an idea of its severity, and
reproduce it if necessary.

•Bug identifier (number, ID, etc.)

•Current bug status (e.g., 'Released for Retest', 'New', etc.)

•The application name or identifier and version

•The function, module, feature, object, screen, etc. where the bug occurred

•Environment specifics, system, platform, relevant hardware specifics

•Test case name/number/identifier

•One-line bug description

•Full bug description

•Description of steps needed to reproduce the bug if not covered by a test case or if the developer
doesn't have easy access to the test case/test script/test tool

•Names and/or descriptions of file/data/messages/etc. used in test

•File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in
finding the cause of the problem

•Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

•Was the bug reproducible?

•Tester name

•Test date

•Bug reporting date

•Name of developer/group/organization the problem is assigned to

•Description of problem cause

•Description of fix

•Code section/file/module/class/method that was fixed

•Date of fix

•Application version that contains the fix

•Tester responsible for retest

•Retest date

•Retest results

•Regression testing requirements

•Tester responsible for regression tests

•Regression testing results

A reporting or tracking process should enable notification of appropriate personnel at various
stages. For instance, testers need to know when retesting is needed, developers need to know when
bugs are found and how to get the needed information, and reporting/summary capabilities are
needed for managers.
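
A minimal sketch of what such a tracking record might look like in code. The field names cover only a subset of the items above and are assumptions, not any particular tool's schema:

```python
from dataclasses import dataclass, field

# Illustrative record for a problem-tracking system; field names are assumptions
# drawn from a subset of the tracking items listed above.
@dataclass
class BugRecord:
    bug_id: str
    application: str
    version: str
    summary: str               # one-line bug description
    severity: int              # e.g. 1 (critical) to 5 (low)
    status: str = "New"        # e.g. 'New', 'Assigned', 'Released for Retest'
    assigned_to: str = ""
    history: list = field(default_factory=list)

    def move_to(self, new_status: str, who: str = "") -> None:
        """Record a status change so retesters and managers can be notified."""
        self.history.append((self.status, new_status, who))
        self.status = new_status
        if who:
            self.assigned_to = who

bug = BugRecord("BUG-101", "Calculator", "2.3", "Truncation error in divide", severity=2)
bug.move_to("Assigned", who="dev-team")
bug.move_to("Released for Retest")
print(bug.status)  # Released for Retest
```

Keeping the transition history on the record is one simple way to provide the reporting and notification capabilities the paragraph above describes.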

Different types of defects


- Open Defects - The list of defects remaining in the defect tracking system with a status of Open.
Technical Support has access to the system, so a report noting the defect ID, the problem area, and
title should be sufficient.

- Deferred Defects - The list of defects remaining in the defect tracking system with a status of
deferred. Deferred means the technical product manager has decided not to address the issue with
the current release.

- Pending Defects - The list of defects remaining in the defect tracking system with a status of
pending. Pending refers to any defect waiting on a decision from a technical product manager
before a developer addresses the problem.

- Fixed Defects - The list of defects waiting for verification by QA.

- Closed Defects - The list of defects verified as fixed by QA during the project cycle.

The Release Package is compiled in anticipation of the Readiness Review meeting. It is reviewed
by the QA Process Manager during the QA Process Review Meeting and is provided to the Release
Board and Technical Support.
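
As an illustration, a release report can be assembled by bucketing defects on these status values. The helper function and sample data below are hypothetical:

```python
from collections import defaultdict

# Hypothetical defect data using the status values described above.
defects = [
    {"id": "D-1", "area": "UI", "title": "Misaligned label", "status": "Open"},
    {"id": "D-2", "area": "Calc", "title": "Rounding error", "status": "Fixed"},
    {"id": "D-3", "area": "IO", "title": "Slow export", "status": "Deferred"},
    {"id": "D-4", "area": "Calc", "title": "Overflow", "status": "Closed"},
]

def release_report(defects):
    """Bucket defects by status, noting the ID, problem area, and title,
    as compiled for the Release Package."""
    buckets = defaultdict(list)
    for d in defects:
        buckets[d["status"]].append((d["id"], d["area"], d["title"]))
    return dict(buckets)

report = release_report(defects)
print(sorted(report))  # ['Closed', 'Deferred', 'Fixed', 'Open']
```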

Bug Reports - Other Considerations


* If your bug is only intermittently reproducible, say so in your bug report, but don’t forget to file
it. You can always add the exact steps to reproduce later, whenever you (or anyone else) discover
them. This will also come to your rescue when someone else reports the same issue, especially if it’s a
serious one.
* Mention the error messages in the bug report, especially if they are numbered. For example,
error messages from the database.
* Mention the version numbers and build numbers in the bug reports.
* Mention the platforms on which the issue is reproducible, and state precisely the platforms on
which it is not reproducible. Also understand that there is a difference between the issue not being
reproducible on a particular platform and it simply not having been tested on that platform;
conflating the two leads to confusion.
* If you come across several problems having the same cause, write a single bug report, since a
single fix will resolve them all. Conversely, if you come across similar problems at different
locations that require the same kind of fix but in different places, write a separate bug report for
each problem. One bug report per fix.
* If the test environment on which the bug is reproducible is accessible to the developers,
mention the details for accessing this setup. This will save them the time of setting up the
environment to reproduce your bug.
* Under no circumstances should you hold on to any information regarding the bug.

Unnecessary iterations of the bug report between the developer and the tester before it is fixed are
simply time wasted on ineffective bug reporting.

How to Write Effective Bug Reports

The Purpose Of A Bug Report


When we uncover a defect, we need to inform the developers about it. A bug report is the medium
for this communication. The primary aim of a bug report is to let the developers see the failure with
their own eyes. If you can't be with them to make it fail in front of them, give them detailed
instructions so that they can make it fail for themselves. The bug report is a document that explains
the gap between the expected result and the actual result and details how to reproduce the
scenario.
After Finding The Defect

* Draft the bug report as soon as you are sure that you have found a bug, not at the end of the test
or the end of the day. Otherwise you might miss some detail; worse, you might miss the bug itself.
* Invest some time in diagnosing the defect you are reporting. Think of the possible causes; you
might end up uncovering some more defects. Mention your discoveries in your bug report. The
programmers will only be happy to see that you have made their job easier.
* Take some time off before re-reading your bug report. You might feel like rewriting it.

Defect Summary
The summary is the reader’s first interaction with your bug report, and the fate of your bug depends
heavily on how much attention the summary grabs. The rule is that every bug should have a
one-liner summary. It might sound like writing a good attention-grabbing advertising campaign,
but there are no exceptions. A good summary will not be more than 50-60 characters. Also, a good
summary should not carry any subjective representations of the defect.
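
These summary rules can be checked mechanically. A minimal sketch, assuming the ~60-character limit above and an illustrative (far from exhaustive) list of subjective words:

```python
# Illustrative lint for the one-liner summary rules above.
SUBJECTIVE_WORDS = {"dirty", "ugly", "stupid", "horrible"}  # example terms only

def check_summary(summary: str):
    """Return a list of problems with a bug-report summary, per the guidelines above."""
    problems = []
    if len(summary) > 60:
        problems.append("summary longer than ~60 characters")
    if any(w in summary.lower().split() for w in SUBJECTIVE_WORDS):
        problems.append("summary uses subjective wording")
    return problems

print(check_summary("Dirty UI on the settings page"))
print(check_summary("Settings page: Save button overlaps Cancel"))  # []
```
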
The Language

* Do not exaggerate the defect in the bug report. Similarly, do not understate it.
* However nasty the bug might be, do not forget that it’s the bug that’s nasty, not the
programmer. Never belittle the efforts of the programmer. Use euphemisms: 'Dirty UI' can be made
milder as 'Improper UI'. This ensures that the programmer's efforts are respected.
* Keep It Simple & Straight. You are not writing an essay or an article, so use simple language.
* Keep your target audience in mind while writing the bug report. They might be the developers,
fellow testers, managers, or in some cases, even the customers. The bug reports should be
understandable by all of them.

Steps To Reproduce
* The flow of the Steps To Reproduce should be logical.
* Clearly list down the pre-requisites.
* Write generic steps. For example, if a step requires the user to create a file and name it, do not
ask the user to name it something like "Mihir's file". It is better named "Test File".
* The Steps To Reproduce should be detailed. For example, if you want the user to save a
document from Microsoft Word, you can ask the user to go to the File menu and click on the Save
menu entry. You could also just say "save the document". But remember, not everyone will know
how to save a document from Microsoft Word, so it is better to stick to the first method.
* Test your Steps To Reproduce on a fresh system. You might find some steps that are missing,
or are extraneous.
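
The guidelines above can be illustrated with a small template for rendering steps to reproduce; the structure and the helper function are hypothetical, and the sample content follows the Microsoft Word example:

```python
# Hypothetical steps-to-reproduce template following the guidelines above:
# explicit pre-requisites, detailed steps, and generic names like "Test File".
STEPS = {
    "prerequisites": ["Microsoft Word is installed", "A new blank document is open"],
    "steps": [
        "Type any text into the document.",
        "Go to the File menu and click the Save menu entry.",
        'Name the file "Test File" and confirm.',
    ],
}

def render_steps(steps: dict) -> str:
    """Render pre-requisites and numbered steps as plain text for a bug report."""
    lines = ["Pre-requisites:"]
    lines += [f"  - {p}" for p in steps["prerequisites"]]
    lines += ["Steps to reproduce:"]
    lines += [f"  {i}. {s}" for i, s in enumerate(steps["steps"], 1)]
    return "\n".join(lines)

print(render_steps(STEPS))
```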

Test Data
Strive to write generic bug reports. The developers might not have access to your test data. If the
bug is specific to a certain test data, attach it with your bug report.

Screenshots
Screenshots are an essential part of the bug report. A picture is worth a thousand words, but do not
make a habit of unnecessarily attaching screenshots to every bug report. Ideally, your bug reports
should be effective enough on their own to enable the developers to reproduce the problem;
screenshots should be a medium for verification only.

* If you attach screenshots to your bug reports, ensure that they are not too heavy in terms of
file size. Use a format like JPG or GIF, but definitely not BMP.
* Use annotations on screenshots to pinpoint the problems. This will help the developers locate
the problem at a single glance.

Severity / Priority
* The impact of the defect should be thoroughly analyzed before setting the severity of the bug
report. If you think that your bug should be fixed with a high priority, justify it in the bug report.
This justification should go in the Description section of the bug report.
* If the bug is the result of regression from the previous builds/versions, raise the alarm. The
severity of such a bug may be low but the priority should be typically high.

Logs
Make it a point to attach logs or excerpts from the logs. This will help the developers analyze and
debug the system more easily. Most of the time, if logs are not attached and the issue is not
reproducible on the developer's end, they will come back to you asking for logs.

If the logs are not too large, say about 20-25 lines, you can paste them into the bug report. If they
are larger, add them to your bug report as an attachment; otherwise your bug report will look like a
log.
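
The inline-versus-attachment rule can be sketched as a one-line decision. The 25-line cutoff comes from the guideline above; the function name is an assumption:

```python
# Hedged sketch of the inline-vs-attachment guideline above.
INLINE_LIMIT = 25  # illustrative cutoff (~20-25 lines) from the guideline

def how_to_include_log(log_text: str) -> str:
    """Decide whether a log excerpt should be pasted inline or attached."""
    line_count = log_text.count("\n") + 1
    return "paste inline" if line_count <= INLINE_LIMIT else "attach as file"

short_log = "\n".join(f"line {i}" for i in range(10))
long_log = "\n".join(f"line {i}" for i in range(200))
print(how_to_include_log(short_log))  # paste inline
print(how_to_include_log(long_log))   # attach as file
```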

Top Ten Tips for Bug Tracking


1. A good tester will always try to reduce the repro steps to the minimal steps to reproduce; this is
extremely helpful for the programmer who has to find the bug.

2. Remember that the only person who can close a bug is the person who opened it in the first
place. Anyone can resolve it, but only the person who saw the bug can really be sure that what they
saw is fixed.

3. There are many ways to resolve a bug. FogBUGZ allows you to resolve a bug as fixed, won't
fix, postponed, not repro, duplicate, or by design.

4. Not Repro means that nobody could ever reproduce the bug. Programmers often use this when
the bug report is missing the repro steps.

5. You'll want to keep careful track of versions. Every build of the software that you give to testers
should have a build ID number so that the poor tester doesn't have to retest the bug on a version of
the software where it wasn't even supposed to be fixed.

6. If you're a programmer, and you're having trouble getting testers to use the bug database, just
don't accept bug reports by any other method. If your testers are used to sending you email with
bug reports, just bounce the emails back to them with a brief message: "please put this in the bug
database. I can't keep track of emails."

7. If you're a tester, and you're having trouble getting programmers to use the bug database, just
don't tell them about bugs - put them in the database and let the database email them.

8. If you're a programmer, and only some of your colleagues use the bug database, just start
assigning them bugs in the database. Eventually they'll get the hint.

9. If you're a manager, and nobody seems to be using the bug database that you installed at great
expense, start assigning new features to people using bugs. A bug database makes a great
"unimplemented feature" database, too.

10. Avoid the temptation to add new fields to the bug database. Every month or so, somebody will
come up with a great idea for a new field to put in the database. You get all kinds of clever ideas,
for example, keeping track of the file where the bug was found; keeping track of what % of the
time the bug is reproducible; keeping track of how many times the bug occurred; keeping track of
which exact versions of which DLLs were installed on the machine where the bug happened. It's
very important not to give in to these ideas. If you do, your new bug entry screen will end up with
a thousand fields that you need to supply, and nobody will want to input bug reports any more. For
the bug database to work, everybody needs to use it, and if entering bugs "formally" is too much
work, people will go around the bug database.

What are the different types of Bugs we normally see in any of the Project? Include the
severity as well.
The Life Cycle of a bug in general context is:

Bugs are usually logged by the development team (While Unit Testing) and also by testers (While
system or other type of testing).

From a tester's perspective:

A tester finds a new defect/bug and logs it using a defect tracking tool.

1. Its status is 'NEW' and it is assigned to the respective dev team (team lead or manager).
2. The team lead assigns it to a team member, so the status becomes 'ASSIGNED TO'.

3. The developer works on the bug, fixes it, and re-assigns it to the tester for testing. Now the status
is 'RE-ASSIGNED'.
4. The tester checks whether the defect is fixed; if it is, he changes the status to 'VERIFIED'.
5. If the tester has the authority (this depends on the company), he can change the status to
'FIXED' after verifying. Otherwise, the test lead can verify it and change the status to 'FIXED'.

6. If the defect is not fixed, he re-assigns the defect back to the dev team for re-fixing.

This is the life cycle of a bug.
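
The life cycle above can be sketched as a small state machine. The state names follow the text; the transition table itself is an illustrative assumption based on the steps described:

```python
# Minimal sketch of the bug life cycle described above.
# Allowed transitions are an assumption drawn from steps 1-6.
TRANSITIONS = {
    "NEW": ["ASSIGNED TO"],
    "ASSIGNED TO": ["RE-ASSIGNED"],              # developer fixes, sends back to tester
    "RE-ASSIGNED": ["VERIFIED", "ASSIGNED TO"],  # tester verifies, or re-assigns for re-fixing
    "VERIFIED": ["FIXED"],
}

def advance(current: str, target: str) -> str:
    """Move a bug to the next state, enforcing the life cycle above."""
    if target not in TRANSITIONS.get(current, []):
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

state = "NEW"
for nxt in ["ASSIGNED TO", "RE-ASSIGNED", "VERIFIED", "FIXED"]:
    state = advance(state, nxt)
print(state)  # FIXED
```

Real tools use more states (deferred, duplicate, won't fix, and so on), but the enforcement idea is the same: reject transitions the process does not allow.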

1. User Interface Defects - Low
2. Boundary Related Defects - Medium
3. Error Handling Defects - Medium
4. Calculation Defects - High
5. Improper Service Levels (Control flow defects) - High
6. Interpreting Data Defects - High
7. Race Conditions (Compatibility and Intersystem defects) - High
8. Load Conditions (Memory Leakages under load) - High
9. Hardware Failures - High
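
For illustration, the categories above can be represented as a simple severity lookup (category names abbreviated from the list; the mapping mirrors the text, not any standard):

```python
# The defect categories above as an illustrative severity lookup.
DEFECT_SEVERITY = {
    "User Interface": "Low",
    "Boundary Related": "Medium",
    "Error Handling": "Medium",
    "Calculation": "High",
    "Improper Service Levels": "High",
    "Interpreting Data": "High",
    "Race Conditions": "High",
    "Load Conditions": "High",
    "Hardware Failures": "High",
}

print(DEFECT_SEVERITY["Calculation"])      # High
print(DEFECT_SEVERITY["User Interface"])   # Low
```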
