
MANUAL TESTING INTERVIEW QUESTION PROFESSIONAL

1. How is testing affected by object-oriented designs?

Well-engineered object-oriented design can make it easier to trace from code to internal
design to functional design to requirements. While there will be little effect on black box
testing (where an understanding of the internal design of the application is unnecessary),
white-box testing can be oriented to the application's objects. If the application was well
designed, this can simplify test design.

2. What is Extreme Programming and what's it got to do with testing?

Extreme Programming (XP) is a software development approach for small teams on risk-prone
projects with unstable requirements. It was created by Kent Beck, who described
the approach in his book Extreme Programming Explained (See the Softwareqatest.com
Books page.). Testing (extreme testing) is a core aspect of Extreme Programming.
Programmers are expected to write unit and functional test code first - before the
application is developed. Test code is under source control along with the rest of the
code. Customers are expected to be an integral part of the project team and to help
develop scenarios for acceptance/black box testing. Acceptance tests are preferably
automated, and are modified and rerun for each of the frequent development iterations.
QA and test personnel are also required to be an integral part of the project team.
Detailed requirements documentation is not used, and frequent re-scheduling, re-
estimating, and re-prioritizing is expected. For more info see the XP-related listings in the
Softwareqatest.com Other Resources section.

3. Will automated testing tools make testing easier?

Possibly. For small projects, the time needed to learn and implement them may not be
worth it. For larger projects, or on-going long-term projects they can be valuable. A
common type of automated tool is the record/playback type. For example, a tester could
click through all combinations of menu choices, dialog box choices, buttons, etc. in an
application GUI and have them recorded and the results logged by a tool. The recording
is typically in the form of text based on a scripting language that is interpretable by the
testing tool. If new buttons are added, or some underlying code in the application is
changed, etc. the application might then be retested by just playing back the recorded
actions, and comparing the logging results to check effects of the changes. The problem
with such tools is that if there are continual changes to the system being tested, the
recordings may have to be changed so much that it becomes very time-consuming to
continuously update the scripts. Additionally, interpretation and analysis of results
(screens, data, logs, etc.) can be a difficult task.
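The core loop of a record/playback tool can be sketched in a few lines of Python. The recorded action format and the stub "application" below are invented purely for illustration and do not correspond to any real tool:

```python
# A recorded script is typically a list of actions plus expected results.
recorded_script = [
    {"action": "click", "target": "File>Open", "expected": "dialog shown"},
    {"action": "click", "target": "Cancel",    "expected": "dialog closed"},
]

def play_back(script, execute):
    """Replay each recorded action and compare the logged result
    against the result captured at recording time."""
    failures = []
    for step in script:
        actual = execute(step["action"], step["target"])
        if actual != step["expected"]:
            failures.append((step["target"], step["expected"], actual))
    return failures

# Stub 'application' that still behaves as it did when recorded.
responses = {"File>Open": "dialog shown", "Cancel": "dialog closed"}
print(play_back(recorded_script, lambda action, target: responses[target]))  # [] = no regressions
```

This also illustrates the maintenance problem described above: if the application's responses change, every affected `expected` entry in the recording has to be updated.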

4. If you are given a few thousand tests to execute in 2 days, what do you do?

If possible, we automate; otherwise, we execute only the test cases that are mandatory.

5. What is the difference between Product-based Company and Projects-based Company?

A product-based company develops applications for global clients, i.e. there is no
specific client. Requirements are gathered from the market and analyzed with domain experts.

A project-based company develops applications for a specific client. The requirements
are gathered from the client and analyzed with the client.

6. What are cookies? Tell me the advantage and disadvantage of cookies?

Cookies are messages that web servers pass to your web browser when you visit Internet
sites. Your browser stores each message in a small file. When you request another page
from the server, your browser sends the cookie back to the server. These files typically
contain information about your visit to the web page.

Advantages: cookies let a site remember state between requests, such as login sessions,
shopping carts and user preferences. Disadvantages: they raise privacy concerns (tracking
across visits), users can disable or delete them, and they are limited in size.

7. What is Six Sigma? Explain?

A quality discipline that focuses on product and service excellence to create a culture
that demands perfection on target, every time. Six Sigma quality levels produce
99.9997% accuracy, with only 3.4 defects per million opportunities. Six Sigma is
designed to dramatically upgrade a company's performance, improving quality and
productivity.

8. How to generate data set testing or how to do intensive data set testing?

Taking a stab: a dataset in IBM mainframe technology is essentially a file, so this testing
could be ensuring that files are created and referred to correctly. If the question concerns
GDGs (generation data groups), then ensure that the proper versions are being created and
referred to, e.g. previous files are -1, the current is 0 and the next is +1.

A large job will have many datasets, and they have to work correctly, especially if the job
is being re-run or re-started.

9. What kind of metrics would you use to describe the results of a simple load test?

Use metrics that put context around the numbers people often want to see (like
response time), because that context is often more determinative of what's going on
with your application (and its environment) than simply stating that you could process 1,000
transactions every 20 seconds with a steady load of 10 unique visitors. Use distribution-
type metrics (such as percentiles and min/max) even for a simple test.

10. Smoke test and spot check are same or different?

A spot check is really just an easy word for sampling. It's when you check something in
samples. In products, that might be checking a few produced elements on an assembly
line. In people or operations, that might be visiting a few areas of a worksite (say a
warehouse) and seeing what happens there. In an application, this might be checking a
few areas of the application just to see what you can see. The notion of sampling is that
the checks you carry out are, for the most part, random or, at the very least, not entirely
systematic.

Smoke test is done to assure that a bare minimum of functionality is present in order to
guarantee that further testing is worthwhile. What "bare minimum" means depends on the
people doing the testing and the nature of the functionality and/or application. (In non-
software contexts, smoke testing usually means enough testing to prove that something
will not immediately -- or very quickly -- fail and/or fall apart in some catastrophic way.
In electronics, the term is usually literal: see if something starts to produce smoke when
power is applied to it.)

11. How do you conduct boundary analysis for an OK push button?

Boundary analysis for an OK button: first work out the boundary between when it should
work and when it should not. For example, if the requirement were the word OK entered in
a text box, you would test O and OKK, etc. It is no different in this case: assuming it
should work with a mouse, you would click on the edge of the button, in the middle of the
button and just outside it, checking that the active area of the button is correct. You
would also check that if you pressed the button but then moved the mouse away without
releasing it, it does what the requirement says, and so on.
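The "edge, centre, just outside" clicks amount to boundary-value checks against the button's rectangle. A minimal sketch, where the pixel coordinates and the hit-test rule are invented for illustration:

```python
def inside_button(x, y, left=100, top=200, width=80, height=24):
    """Hypothetical hit test: is the click inside the button rectangle?
    (Half-open ranges: the pixel at left+width is already outside.)"""
    return left <= x < left + width and top <= y < top + height

# Boundary points: corners and edges just inside, one pixel outside, centre.
assert inside_button(100, 200)        # top-left corner pixel, inside
assert inside_button(179, 223)        # bottom-right pixel, inside
assert not inside_button(99, 200)     # one pixel left of the edge
assert not inside_button(180, 223)    # one pixel past the right edge
assert inside_button(140, 212)        # centre of the button
```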

12. What exactly is an Automation framework?

A framework is basically a base design for an application that can be re-used across
multiple projects. This usually makes it something that will not function independently;
rather, it is used to support other, larger applications. The .Net Framework, for
instance, provides pre-existing code libraries, which means you are not required
to re-invent the wheel every time you start a new project.

In the case of an automation framework, you might include libraries which handle
button clicks, combo boxes, coordinates to click on, etc. These would all be functions
within your framework that you could then call, passing in certain elements to have them
perform an action against the application. So in the above example, the automation
framework is designed to minimize the time, from start to finish, it takes to produce an
automated test suite.

13. How to test an application (For example Window Service), that has no user interface?

Developers create an API that allows a tester to put in inputs and examine outputs from
the system. Examine the database for values before and after input to the system.

14. What makes Testing complicated?

1. Bad, missing, or changing requirements.
2. Poor estimating.
3. Marketing promising something great to hit the supposed market opportunity window.
...and so on.

15. What is the purpose of preparing a Test Strategy & Test plan when most of the
information regarding the project is available in the Project Plan?

Project plans usually involve documenting how the feature is supposed to work and how
they think it will be used. The test plan will get into much more detail about alternate
paths, negative tests, etc. It may also include things such as how to set up the test
environment, how to insert/clear test data, etc.

16. What does a GOOD requirement include?

Good requirements should include both functional and non-functional aspects, and describe
each aspect clearly - clarity.

17. Tell me about the strategy for testing a search engine.

First of all, the requirements must be defined:

- If it is web-based, does it have to run on every browser or, at the other extreme, can
you select IE version X.Y and tell users that if they run another browser, they do so at
their own risk?

- Are you only testing the functionality, or is performance also important? If so, how
many users are likely to attempt to access the system at any given time? What is the
expected behavior at peak times (for a shopping search system shortly before Christmas:
must the system work correctly and give results within 2 seconds in 99% of all cases,
must it react and display "we apologize, but your request could not be handled", or is it
acceptable that the session is suddenly killed)?

- If the system is already in use, try to reach the support hotline and ask them about
user complaints - that provides excellent hints for tests.

- What resources (manpower, money for test tools...) are available?

- What sort of testing is to be done? For example, you can't do a UAT; you can only set a
rough test plan for a UAT.

- What test data is available, or do you have to define it yourself?

- Are we talking about a search engine that is going to be developed in the future, that
is in the state of being developed, that is released in one month (and management just
thought it might be nice to have some sort of QA), or that was already released and is
currently being tested by the customers?

18. What is page rendering?

Page rendering is the process of interpreting the instructions which describe the intended
page, and displaying it on the output device.

19. What is bug density?

It is the number of defects found per unit of code size, typically expressed as defects
per thousand lines of code (KLOC).
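As a quick sketch of the arithmetic (the defect and line counts here are made-up example figures):

```python
def bug_density(defects_found, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / (lines_of_code / 1000)

# 45 defects found in a 30,000-line module -> 1.5 defects per KLOC
print(bug_density(45, 30_000))  # 1.5
```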

20. In Equivalence Partitioning, what does Equivalence stand for? What does Partitioning
stand for? And what do Classes stand for?

Equivalence means that they have the same effect on your application-under-test ("they
are Equivalent"). Using any of these values, you would expect to see the same results.

Partitioning means to divide the values into groups ("to Partition them").

Classes means groups of equivalent values ("Classes of values").

So Equivalence Class Partitioning means to divide the entire set of test values into
groups, such that any one of the values in each group could be used for your test.
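A minimal sketch of the idea, assuming a hypothetical rule that accepts ages 18 to 65 inclusive (the rule and the sample values are invented for illustration):

```python
def classify_age(age):
    """Hypothetical application rule under test: accept ages 18-65."""
    if age < 18:
        return "rejected: too young"
    if age > 65:
        return "rejected: too old"
    return "accepted"

# Three equivalence classes; any one representative per class suffices.
partitions = {
    "below range": [5, 10, 17],
    "in range":    [18, 40, 65],
    "above range": [66, 80, 120],
}
for name, values in partitions.items():
    results = {classify_age(v) for v in values}
    # all values in a class are equivalent: they produce the same result
    assert len(results) == 1, name
```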

Do your applications have functionalities that are not documented? Or do you mean what
other types of testing you would do? If it is the second option, then I would do usability,
accessibility, concurrency, compatibility, performance, resolution testing, etc. It all
depends on the type of application and the quality expectations.

22. How can you tell which code is more efficient when given two versions with slight
variations of the same functionality?

We can judge by the following checks:

1. Time the functions over a run.
2. Check the amount of data that is stored in memory and that is stored in a database.
3. Check the amount of output each function has to a database - is it all required?
4. Check the tables it is writing to - are they properly indexed? (There is plenty more
to check in databases, but that is a separate area.)
5. Check the timing of each function concurrently and under load.
6. Check the timing as close as possible to the real-world situation.

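Check 1 (timing the functions) can be done with Python's `timeit` module. The two string-building variants below are stand-ins for the "slight variations" in the question; they produce the same result but may differ in speed:

```python
import timeit

def concat_plus(n):
    """Build a string by repeated += concatenation."""
    s = ""
    for _ in range(n):
        s += "x"
    return s

def concat_join(n):
    """Build the same string with str.join."""
    return "".join("x" for _ in range(n))

# Time each functionally equivalent variant over repeated runs.
for fn in (concat_plus, concat_join):
    t = timeit.timeit(lambda: fn(10_000), number=100)
    print(f"{fn.__name__}: {t:.4f}s")
```

Before comparing timings, it is worth asserting that the variants really are equivalent, otherwise the faster one may simply be doing less work.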
23. Difference between testing web applications and desktop applications?

While testing web applications, we need to take into account different browsers,
resolutions, web standards for accessibility, etc., whereas for a desktop application we
do not.

24. What technique would one use for validating a phone number field? Requirement:

The field should accept only 13 characters.
The first 3 characters should be +91, followed by the number.

Using BVA (Boundary Value Analysis), you can derive the values to be tested. For example:

Zero chars
12 chars
13 chars
14 chars

Blank
+ (with 12 chars after)
+9 (with 11 chars after)
+91 (with 9 chars after)
+91 (with 10 chars after)
+91 (with 11 chars after)
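A sketch of a validator for this requirement, exercised with the boundary cases above. It assumes the 10 characters after +91 must be digits, which the requirement does not state explicitly:

```python
import re

# Exactly 13 chars: literal +91 followed by 10 digits (digit rule is assumed).
PHONE_RE = re.compile(r"^\+91\d{10}$")

def is_valid_phone(value):
    return bool(PHONE_RE.match(value))

boundary_cases = {
    "": False,                       # blank / zero chars
    "+919876543210": True,           # exactly 13 chars, on the boundary
    "+91987654321": False,           # 12 chars, one digit short
    "+9198765432109": False,         # 14 chars, one digit over
    "+819876543210": False,          # 13 chars but wrong prefix
}
for value, expected in boundary_cases.items():
    assert is_valid_phone(value) is expected, value
```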

25. What is a Test Procedure & what is the difference between test procedure & test
case?

A test procedure is a detailed document covering the environment setup and execution steps
for given test cases. A test case, by contrast, specifies the inputs, actions and expected
results for a single test condition; the procedure describes how to carry one or more test
cases out.

26. How do you usually test an application?

It is quite a complex process, and it also depends on which stage the application is at;
different development stages need different test strategies. Generally, I would:

1. Obtain and review requirements, specifications and other necessary documents,
including budget and schedule requirements.
2. Determine project-related personnel and their responsibilities, reporting
requirements, required standards and processes, and the project context relative to the
existing quality culture of the organization and business.
3. Identify the application's higher-risk aspects and set priorities.
4. Determine the scope and limitations of tests, and the test approaches and methods,
including unit, integration, functional, system, load and usability tests.
5. Determine test environment requirements, test tool requirements and test input data
requirements.
6. Identify tasks and manpower requirements, and set schedule estimates, timelines and
milestones.
7. Prepare the test plan document and have the needed reviews and approvals.
8. Write test cases; prepare the test environment and test tools; set up test tracking,
logging and archiving processes; obtain test input data; install software releases.
9. Perform tests according to the plan and cases; evaluate and report results; track bugs
and fixes; retest as needed.
10. Maintain and update test plans, test cases, the test environment and test tools
throughout the life cycle.

27. Is there a difference between QA and testing? What is that difference?

Yes. QA is oriented to PREVENTION, while testing is oriented to DETECTION. QA involves
the entire software development process, including monitoring and improving the process,
making sure that any agreed-upon standards and procedures are followed, and ensuring that
problems are found and dealt with. Testing should intentionally attempt to make things go
wrong, to determine whether things happen when they shouldn't or don't happen when they
should.

28. How do you analyze the results of your tests what metrics do you provide, if any?

We can use a metric called the IN, which stands for Instability Number. We assign
different scores to different severities, for example 20 for severity A, 10 for B, 5 for
C and 2 for D. If there are M open bugs at level A, N open bugs at level B, X open bugs
at level C and Y open bugs at level D, then IN = 20*M + 10*N + 5*X + 2*Y. We can compute
the IN daily, draw an IN curve diagram, see whether the bug rate matches the SDLC
standard, and plan the next improvements.
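The weighted sum can be sketched directly, using the severity scores from the example above:

```python
# Severity scores from the answer: 20 for A, 10 for B, 5 for C, 2 for D.
SEVERITY_SCORES = {"A": 20, "B": 10, "C": 5, "D": 2}

def instability_number(open_bugs):
    """Weighted sum of open bug counts by severity score."""
    return sum(SEVERITY_SCORES[sev] * count for sev, count in open_bugs.items())

# e.g. 2 A-level, 3 B-level, 4 C-level and 5 D-level open bugs
print(instability_number({"A": 2, "B": 3, "C": 4, "D": 5}))  # 20*2+10*3+5*4+2*5 = 100
```

Computing this daily and plotting the series gives the IN curve described in the answer.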

30. What kind of testing tools do you prefer and why? (Front end GUI record/playback or
back end services integration)

It depends, and it's not a simple decision for me to choose Robot, WinRunner, SilkTest or
QARun for front-end GUI testing, or Web Application Stress, LoadRunner, SilkPerformer or
QALoad for back-end services integration. It is important to evaluate the tool based on
our system-engineering environment, available budget, schedules, available skills, and
other testing needs. Some tools work better in specific environments, while in another
environment a tool can cause compatibility problems.

31. If the time available for testing is very limited, what would you test in the
application?

a. Execute those test cases which cover critical functionalities - the ones you think
must be tested before any release.

b. Test the complete functional flow once: not a detailed pass, but a cursory check of
all the functionalities.

c. Test those modules which are being affected or integrated with new
change/functionality.

d. Perform Ad-hoc testing if you get some time remaining.


32. What Documents are required for preparing a Test Plan and Test cases?

For Test Plan - PRD, SRS, FRS

For Test Cases - FRS, Use Case

33. What types of testing we can conduct on login page?

Assuming the login page consists of username and password fields, testing starts with
various combinations of these, such as incorrect username & correct password, correct
username & incorrect password, etc., and we can do GUI, security and functional testing
on the same page.
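The credential combinations can be enumerated systematically. The `login` function below is a hypothetical stand-in for the system under test, with made-up credentials:

```python
from itertools import product

def login(username, password):
    """Hypothetical system under test: one valid credential pair."""
    return username == "alice" and password == "s3cret"

# Each entry is (value, is_valid) - valid, invalid and empty variants.
usernames = [("alice", True), ("bob", False), ("", False)]
passwords = [("s3cret", True), ("wrong", False), ("", False)]

# Every combination of valid/invalid username x password.
for (user, u_ok), (pwd, p_ok) in product(usernames, passwords):
    expected = u_ok and p_ok   # login succeeds only when both parts are valid
    assert login(user, pwd) is expected, (user, pwd)
```

In a real suite each combination would become its own test case, so a failure report names the exact pair that broke.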

34. What do you mean by Test Data? How many Types of Test Data are there?

Test data is the data which helps us to execute the test cases with various input details.

Environmental data:

Environmental data tells the system about its technical environment. It includes
communications addresses, directory trees and paths and environmental variables. The
current date and time can be seen as environmental data.

Setup data:

Setup data tells the system about the business rules. It might include a cross reference
between country and delivery cost or method, or methods of debt collection from
different kinds of customers. Typically, setup data causes different functionality to apply
to otherwise similar data. With an effective approach to setup data, business can offer
new intangible products without developing new functionality - as can be seen in the
mobile phone industry, where new billing products are supported and indeed created by
additions to the setup data.

Input data:

Input data is the information input by day-to-day system functions. Accounts, products,
orders, actions, documents can all be input data. For the purposes of testing, it is useful to
split the categorization once more:

1. FIXED INPUT DATA: Fixed input data is available before the start of the test,
and can be seen as part of the test conditions.
2. CONSUMABLE INPUT DATA: Consumable input data forms the test input. It can also
be helpful to qualify data after the system has started to use it.
3. TRANSITIONAL DATA: Transitional data is data that exists only within the
program, during processing of input data. Transitional data is not seen outside the
system (arguably, test handles and instrumentation make it output data), but its
state can be inferred from actions that the system has taken. Typically held in
internal system variables, it is temporary and is lost at the end of processing.

Output data:

Output data is all the data that a system outputs as a result of processing input data and
events. It generally has a correspondence with the input data (cf. Jackson's Structured
Programming methodology), and includes not only files, transmissions, reports and
database updates, but can also include test measurements. A subset of the output data is
generally compared with the expected results at the end of test execution. As such, it does
not directly influence the quality of the tests.

35. What are Entry and Exit criteria in a test plan?

Entry criteria - What should or must be true for the project to continue into the next
phase.

Exit Criteria - What should or must be true of the work in the current phase to be
considered complete. Or

Entry Criteria

A set of decision-making guidelines used to determine whether a system under test is
ready to move into, or enter, a particular phase of testing. Entry criteria tend to
become more rigorous as the test phases progress.

Exit Criteria

A set of decision-making guidelines used to determine whether a system under test is
ready to exit a particular phase of testing. When exit criteria are met, either the
system under test moves on to the next test phase or the test project is considered
complete. Exit criteria tend to become more rigorous as the test phases progress.

36. How to validate quality of an application?

The following points should be kept in mind while validating an application:

1. The number of requirements implemented should match the number of requirements
provided.
2. No "critical" bugs.
3. No more than n "High" bugs.
4. A list of "High" bugs.
5. The number of "non-critical/medium/low" bugs.
6. A list of "non-critical/medium/low" bugs.
7. The number of test cases executed per component.
8. Which testing areas have not been covered.
9. The number of test cases passed.
10. The number of test cases failed.
11. Readiness of testing environments (environment lab, hardware, software).
12. Have an internal QA person assigned to the project as the QA liaison.
13. Have an internal review of the test strategy, test plan and test scripts.
14. Internal QA audit results, obtained by taking a random selection of x% of the test
cases after they have been completed and rerunning them to confirm the results match.
15. Internal QA must be included in all meetings/requirements reviews/demos/etc., as
they will eventually have to take this over.
16. If automation is being used, they will need to work with the same tools as we use
and adhere to our framework.

37. What is ROM in estimation?

ROM stands for rough order of magnitude. A ROM estimate is based on high-level
objectives, provides a bird's-eye view of the project deliverables, and has lots of
wiggle room. Most ROM estimates, depending on the industry, have a range of variance
from -25% all the way to +75%.
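The variance band is simple arithmetic around a point estimate, assuming the commonly quoted -25%/+75% bounds:

```python
def rom_range(point_estimate, low_pct=-0.25, high_pct=0.75):
    """ROM variance band around a point estimate (-25% to +75% by default)."""
    return (point_estimate * (1 + low_pct), point_estimate * (1 + high_pct))

# A 100-day point estimate could really land anywhere from 75 to 175 days.
print(rom_range(100))  # (75.0, 175.0)
```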

38. How do you estimate time for test case preparation?

Go through the requirement documents quickly, identifying most of the P1 and P2
scenarios. Then, depending on the complexity of the test scripts that need to be written
for the scenarios, come up with a rough estimate. Add 2 or 3 days for scenarios that have
not been identified, plus the time for UAT test support. This gives you the estimated
time for testing. For example, if you have identified that to test a functionality you
need to write 10 P1 and 15 P2 scripts, and the average time to write a script is 4 hours,
the rough estimate would be 10*4 + 15*4 = 100 hours, or 12.5 person-days at 8 hours per
day. Adding 3 days as a contingency margin gives an estimate of 15.5 days.

The time required for UAT test support will be different from this and will depend on the
number of defects raised in UAT.

39. Difference between Time and Effort Estimation?

Time estimation: the time (in hours) allotted for each task/function point.

Effort estimation: the total bandwidth each engineer dedicates to a project, or to a
task within the same project.
