A: Each of the following represents a different testing approach: black box testing, white box
testing, unit testing, incremental testing, integration testing, functional testing, system testing,
end-to-end testing, sanity testing, regression testing, acceptance testing, load testing,
performance testing, usability testing, install/uninstall testing, recovery testing, security testing,
compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison
testing, alpha testing, beta testing, and mutation testing.
Q2: What is stress testing?
A: Stress testing is testing that investigates the behavior of software (and hardware) under
extraordinary operating conditions.
For example, when a web server is stress tested, testing aims to find out how many users can be
on-line, at the same time, without crashing the server. Stress testing tests the stability of a given
system or entity.
Stress testing tests something beyond its normal operational capacity, in order to observe any
negative results. For example, a web server is stress tested, using scripts, bots, and various
denial of service tools.
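As a minimal sketch in Python, with a purely hypothetical in-memory "server" standing in for a real web server, stress testing can be pictured as ramping the load past capacity until requests start failing:

```python
# Hypothetical stand-in for a real web server: it serves requests
# correctly up to a fixed capacity, then starts rejecting them.
CAPACITY = 100  # assumed limit, for illustration only

def handle_request(concurrent_users: int) -> bool:
    """Return True if the server copes with this many users, False if it fails."""
    return concurrent_users <= CAPACITY

def stress_test(max_load: int) -> int:
    """Increase the simulated load until requests start failing;
    return the highest load the 'server' survived."""
    highest_ok = 0
    for load in range(1, max_load + 1):
        if not handle_request(load):
            break
        highest_ok = load
    return highest_ok
```

A real stress test generates this load with scripts, bots, or denial of service tools against a live system; the sketch only shows the ramp-past-capacity idea.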
Q3: What is load testing?
A: Load testing simulates the expected usage of a software program, by simulating multiple users
that access the program's services concurrently. Load testing is most useful and most relevant for
multi-user systems and client/server models, including web servers.
For example, the load placed on the system is increased above normal usage patterns, in order
to test the system's response at peak loads.
Q4: What is the difference between stress testing and load
testing?
A: Load testing generally stops short of stress testing.
During stress testing, the load is so great that the expected results are errors, though there is
a gray area in between stress testing and load testing.
Load testing is a blanket term that is used in many different ways across the professional
software testing community.
The term, load testing, is often used synonymously with stress testing, performance testing,
reliability testing, and volume testing.
Q5: What is the difference between performance testing and load
testing?
A: Load testing is a blanket term that is used in many different ways across the professional
software testing community. The term, load testing, is often used synonymously with stress
testing, performance testing, reliability testing, and volume testing. Load testing generally stops
short of stress testing. During stress testing, the load is so great that errors are the expected
results, though there is a gray area in between stress testing and load testing.
Q6: What is the difference between reliability testing and load
testing?
A: Load testing is a blanket term that is used in many different ways across the professional
software testing community. The term, load testing, is often used synonymously with stress
testing, performance testing, reliability testing, and volume testing. Load testing generally stops
short of stress testing. During stress testing, the load is so great that errors are the expected
results, though there is a gray area in between stress testing and load testing.
Q7: What is automated testing?
A: Automated testing is a formally specified and controlled approach to testing, in which tests are
executed by software tools rather than manually.
Then (in what is called the second stage of alpha testing), the software is handed over to software
QA staff for additional testing in an environment that is similar to the intended use.
Q12: What is beta testing?
A: Following alpha testing, "beta versions" of the software are released to a group of people, and
limited public tests are performed, so that further testing can ensure the product has few bugs.
Other times, beta versions are made available to the general public, in order to receive as much
feedback as possible. The goal is to benefit the maximum number of future users.
Q13: What is the difference between alpha and beta testing?
A: Alpha testing is performed by in-house developers and software QA personnel. Beta testing is
performed by a few select prospective customers, or by the general public.
Q14: What is gamma testing?
A: Gamma testing is testing of software that has all the required features, but has not gone through
all the in-house quality checks. Cynics tend to refer to such premature software releases as "gamma testing".
Q15: What is boundary value analysis?
A: Boundary value analysis is a technique for test data selection. A test engineer chooses values
that lie along data extremes. Boundary values include maximum, minimum, just inside
boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a
system works correctly for these extreme or special values, then it will work correctly for all
values in between. An effective way to test code, is to exercise it at its natural boundaries.
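The value selection described above can be sketched in Python; the helper below is hypothetical, and simply enumerates the values a test engineer would choose for an integer field with a known minimum and maximum:

```python
def boundary_values(minimum: int, maximum: int) -> list[int]:
    """Select test inputs at the data extremes: just outside, on, and
    just inside each boundary, plus a typical mid-range value."""
    return [
        minimum - 1,               # just outside the lower boundary (error value)
        minimum,                   # on the lower boundary
        minimum + 1,               # just inside the lower boundary
        (minimum + maximum) // 2,  # typical value
        maximum - 1,               # just inside the upper boundary
        maximum,                   # on the upper boundary
        maximum + 1,               # just outside the upper boundary (error value)
    ]
```

For a field accepting 1 through 10, this yields 0, 1, 2, 5, 9, 10, and 11, exercising the code at its natural boundaries.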
Q16: What is ad hoc testing?
A: Ad hoc testing is a testing approach; it is the least formal testing approach.
Q17: What is clear box testing?
A: Clear box testing is the same as white box testing. It is a testing approach that examines the
application's program structure, and derives test cases from the application's program logic.
Q18: What is glass box testing?
A: Glass box testing is the same as white box testing. It is a testing approach that examines the
application's program structure, and derives test cases from the application's program logic.
Q19: What is open box testing?
A: Open box testing is the same as white box testing. It is a testing approach that examines the
application's program structure, and derives test cases from the application's program logic.
Q20: What is black box testing?
A: Black box testing is a type of testing that considers only externally visible behavior. Black box
testing considers neither the code itself, nor the "inner workings" of the software.
Q21: What is functional testing?
A: Functional testing is the same as black box testing. Black box testing is a type of testing that
considers only externally visible behavior. Black box testing considers neither the code itself, nor
the "inner workings" of the software.
Q22: What is closed box testing?
A: Closed box testing is the same as black box testing. Black box testing is a type of testing that
considers only externally visible behavior. Black box testing considers neither the code itself, nor
the "inner workings" of the software.
Q23: What is bottom-up testing?
A: Bottom-up testing is a technique for integration testing. A test engineer creates and uses test
drivers for components that have not yet been developed, because, with bottom-up testing, low-
level components are tested first. The objective of bottom-up testing is to call low-level
components, for testing purposes.
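The test-driver idea can be sketched in Python. Both the low-level component and the driver below are hypothetical; the driver stands in for the higher-level module that has not yet been developed:

```python
# Low-level component under test (hypothetical): already implemented.
def parse_record(line: str) -> dict:
    """Split a 'key=value' line into a one-entry dictionary."""
    key, _, value = line.partition("=")
    return {key.strip(): value.strip()}

# Test driver: stands in for the not-yet-written higher-level module
# that will eventually call parse_record in production.
def driver() -> bool:
    """Call the low-level component with known inputs; return True if
    every actual result matches the expected result."""
    cases = [
        ("name = Rob", {"name": "Rob"}),
        ("role=tester", {"role": "tester"}),
    ]
    return all(parse_record(line) == expected for line, expected in cases)
```

Once the real higher-level components exist, they replace the driver, and integration testing proceeds upward.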
Q24: What is software quality?
A: The quality of software varies widely from system to system. Some common quality
attributes are stability, usability, reliability, portability, and maintainability. See quality standard ISO
9126 for more information on this subject.
Communication skills and the ability to understand various sides of issues are important. A QA
engineer is successful if people listen to him, if people use his tests, if people think that he's
useful, and if he's happy doing his work.
I would love to see QA departments staffed with experienced software developers who coach
development teams to write better code. But I've never seen it. Instead of coaching, QA engineers
tend to be process people.
A software test case template is, for example, a 6-column table, where column 1 is the "Test case
ID number", column 2 is the "Test case name", column 3 is the "Test objective", column 4 is the
"Test conditions/setup", column 5 is the "Input data requirements/steps", and column 6 is the
"Expected results".
All documents should be written to a certain standard and template. Standards and templates
maintain document uniformity. They also help users learn where information is located, making it
easier for a user to find what they want. Lastly, with standards and templates, information will not
be accidentally omitted from a document.
We also give your company the evidence that the software is correct and operates properly.
We, test engineers, improve problem tracking and reporting, maximize the value of the software,
and the value of the devices that use it.
We, test engineers, assure the successful launch of the product by discovering bugs and design
flaws, before users get discouraged, before shareholders lose their cool and before employees
get bogged down.
We, test engineers, help the work of the software development staff, so the development team
can devote its time to building the product.
We provide documentation required by FDA, FAA, other regulatory agencies, and your
customers.
We, test engineers, save your company money by discovering defects EARLY in the design
process, before failures occur in production, or in the field. We save the reputation of your
company by discovering bugs and design flaws, before bugs and design flaws damage the
reputation of your company.
What we CAN do is to detect lack of quality, and prevent low-quality products from going out the
door. What is the solution? We need to drop the QA label, and tell the developers that they are
responsible for the quality of their own work. The problem is, sometimes, as soon as the
developers learn that there is a test department, they will slack off on their testing. We need to
offer to help with quality assessment, only.
On the negative side, statistical process control works only with processes that are sufficiently
well defined AND unvaried, so that they can be analyzed in terms of statistics. The problem is,
most software development projects are NOT sufficiently well defined and NOT sufficiently
unvaried.
On the positive side, one CAN use statistics. Statistics are excellent tools that project managers
can use. Statistics can be used, for example, to determine when to stop testing, i.e. test cases
completed with certain percentage passed, or when bug rate falls below a certain level. But, if
these are project management tools, why should we label them quality assurance tools?
McCabe metrics: Cyclomatic complexity metric (v(G)), Actual complexity metric (AC), Module
design complexity metric (iv(G)), Essential complexity metric (ev(G)), Pathological complexity
metric (pv(G)), design complexity metric (S0), Integration complexity metric (S1), Object
integration complexity metric (OS1), Global data complexity metric (gdv(G)), Data complexity
metric (DV), Tested data complexity metric (TDV), Data reference metric (DR), Tested data
reference metric (TDR), Maintenance severity metric (maint_severity), Data reference severity
metric (DR_severity), Data complexity severity metric (DV_severity), Global data severity metric
(gdv_severity).
McCabe object oriented software metrics: Encapsulation percent public data (PCTPUB), and
Access to public data (PUBDATA), Polymorphism percent of unoverloaded calls (PCTCALL),
Number of roots (ROOTCNT), Fan-in (FANIN), quality maximum v(G) (MAXV), Maximum ev(G)
(MAXEV), and Hierarchy quality(QUAL).
Other object oriented software metrics: Depth (DEPTH), Lack of cohesion of methods
(LOCM), Number of children (NOC), Response for a class (RFC), Weighted methods per class
(WMC), Halstead software metrics program length, Program volume, Program level and program
difficulty, Intelligent content, Programming effort, Error estimate, and Programming time.
Line count software metrics: Lines of code, Lines of comment, Lines of mixed code and
comments, and Lines left blank.
Q39: What is the "bug life cycle"?
A: Bug life cycles are similar to software development life cycles. At any time during the software
development life cycle errors can be made during the gathering of requirements, requirements
analysis, functional design, internal design, documentation planning, document preparation,
coding, unit testing, test planning, integration, testing, maintenance, updates, re-testing and
phase-out.
The bug life cycle begins when a programmer, software developer, or architect makes a mistake
and creates an unintentional software defect, i.e. a bug, and it ends when the bug is fixed and the
bug is no longer in existence.
What should be done after a bug is found? When a bug is found, it needs to be communicated
and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested.
Integration testing is considered complete, when actual results and expected results are either in
line or differences are explainable / acceptable, based on client input.
Test document templates are often in the form of documents that are divided into sections and
subsections. One example of this template is a 4-section document, where section 1 is the "Test
Objective", section 2 is the "Scope of Testing", section 3 is the "Test Approach", and section 4 is
the "Focus of the Testing Effort".
All documents should be written to a certain standard and template. Standards and templates
maintain document uniformity. They also help users learn where information is located, making it
easier for a user to find what they want. With standards and templates, information will not be
accidentally omitted from a document.
The completed document will help people outside the test group understand the why and how of
product validation. It should be thorough enough to be useful, but not so thorough that no one
outside the test group will be able to read it.
Automated testing tools sometimes do not make testing easier. One problem with automated
testing tools is that if there are continual changes to the product being tested, the recordings have
to be changed so often, that it becomes a very time-consuming task to continuously update the
scripts.
Another problem with such tools is the interpretation of the results (screens, data, logs, etc.) that
can be a time-consuming task.
Q46: How can I learn to use WinRunner, without any outside help?
A: I suggest you read all you can, and that includes reading product description pamphlets,
manuals, books, information on the Internet, and whatever information you can lay your hands
on. Then the next step is actual practice, the gathering of hands-on experience on how to use
WinRunner.
If there is a will, there is a way. You CAN do it, if you put your mind to it. You CAN learn to use
WinRunner, with little or no outside help.
In lieu of a job, it is often a good idea to sign up for courses at nearby educational institutes.
Classes, especially non-degree courses in community colleges, tend to be inexpensive.
To give you some recent examples, some of the software tools on end clients' lists of
requirements include LabVIEW, LoadRunner, Rational tools, and WinRunner.
But, as a general rule of thumb, there are many, many other items on their lists, depending on the
end client, their needs and preferences.
It is worth repeating... the answer to this question can and will change from one day to the next.
What is in demand today will not likely be in demand tomorrow.
SCM is the control, and the recording of, changes that are made to the software and
documentation throughout the software development life cycle (SDLC).
SCM covers the tools and processes used to control, coordinate and track code, requirements,
documentation, problems, change requests, designs, tools, compilers, libraries, patches, and
changes made to them, and to keep track of who makes the changes.
We, test engineers have experience with a full range of CM tools and concepts, and can easily
adapt to an organization's software tool and process needs.
CVS, or "Concurrent Version System", is a popular, open source version control system to keep
track of changes in documents associated with software projects. CVS enables several, often
distant, developers to work together on the same source code.
PVCS is a document version control tool, a competitor of SCCS. SCCS is an original UNIX
program, based on "diff". Diff is a UNIX utility that shows the differences between two text files.
Q54: Which of these roles are the best and most popular?
A: In testing, Tester roles tend to be the most popular. The less popular roles include the roles of
System Administrator, Test/QA Team Lead, and Test/QA Managers.
Depending on the project, one person can, and often does, wear more than one hat. For instance, we
Test Engineers often wear the hat of Technical Analyst, Test Build Manager and Test
Configuration Manager as well.
The complex answer is, "Priority means something is afforded or deserves prior attention; a
precedence established by order of importance (or urgency). Severity is the state or quality of
being severe; severe implies adherence to rigorous standards or high principles and often
suggests harshness; severe is marked by or requires strict adherence to rigorous standards or
high principles, e.g. a severe code of behavior."
"Effective", on the other hand, means producing, or capable of producing, an intended result, or
having a striking effect. For example, "For automated testing, WinRunner is more effective than
an oscilloscope", or "For rapid long-distance transportation, the jet engine is more effective than a
witch's broomstick".
The inputs of verification are checklists, issues lists, walk-throughs and inspection meetings,
reviews and meetings. The input of validation, on the other hand, is the actual testing of an actual
product.
The output of verification is a nearly perfect set of documents, plans, specifications, and
requirements documents. The output of validation, on the other hand, is a nearly perfect, actual
product.
Rob Davis has had experience with a full range of CM tools and concepts. Rob Davis can easily
adapt to your software tool and process needs.
For example, if, out of 168 hours, a system has been busy for 50 hours, idle for 110 hours, and
down for 8 hours, then the busy time is 50 hours, idle time is 110 hours, and up time is (110 + 50
=) 160 hours.
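The arithmetic above can be made explicit with a small check (the function is hypothetical, shown only to spell out the busy/idle/up/down relationship):

```python
def time_breakdown(total_hours: int, busy: int, idle: int, down: int) -> dict:
    """Up time is busy time plus idle time; up time plus down time
    must account for the whole period."""
    up = busy + idle
    assert up + down == total_hours, "hours do not sum to the total period"
    return {"busy": busy, "idle": idle, "down": down, "up": up}
```

For the example in the text, time_breakdown(168, 50, 110, 8) reports 160 hours of up time.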
Typically, what is described are system and component capabilities, limitations, options, permitted
inputs, expected outputs, error messages, and special instructions.
Typically the VDD includes a description, and identification of the software, identification of
changes incorporated into this version, and installation and operating information unique to this
version of the software.
For integration testing, test cases are developed with the express purpose of exercising the
interfaces between the components.
For system testing, on the other hand, the complete system is configured in a controlled
environment, and test cases are developed to simulate real life scenarios in that controlled test
environment.
The purpose of integration testing is to ensure distinct components of the application still work in
accordance with customer requirements.
The purpose of system testing, on the other hand, is to validate an application's accuracy and
completeness in performing the functions as designed, and to test all functions of the system that
are required in real life.
The term 'performance testing' is often used synonymously with stress testing, load testing,
reliability testing, and volume testing.
The subject of the PDR is typically a code block, release, feature, or document. The purpose of
the PDR is to find problems and see what is missing, not to fix anything.
The result of the meeting is documented in a written report. Attendees should prepare for PDRs
by reading through documents, before the meeting starts; most problems are found during this
preparation.
Why are PDRs so useful? Because PDRs are cost-effective methods of ensuring quality, because
bug prevention is more cost effective than bug detection.
Q98: How do you test the password field?
A: To test the password field, we do boundary value testing.
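As a sketch, assume a purely hypothetical rule that passwords must be 8 to 16 characters long (the limits are illustrative, not from the original answer); boundary value testing then exercises lengths just outside, on, and just inside each limit:

```python
# Hypothetical password rule, assumed for illustration: length 8-16 characters.
MIN_LEN, MAX_LEN = 8, 16

def is_valid_password(pw: str) -> bool:
    """Accept the password only if its length is within the assumed limits."""
    return MIN_LEN <= len(pw) <= MAX_LEN

def boundary_cases() -> dict[int, bool]:
    """Try lengths just outside, on, and just inside each boundary;
    return a mapping of length to validation result."""
    lengths = [MIN_LEN - 1, MIN_LEN, MIN_LEN + 1, MAX_LEN - 1, MAX_LEN, MAX_LEN + 1]
    return {n: is_valid_password("x" * n) for n in lengths}
```

Lengths 7 and 17 should be rejected, while 8, 9, 15, and 16 should be accepted; a field that fails any of these boundary cases has a defect.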
A baseline set of data and scripts is maintained and executed, to verify that changes introduced
during the release have not "undone" any previous code.
Expected results from the baseline are compared to results of the software under test. All
discrepancies are highlighted and accounted for, before testing proceeds to the next level.
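The baseline comparison can be sketched as follows (test-case IDs and result values are hypothetical, chosen only to illustrate the idea):

```python
def regression_diff(baseline: dict, actual: dict) -> dict:
    """Compare baseline expected results with the results of the software
    under test; return only the discrepancies, keyed by test case ID, as
    (expected, actual) pairs."""
    return {
        case: (expected, actual.get(case))
        for case, expected in baseline.items()
        if actual.get(case) != expected
    }
```

An empty result means every test case matched the baseline and testing may proceed to the next level; any entry returned is a discrepancy to be accounted for first.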
Q103: What types of white box testing can you tell me about?
A: White box testing is a testing approach that examines the application's program structure, and
derives test cases from the application's program logic.
Clear box testing is a white box type of testing. Glass box testing is also a white box type of
testing. Open box testing is also a white box type of testing.
Q104: What black box testing types can you tell me about?
A: Black box testing is functional testing, not based on any knowledge of internal software design
or code.
Black box testing is based on requirements and functionality. Functional testing is also a black-
box type of testing geared to functional requirements of an application.
System testing is also a black box type of testing. Acceptance testing is also a black box type of
testing. Closed box testing is also a black box type of testing. Integration testing is also a black
box type of testing.
Conversely, if the initial testing approach was automated testing, then the regression testing is
normally performed by automated testing.
Software QA/testing is a piece of cake, if project schedules are realistic, if adequate time is
allowed for planning, design, testing, bug fixing, re-testing, changes, and documentation.
Software QA/testing is easy, if new features are avoided, and if one sticks to initial requirements
as much as possible.
Other terms, e.g. software defect and software failure, are more specific.
While there are many who believe the word 'bug' is a reference to insects that caused
malfunctions in early electromechanical computers (in the 1950s and 1960s), the truth is that the
word 'bug' has been part of engineering jargon for well over 100 years. Thomas Edison, the great
inventor, wrote the following
in 1878: "It has been just so in all of my inventions. The first step is an intuition, and comes with a
burst, then difficulties arise—this thing gives out and [it is] then that "Bugs" — as such little faults
and difficulties are called — show themselves and months of intense watching, study and labor
are requisite before commercial success or failure is certainly reached."
Rob Davis believes that using this methodology is important in the development and ongoing
maintenance of his customers' applications.
The test team analyzes the requirements, writes the test strategy and reviews the plan with the
project team.
The test plan may include test cases, conditions, the test environment, and a list of related tasks,
pass/fail criteria and risk assessment.
A description of the required hardware and software components, including test tools. This
information comes from the test environment, including test tool data.
A description of roles and responsibilities of the resources required for the test and schedule
constraints. This information comes from man-hours and schedules.
Functional and technical requirements of the application. This information comes from
requirements, change request, technical, and functional design documents.
If there is a will, there is a way! You CAN do it, if you put your mind to it! You CAN learn to use
WinRunner, and many other automated testing tools, with little or no outside help. Click on a link!
We call them "monkeys" because it is widely believed that, if we allow six monkeys to pound on six
typewriters at random, for a million years, they will recreate all the works of Isaac Asimov.
"Smart monkeys" are valuable for load and stress testing, and will find a significant number of
bugs, but they're also very expensive to develop.
"Dumb monkeys", on the other hand, are inexpensive to develop, are able to do some basic
testing, but they will find few bugs. However, the bugs "dumb monkeys" do find will be hangs and
crashes, i.e. the bugs you least want to have in your software product.
"Monkey testing" can be valuable, but it should not be your only testing approach.
Stochastic testing is black box testing, random testing, performed by automated testing tools.
Stochastic testing is a series of random tests over time.
The software under test typically passes the individual tests, but our goal is to see if it can pass a
large series of the individual tests.
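A minimal stochastic-testing sketch in Python: the property under test is hypothetical (reversing a string twice should return the original), and a seeded random generator produces the long series of individual tests:

```python
import random

def reverse_twice(s: str) -> str:
    """Property under test (hypothetical example): reversing a string
    twice should always return the original string."""
    return s[::-1][::-1]

def stochastic_test(runs: int, seed: int = 0) -> bool:
    """Run a long series of random tests; pass only if every single
    individual test passes."""
    rng = random.Random(seed)  # seeded, so failures are reproducible
    for _ in range(runs):
        length = rng.randint(0, 50)
        s = "".join(chr(rng.randint(32, 126)) for _ in range(length))
        if reverse_twice(s) != s:
            return False
    return True
```

Seeding the generator is the important design choice: a random test that cannot be replayed is of little use when it does find a failure.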
When we create a set of mutants, each mutant differs from the original software by one mutation,
i.e. one single syntax change made to one of its program statements; in other words, each mutant
contains only one single fault.
When we apply test cases to the original software and to the mutant software, we evaluate if our
test case is adequate.
Our test case is inadequate, if both the original software and all mutant software generate the
same output.
Our test case is adequate if it detects faults, or if at least one mutant generates a different output
than the original software does for our test case.
Your end client requires a PDR, because they work on a product, and want to come up with the
very best possible design and documentation.
Your end client requires you to have a PDR, because when you organize a PDR, you invite and
assemble the end client's best experts and encourage them to voice their concerns as to what
should or should not go into the design and documentation, and why.
When you're a developer, designer, author, or writer, it's also to your advantage to come up with
the best possible design and documentation.
Therefore you want to embrace the idea of the PDR, because holding a PDR gives you a
significant opportunity to invite and assemble the end client's best experts and make them work
for you for one hour, for your own benefit.
To come up with the best possible design and documentation, you want to encourage your end
client's experts to speak up and voice their concerns as to what should or should not go into your
design and documentation, and why.
Remember, PDRs are not about you, but about design and documentation. Please don't be
negative; please do not assume your company is finding fault with your work, or distrusting you in
any way. There is a 90+ per cent probability your company wants you, likes you, and trusts you,
because you're a specialist, and because your company hired you after a long and careful
selection process.
Your company requires a PDR, because PDRs are useful and constructive. Just about everyone -
even corporate chief executive officers (CEOs) - attends PDRs from time to time. When a
corporate CEO attends a PDR, he has to listen for "feedback" from shareholders. When a CEO
attends a PDR, the meeting is called the "annual shareholders' meeting".
Number 2: PDRs do produce results. With the help of your meeting attendees, PDRs help you
produce better designs and better documents than the ones you could come up with, without the
help of your meeting attendees.
Number 4: It's technical expertise that counts the most, but many times you can influence your
group just as much, or even more so, if you're dominant or have good acting skills.
Number 5: PDRs are easy, because, even at the best and biggest companies, you can dominate
the meeting by being either very negative, or very bright and wise.
Number 6: It is easy to deliver gentle suggestions and constructive criticism. The brightest and
wisest meeting attendees are usually gentle on you; they deliver gentle suggestions that are
constructive, not destructive.
Number 7: You get many, many chances to express your ideas, every time a meeting attendee
asks you to justify why you wrote what you wrote.
Number 8: PDRs are effective, because there is no need to wait for anything or anyone; because
the attendees make decisions quickly (as to what errors are in your document). There is no
confusion either, because all the group's recommendations are clearly written down for you by the
PDR's facilitator.
Number 9: Your work goes faster, because the group itself is an independent decision making
authority. Your work gets done faster, because the group's decisions are subject to neither
oversight nor supervision.
Number 10: At PDRs, your meeting attendees are the very best experts anyone can find, and
they work for you, for FREE!
1. Verify that the attendees have inspected all the relevant documents and reports, and
2. Verify that all suggestions and recommendations for each issue have been recorded, and
3. Verify that all relevant facts of the meeting have been recorded.
5. "What is the outcome of this peer review?" At the end of the peer review, the facilitator asks the
attendees of the peer review to make a decision as to the outcome of the peer review. I.e., "What
is our consensus?" "Are we accepting the design (or document or code)?"
In my experience, the most useful peer reviews are the ones where you're the author of
something. Why? Because when you're the author, then it's you who decides what to do and how,
and it's you who receives all the free help.
In my experience, in the long run, the inputs of your additional reviewers and additional
attendees can be the most valuable to you and your company. But, in your own best interest, in
order to expedite things, before every peer review it is a good idea to get together with the
additional reviewer and additional attendee, and talk with them about issues, because if you don't,
they will be the ones with the largest number of questions and usually negative feedback.
When a PDR is done right, it is useful, beneficial, pleasant, and friendly. Generally speaking, the
fewer people show up at the PDR, the easier it tends to be, and the earlier it can be adjourned.
When you're an author, developer, or task lead, many times you can relax, because during your
peer review your facilitator and test lead are unlikely to ask you any tough questions. Why?
Because, the facilitator is too busy taking notes, and the test lead is kind of bored (because he
had already asked his toughest questions before the PDR).
When you're a facilitator, every PDR tends to be a pleasant experience. In my experience, one of
the easiest review meetings are PDRs where you're the facilitator (whose only job is to call the
shots and make notes).
Peer design reviews can be classified according to the 'subject' of the review. I.e., "Is this a
document review, design review, or code review?"
Peer design reviews can be classified according to the 'role' you play at the meeting. I.e., "Are
you the task lead, test lead, facilitator, moderator, or additional reviewer?"
Peer design reviews can be classified according to the 'job title' of attendees. I.e., "Is this a
meeting of peers, managers, systems engineers, or system integration testers?"
Peer design reviews can be classified according to what is being reviewed at the meeting. I.e.,
"Are we reviewing the work of a developer, tester, engineer, or technical document writer?"
Peer design reviews can be classified according to the 'objective' of the review. I.e., "Is this
document for the file cabinets of our company, or that of the government (e.g. the FAA or FDA)?"
PDRs of government documents tend to attract the attention of managers, and the meeting
quickly becomes a meeting of managers.
Q132: How can I shift my focus and area of work from QC to QA?
A: Number one, focus on your strengths, skills, and abilities! Realize that there are MANY
similarities between Quality Control and Quality Assurance! Realize that you have MANY
transferable skills!
Number two, make a plan! Develop a belief that getting a job in QA is easy! HR professionals
cannot tell the difference between quality control and quality assurance! HR professionals tend to
respond to keywords (i.e. QC and QA), without knowing the exact meaning of those keywords!
Number three, make it a reality! Invest your time! Get some hands-on experience! Do some QA
work! Do any QA work, even if, for a few months, you get paid a little less than usual! Your goals,
beliefs, enthusiasm, and action will make a huge difference in your life!
Number four, I suggest you read all you can, and that includes reading product pamphlets,
manuals, books, information on the Internet, and whatever information you can lay your hands
on! If there is a will, there is a way! You CAN do it, if you put your mind to it! You CAN learn to do
QA work, with little or no outside help!
Difference number one: A build refers to software that is still in testing; a release refers to
software that is usually no longer in testing.
Difference number two: Builds occur more frequently; releases occur less frequently.
Difference number three: Versions are based on builds, and not vice versa. Builds, or usually a
series of builds, are generated first, as often as one build every morning, depending on the
company, and then every release is based on a build, or several builds, i.e. the accumulated code
of several builds.
There are not many Level 5 companies, and few need to be. Within the United States, fewer
than 8% of software companies are rated CMM Level 4 or higher. The U.S. government requires
all companies with federal government contracts to maintain a minimum of a CMM Level 3
assessment.
CMM assessments take two weeks. They're conducted by a nine-member team led by an SEI-certified
lead assessor.
CMM level 1 is called "initial". The software process is at CMM level 1, if it is an ad hoc process.
At CMM level 1, few processes are defined, and success, in general, depends on individual effort
and heroism.
CMM level 2 is called "repeatable". The software process is at CMM level 2, if the subject
company has some basic project management processes, in order to track cost, schedule, and
functionality. Software processes are at CMM level 2, if necessary processes are in place, in
order to repeat earlier successes on projects with similar applications. Software processes are at
CMM level 2, if there are requirements management, project planning, project tracking,
subcontract management, QA, and configuration management.
CMM level 3 is called "defined". The software process is at CMM level 3, if the software process
is documented, standardized, and integrated into a standard software process for the subject
company. The software process is at CMM level 3, if all projects use approved, tailored versions
of the company's standard software process for developing and maintaining software. Software
processes are at CMM level 3, if there are process definition, training programs, process focus,
integrated software management, software product engineering, intergroup coordination, and
peer reviews.
CMM level 4 is called "managed". The software process is at CMM level 4, if the subject company
collects detailed data on the software process and product quality, and if both the software
process and the software products are quantitatively understood and controlled. Software
processes are at CMM level 4, if there are software quality management (SQM) and quantitative
process management.
Generally speaking, we, software test engineers, discover BOTH bugs and defects, before bugs
and defects damage the reputation of our company. We, QA engineers, use the software much
like real users would, to find BOTH bugs and defects, to find ways to replicate BOTH bugs and
defects, to submit bug reports to the developers, and to provide feedback to the developers, i.e.
tell them if they've achieved the desired level of quality. Therefore, we, software QA engineers, do
not differentiate between bugs and defects. In our bug reports, we include BOTH bugs and
defects, and any differences between them are minor.
Difference number one: In bug reports, the defects are usually easier to describe.
Difference number two: In bug reports, it is usually easier to write the descriptions on how to
replicate the defects. Defects tend to require brief explanations only.
Q141: What is grey box testing?
A: Grey box testing is a software testing technique that uses a combination of black box testing
and white box testing. Grey box testing is not black box testing, because the tester does know
some of the internal workings of the software under test.
In grey box testing, the tester applies a limited number of test cases to the internal workings of
the software under test. In the remaining part of the grey box testing, one takes a black box
approach in applying inputs to the software under test and observing the outputs.
Grey box testing is a powerful idea. The concept is simple: if one knows something about how the
product works on the inside, one can test it better, even from the outside.
Grey box testing is not to be confused with white box testing; i.e. a testing approach that attempts
to cover the internals of the product in detail. Grey box testing is a test strategy based partly on
internals.
The testing approach is known as grey box testing when one has some knowledge, but not full
knowledge, of the internals of the product one is testing.
In grey box testing, just as in black box testing, you test from the outside of a product, but you
make better-informed testing choices, because you know how the underlying software components
operate and interact.
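As a minimal sketch of the grey box idea (all names below are invented for illustration, not taken from any real product): suppose design documents reveal that a lookup function caches its results internally. A pure black box tester would not know to exercise the cache; a grey box tester designs cases for both the cold and warm paths.

```python
# Hypothetical function under test (names are illustrative only).
# Design docs tell us that results are cached internally.
_cache = {}

def lookup(key):
    if key not in _cache:
        _cache[key] = key.upper()  # stand-in for an expensive computation
    return _cache[key]

# Grey box test cases: knowing a cache exists, we exercise both the
# cold path (first call) and the warm path (repeat call).
assert lookup("a") == "A"      # cold path populates the cache
assert lookup("a") == "A"      # warm path must return the same value
_cache["b"] = "STALE"
assert lookup("b") == "STALE"  # proves the cache, not recomputation, is used
```

The tests are still applied from the outside, through the public `lookup` call, but the choice of cases is informed by internal knowledge.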
The two terms, version and release, are similar (i.e. mean pretty much the same thing), but there
are minor differences between them.
Version means a VARIATION of an earlier, or original, type; for example, "I've downloaded the
latest version of the software from the Internet. The latest version number is
3.3."
Release, on the other hand, is the ACT OR INSTANCE of issuing something for publication, use,
or distribution. Release is something thus released. For example, "A new release of a software
program."
Data integrity is the completeness, soundness, and wholeness of the data that also complies with
the intention of the creators of the data.
In databases, important data -- including customer information, order database, and pricing tables
-- may be stored.
Testing should be performed on a regular basis, because important data can and will change over
time.
1. Verify that you can create, modify, and delete any data in tables.
2. Verify that sets of radio buttons represent fixed sets of values.
4. Verify that, when a particular set of data is saved to the database, each value gets saved fully,
and the truncation of strings and rounding of numeric values do not occur.
5. Verify that the default values are saved in the database, if the user input is not specified.
6. Verify compatibility with old data, old hardware, versions of operating systems, and interfaces
with other software.
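Checks 4 and 5 above can be sketched as a save-and-read-back test. This is only an illustration, assuming an in-memory SQLite database; the table and column names are invented.

```python
import sqlite3

# Data integrity sketch: save a row, read it back, and verify that
# strings were not truncated and that defaults were applied.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
             " note TEXT, price REAL DEFAULT 0.0)")
conn.execute("INSERT INTO orders (note) VALUES (?)", ("x" * 500,))
note, price = conn.execute("SELECT note, price FROM orders").fetchone()

assert len(note) == 500   # value saved fully, no truncation (check 4)
assert price == 0.0       # default saved when input not specified (check 5)
```

Because important data can and will change over time, a check like this belongs in a suite that runs on a regular basis, not just once.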
Q145: What is data validity?
A: Data validity is the correctness and reasonableness of data. Reasonableness of data means,
for example, account numbers falling within a range, numeric data being all digits, dates having a
valid month, day, and year, and proper names being spelled correctly.
Data validity errors are probably the most common, and the most difficult to detect, data-related
errors.
Data validity errors are usually caused by incorrect data entries, when a large volume of data is
entered in a short period of time.
For example, 12/25/2005 is entered as 13/25/2005 by mistake. This date is therefore invalid.
How can you reduce data validity errors? Use simple field validation rules.
Technique 1: If the date field in a database uses the MM/DD/YYYY format, then use a program
with the following two data validation rules: "MM should not exceed 12, and DD should not
exceed 31".
Technique 2: If the original figures do not seem to match the ones in the database, then use a
program to validate data fields. Compare the sum of the numbers in the database data field to the
original sum of numbers from the source. If there is a difference between the figures, it is an
indication of an error in at least one data element.
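The two techniques above can be sketched as simple validation functions. This is a minimal illustration; the function names are invented.

```python
import re

def valid_date(s):
    """Technique 1: field validation rule for MM/DD/YYYY
    (MM must not exceed 12, DD must not exceed 31)."""
    m = re.fullmatch(r"(\d{2})/(\d{2})/(\d{4})", s)
    if not m:
        return False
    mm, dd = int(m.group(1)), int(m.group(2))
    return 1 <= mm <= 12 and 1 <= dd <= 31

def sums_match(db_values, source_total):
    """Technique 2: compare the sum of a database column to the original
    sum from the source; a mismatch flags at least one bad element."""
    return sum(db_values) == source_total

assert valid_date("12/25/2005")
assert not valid_date("13/25/2005")   # the mistyped month from the example
assert sums_match([10, 20, 30], 60)
assert not sums_match([10, 20, 35], 60)
```

Note that technique 2 only signals that an error exists somewhere in the field; finding which element is wrong still requires comparing against the source.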
Number three: Get additional education, on the job, at the bank or financial institution where you
work. Free education is often provided by employers, while you are paid to do the job of a
Software Test Engineer.
On the job, oftentimes you can use some of the world's best software tools, including the Rational
Toolset, and there are many others. If your immediate manager is reluctant to train you on the job,
then quietly find another banker, i.e. another employer, whose needs and
preferences are similar to yours.
TestDirector's Requirements Manager links test cases to requirements, ensures traceability, and
calculates what percentage of the requirements are covered by tests, how many of these tests
have been run, and how many have passed or failed.
As to planning, test plans can be created, or imported, for both manual and automated tests. The
test plans can then be reused, shared, and preserved. As to running tests, the TestDirector's Test
Lab Manager allows you to schedule tests to run unattended, even overnight.
The TestDirector's Defect Manager supports the entire bug life cycle, from initial problem
detection through fixing the defect, and verifying the fix. Additionally, the TestDirector can create
customizable graphs and reports, including test execution reports and release status
assessments.
Structural testing is white box testing, not black box testing, since black boxes are considered
opaque and do not permit visibility into the code.
Difference number 1: Static testing is about prevention, dynamic testing is about cure.
Difference number 3: Static testing is many times more cost-effective than dynamic testing.
Difference number 6: Static testing gives you comprehensive diagnostics for your code.
Difference number 7: Static testing achieves 100% statement coverage in a relatively short time,
while dynamic testing often achieves less than 50% statement coverage, because dynamic
testing finds bugs only in parts of the code that are actually executed.
Difference number 8: Dynamic testing usually takes longer than static testing. Dynamic testing
may involve running several test cases, each of which may take longer than compilation.
Difference number 9: Dynamic testing finds fewer bugs than static testing.
Difference number 10: Static testing can be done before compilation, while dynamic testing can
take place only after compilation and linking.
Difference number 11: Static testing can find all of the following that dynamic testing cannot find:
syntax errors, code that is hard to maintain, code that is hard to test, code that does not conform
to coding standards, and ANSI violations.
Dynamic testing does detect some errors that static testing misses. To eliminate as many errors
as possible, both static and dynamic testing should be used.
All this static testing (i.e. testing for syntax errors, testing for code that is hard to maintain, testing
for code that is hard to test, testing for code that does not conform to coding standards, and
testing for ANSI violations) takes place before compilation. Static testing takes roughly as long as
compilation and checks every statement you have written.
Since static testing is faster and achieves 100% coverage, the unit cost of detecting these bugs
by static testing is many times lower than that of dynamic testing.
If you use neither static nor dynamic test tools, the static tools offer greater marginal benefits.
If urgent deadlines loom on the horizon, the use of dynamic testing tools can be omitted, but tool-
supported static testing should never be omitted.
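As a tiny illustration of the static/dynamic distinction: in Python, `compile()` parses source code without running it, so a syntax error surfaces before any execution, and therefore before any dynamic test could run. The snippets being checked here are invented examples.

```python
# A minimal static check: compile() parses the source without executing
# it, catching syntax errors before any dynamic testing is possible.
good = "def f(x):\n    return x + 1\n"
bad = "def f(x):\n    return x +\n"   # incomplete expression

def passes_static_check(source):
    try:
        compile(source, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

assert passes_static_check(good)
assert not passes_static_check(bad)
```

Real static analysis tools go far beyond syntax, flagging hard-to-maintain code and standards violations, but the principle is the same: every statement is examined, and nothing has to be executed.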
You also have to be at least 18 years of age, trustworthy, with no criminal record. You must also
hold a minimum of a bachelor's degree in engineering, from an established, recognized, and
approved university.
Usually you have to provide two references, from licensed and professional engineers, and work
for a few years as an engineer, as an "engineer in training", under the supervision of a registered
and licensed professional engineer. You have to pass a test of competence in your engineering
discipline as well as in professional ethics.
For many candidates, the biggest two hurdles of getting a license seem to be the lack of a
university degree in engineering, or the lack of an acceptable, verifiable work experience, under
the supervision of a licensed, professional engineer.
Q158: I don't have any experience. How can I get my first
experience?
A: I see MANY possibilities.
Possibility number 2: Know someone, and you WILL get your first job!
Possibility number 3: Sell yourself well! If you are confident, you WILL get your first job! Make
yourself shine, and the job will fall in your lap!
Possibility number 4: Speak to a manager, make a good impression, and you WILL get your first
job!
Possibility number 5: Attend a school of good reputation. If your prospective boss is familiar with
the school, you WILL get your first job!
Possibility number 6: Attend a school that offers job placement, with a real record of job
placement assistance. Then do what they say, and then you WILL get your first
job!
Possibility number 7: Believe in yourself, be confident, and you WILL get your first job!
Possibility number 8: Ask employment agencies. They usually keep in touch with various
companies. Sometimes they're friends with managers. Other times they're unusually well-
informed. They will help you to get your first job!
Possibility number 10: Get your first job by training yourself. Training yourself on a PC (or Mac),
with the proper software, can be useful, if you take the time to use it to its maximum potential!
In other words, top down design starts the design process with the main module or system, then
progresses down to lower level modules and subsystems.
To put it differently, top down design looks at the whole system, and then explodes it into
subsystems, or smaller parts. A systems engineer or systems analyst determines what the top
level objectives are, and how they can be met. He then divides the system into subsystems, i.e.
breaks the whole system into logical, manageable-size modules, and deals with them individually.
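A minimal sketch of top down design in code (the function names are invented for illustration): the top-level module is written first, calling lower-level subsystems that start life as stubs and are refined individually later.

```python
# Top-level module, designed first: it states the overall objective
# and delegates to subsystems.
def process_order(order):
    validated = validate(order)        # subsystem 1
    priced = apply_pricing(validated)  # subsystem 2
    return ship(priced)                # subsystem 3

# Lower-level modules, stubbed at first and refined later, one by one.
def validate(order):
    return order

def apply_pricing(order):
    order["total"] = order.get("qty", 1) * 10.0
    return order

def ship(order):
    return {"shipped": True, **order}

assert process_order({"qty": 3})["total"] == 30.0
```

The top-level flow is fixed before any subsystem is fully built, which is exactly the "whole system first, then logical, manageable-size modules" progression described above.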
Technical skills mean skills in IT, quantitative analysis, data modeling, and technical writing.
Business skills mean skills in strategy and business writing. Personal skills mean personal
communication, leadership, teamwork, and problem-solving skills.
We, employees, on the other hand, want more autonomy, a better lifestyle, a more
employee-oriented company culture, and a better geographic location. We will continue to
enjoy relatively good job security and, depending on the business cycle, many job opportunities
as well.
We realize our skills are important, and we have strong incentives to upgrade them, although we
sometimes lack the information on how to do so. Educational institutions are increasingly more
likely to ensure that we are exposed to real-life situations and problems, but high turnover rates
and a rapid pace of change in the IT industry will often act as strong disincentives for employers
to invest in our skills, especially non-company specific skills. Employers will continue to establish
closer links with educational institutions, both through in-house education programs and human
resources.
The share of IT workers with IT degrees will keep increasing. Certification will continue to keep
helping employers to quickly identify us with the latest skills. During boom times, smaller and
younger companies will continue to be the most attractive to us, especially those companies that
offer stock options and performance bonuses in order to retain and attract those of us who are
most skilled.
Verify that the site is customer-friendly. Verify that the choices of colors are attractive. Verify that
the choices of fonts are attractive. Verify that the site's audio is customer friendly. Verify that the
site's video is attractive. Verify that the choice of graphics is attractive. Verify that every page of
the site is displayed properly on all the popular browsers. Verify the authenticity of facts.
Ensure the site provides reliable and consistent information. Test the site for appearance. Test the
site for grammatical and spelling errors. Test the site for visual appeal, choice of browsers,
consistency of font size, download time, broken links, missing links, incorrect links, and browser
compatibility. Test each toolbar, each menu item, every window, every field prompt, every pop-up
text, and every error message.
Test every page of the site for left and right justifications, every shortcut key, each control, each
push button, every radio button, and each item on every drop-down menu. Test each list box, and
each help menu item. Also check whether the command buttons are grayed out when they're not in use.
When the design is backward compatible, the signals or data that had to be changed do not
break the existing code.
For instance, our mythical web designer decides that the fun of using Javascript and Flash is
more important than backward compatible design, or, he decides that he doesn't have the
resources to maintain multiple styles of backward compatible web design.
This decision of his will inconvenience some users, because some of the earlier versions of
Internet Explorer and Netscape will not display his web pages properly, as there are some serious
improvements in the newer versions of Internet Explorer and Netscape that make the older
versions of these browsers incompatible with, for example, DHTML.
This is when we say, "This design doesn't continue to work with earlier versions of browser
software. Therefore our mythical designer's web design is not backward compatible".
On the other hand, if the same mythical web designer decides that backward compatibility is
more important than fun, or, if he decides that he has the resources to maintain multiple styles of
backward compatible code, then no user will be inconvenienced.
No one will be inconvenienced, even when Microsoft and Netscape make some serious
improvements in their web browsers.
This is when we can say, "Our mythical web designer's design is backward compatible."
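The same idea applies to code interfaces, not just browsers. As a sketch (the function and parameter names are invented): a change is backward compatible when call sites written against the old interface keep working unchanged.

```python
# Backward compatible API change: a new parameter is added with a
# default value, so old call sites keep working unchanged.
def connect(host, port=80, timeout=None):  # `timeout` added in a later release
    return {"host": host, "port": port, "timeout": timeout}

# An old call site, written before `timeout` existed, still works.
old_style = connect("example.com")
assert old_style["port"] == 80 and old_style["timeout"] is None

# New call sites can opt in to the new behavior.
new_style = connect("example.com", timeout=30)
assert new_style["timeout"] == 30
```

Had `timeout` been added as a required positional argument instead, every existing caller would break, and the change would not be backward compatible.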
Top down design is most often used in designing brand new systems, while bottom up design is
sometimes used when one is reverse engineering a design; i.e. when one is trying to figure out
what somebody else designed in an existing system.
Bottom up design begins the design with the lowest level modules or subsystems, and
progresses upward to the main program, module, or subsystem.
With bottom up design, a structure chart is necessary to determine the order of execution, and
the development of drivers is necessary to complete the bottom up approach.
Top down design, on the other hand, begins the design with the main or top-level module, and
progresses downward to the lowest level modules or subsystems.
Real life sometimes is a combination of top down design and bottom up design.
For instance, data modeling sessions tend to be iterative, bouncing back and forth between top
down and bottom up modes, as the need arises.
Q165: What is the definition of bottom up design?
A: Bottom up design begins the design at the lowest level modules or subsystems, and
progresses upward to the design of the main program, main module, or main subsystem.
To determine the order of execution, a structure chart is needed, and, to complete the bottom up
design, the development of drivers is needed.
In software design - assuming that the data you start with is a pretty good model of what you're
trying to do - bottom up design generally starts with the known data (e.g. customer lists, order
forms), then the data is broken into chunks (i.e. entities) appropriate for planning a relational
database.
This process reveals what relationships the entities have, and what the entities' attributes are.
In software design, bottom up design doesn't only mean writing the program in a different order,
but there is more to it. When you design bottom up, you often end up with a different program.
Instead of a single, monolithic program, you get a larger language, with more abstract operators,
and a smaller program written in it.
Once you abstract out the parts which are merely utilities, what is left is a much shorter program.
The higher you build up the language, the less distance you will have to travel down to it, from the
top. Bottom up design makes it easy to reuse code blocks.
For example, many of the utilities you write for one program are also useful for programs you
have to write later. Bottom up design also makes programs easier to read.
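A small sketch of this bottom up effect (the function names are invented): the reusable utilities are written first, and the "main program" ends up as a short composition written in the richer language they provide.

```python
# Utilities (the lowest level), written and testable first, and
# reusable by later programs.
def words(text):
    return text.lower().split()

def count(items):
    freq = {}
    for item in items:
        freq[item] = freq.get(item, 0) + 1
    return freq

# The top-level program, written last, is a short composition of
# the utilities rather than a single monolithic routine.
def word_frequencies(text):
    return count(words(text))

assert word_frequencies("The cat and the hat") == {
    "the": 2, "cat": 1, "and": 1, "hat": 1}
```

`words` and `count` know nothing about word frequencies, so either can be reused by the next program that needs tokenizing or tallying.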
With many projects, smoke testing is carried out in addition to formal testing. If smoke testing is
carried out by a skilled tester, it can often find problems that are not caught during regular testing.
Sometimes, if testing occurs very early or very late in the software development cycle, this can be
the only kind of testing that can be performed.
Smoke tests are, by definition, not exhaustive, but, over time, you can increase your coverage of
smoke testing.
A common practice at Microsoft, and some other software companies, is the daily build and
smoke test process. This means, every file is compiled, linked, and combined into an executable
file every single day, and then the software is smoke tested.
Smoke testing minimizes integration risk, reduces the risk of low quality, supports easier defect
diagnosis, and improves morale.
Smoke testing does not have to be exhaustive, but should expose any major problems. Smoke
testing should be thorough enough that, if it passes, the tester can assume the product is stable
enough to be tested more thoroughly.
Without smoke testing, the daily build is just a time wasting exercise. Smoke testing is the sentry
that guards against any errors in development and future problems during integration.
At first, smoke testing might be the testing of something that is easy to test. Then, as the system
grows, smoke testing should expand and grow, from a few seconds to 30 minutes or more.
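A smoke test in its simplest "does it smoke?" form can be sketched as below. This is only an illustration; here the Python interpreter itself stands in for the freshly built program, and in practice the checks would grow to cover the product's major functions.

```python
import subprocess
import sys

def smoke_test(executable):
    """Return True if the program at least launches and exits cleanly.

    A minimal smoke test to run after each daily build: it should
    expose major problems (the build won't start), not subtle bugs.
    """
    try:
        result = subprocess.run([executable, "--version"],
                                capture_output=True, timeout=30)
    except (OSError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0

assert smoke_test(sys.executable)         # a healthy "build" passes
assert not smoke_test("no-such-program")  # a broken build fails fast
```

If even this check fails, there is no point in running the more thorough test suites against that build.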
Difference number 2: Monkey testing is performed by automated testing tools. On the other hand,
smoke testing, more often than not, is a manual check to see whether the product "smokes" when
it runs.
Difference number 4: "Smart monkeys" are valuable for load and stress testing, but not very
valuable for smoke testing, because they are too expensive for smoke testing.
Difference number 5: "Dumb monkeys" are inexpensive to develop, are able to do some basic
testing, but, if we use them for smoke testing, they find few bugs during smoke testing.
Difference number 6: Monkey testing is not a thorough testing, but smoke testing is thorough
enough that, if the build passes, one can assume that the program is stable enough to be tested
more thoroughly.
Difference number 7: Monkey testing does not evolve. Smoke testing, on the other hand, evolves
as the system evolves from something simple to something more thorough.
Difference number 8: Monkey testing takes "six monkeys" and a "million years" to run. Smoke
testing, on the other hand, takes much less time to run, i.e. anywhere from a few seconds to a
couple of hours.
Q168: Tell me about the process of daily builds and smoke tests.
A: The idea behind the process of daily builds and smoke tests is to build the product every day,
and test it every day.
The software development process at Microsoft and many other software companies requires
daily builds and smoke tests. According to their process, every day, every single file has to be
compiled, linked, and combined into an executable program. And, then, the program has to be
"smoke tested".
Smoke testing is a relatively simple check to see whether the product "smokes" when it runs.
You should add revisions to the build only when it makes sense to do so. You should establish
a Build Group, and build *daily*; set your *own standard* for what constitutes "breaking the build",
create a penalty for breaking the build, and check for broken builds *every day*.
In addition to the daily builds, you should smoke test the builds, and smoke test them *daily*. You
should make the smoke test *evolve*, as the system evolves. You should build and smoke test
*daily*, even when the project is under pressure.
Think about the many benefits of this process! The process of daily builds and smoke tests
minimizes the integration risk, reduces the risk of low quality, supports easier defect diagnosis,
improves morale, enforces discipline, and keeps pressure-cooker projects on track.
If you build and smoke test *daily*, success will come, even when you're working on large
projects.
The good thing is, when you want a QA Tester job, there are MANY possibilities!
Possibility number 1: Get a job with a company at a lower level, perhaps as a technician,
preferably at a small company, or a company that promotes from within. Once you're hired, work
your way up to the test bench, and you WILL get your first QA Tester experience!
Possibility number 2: Attend a school of good reputation. If your prospective boss is familiar with
your school, you will get your first job!
Possibility number 3: Attend a school that offers job placement, with a real record of job
placement assistance, and do what they say, and you WILL get your first job!
Possibility number 4: Work for a company as a volunteer, i.e. employee without pay. Once you're
hired, you WILL get your first experience!
Possibility number 5: Get your first job by training yourself. Training yourself on a PC with the
proper manual and automated testing tools can be useful, if you take the time to use them to
their maximum potential! Get some hands-on experience on how to use manual and automated testing
tools.
If there is a will, there is a way! You CAN do it, if you put your mind to it! You CAN learn to use
WinRunner and many other automated testing tools, with little or no outside help.
Reason number 2: Having a test strategy does satisfy one important step in the software testing
process.
Reason number 3: The test strategy document tells us how the software product will be tested.
Reason number 4: The creation of a test strategy document presents an opportunity to review the
test plan with the project team.
Reason number 5: The test strategy document describes the roles, responsibilities, and the
resources required for the test and schedule constraints.
Reason number 6: When we create a test strategy document, we have to put into writing any
testing issues requiring resolution (and usually this means additional negotiation at the project
management level).
Reason number 7: The test strategy is decided first, before lower level decisions are made on the
test plan, test design, and other testing issues.