
1. Tell me something about yourself.
2. Why are you looking for a change?
3. What are your roles and responsibilities?
4. Tell me something about your project.
5. Can you explain your project architecture or functional architecture?
6. Tell me the process you follow for testing.
7. What is testing?
8. Definition of testing?
9. End-to-end system testing? Have you been involved in E2E testing?
10. System testing? Have you been involved in E2E testing?
11. Sanity/smoke testing? When? Why?
12. Regression testing and retesting? When? Why?
13. Testing process in your company? (Agile, V-model)
14. What is STLC?
15. What is SDLC?
16. Are you involved in the test plan? Test plan template?
17. Test strategy?
18. QA vs. QC?
19. Verification and validation?
20. Integration testing?
21. What are stubs and drivers?
22. Traceability matrix? Forward and backward traceability approaches, with examples?
23. How do you know that you have covered all functionality?
24. You are given 1200 test cases for one day; how will you proceed with testing?
25. Reviews (peer reviews, walkthroughs, inspections, formal reviews, informal reviews)?
26. Waterfall model, Agile model, V-model?
27. Defect vs. bug vs. error?
28. Alpha testing and beta testing?
29. User acceptance testing?
30. Non-functional testing?
31. Functional testing?
32. Web testing?
33. Unit testing? Unit testing techniques?
34. Black-box testing? Black-box testing techniques?
35. Defect life cycle?
36. What are the challenges you faced?
37. Importance of the 'deferred' defect status?
38. What quality standards would you apply to improve testing?
39. Agile vs. V-model?
40. Retrospective call?
41. Dashboards?
42. Defect tracking tool? Version?
43. Performance testing?
44. Load and stress testing?
45. Product-based vs. project-based company?
46. Metrics? Function point metrics?
47. Test estimation?
48. Test data?
49. Preconditions and postconditions for test data in a test case?
50. How do you ensure that you covered the whole functionality?
51. If you face any problem while testing, how will you proceed?
52. Test case format?
53. How many defects have you raised so far in testing?
54. How many defects have you raised in this quarter?
55. Reliability testing? Recovery testing?
56. Compatibility testing?
57. Entry and exit criteria?
58. Temporary variables for testing to cross a game?
59. How do you cut a cake into 8 pieces with only 3 cuts?
60. Client/server testing?
61. Login test cases?
62. Test scenarios?
63. CMM levels? QC? Six Sigma?
64. Test log?
65. Test bed?
66. Testing environment?
67. Development environment?
68. Priority and severity?
69. P1 S1 example
70. P1 S2 example
71. P1 S3 example
72. P2 S1 example
73. P2 S2 example
74. P2 S3 example
75. P3 or P4 S1 example?
76. P3 or P4 S2 example?
77. P3 or P4 S3 example?
78. When you find a bug, what will you do?
79. If the developer rejects the defect, what will you do?
80. What are RCA and solution summary? When are these required?
81. Write 5 test cases for your application.
82. How do you know whether all requirements are covered or not?
83. What is a function point? How is it calculated?
84. What are metrics? Types?
85. If I give you a project, how will you proceed?
86. Ad hoc testing? When and why?
87. Component integration testing?
88. Interface testing?
89. Have you prepared any testing documents?
90. Test summary report formats?
91. Who will fix the bug?
92. Who will close the bug?
93. What is test coverage?
94. What is testing scope?
95. Static testing and dynamic testing?
96. How would you rate yourself in manual testing?
97. What configuration management tool are you using?
98. How is the testing environment different from the production environment?
99. Test log document? Test bed?
100. If you have less time for testing, how will you proceed?
101. UAT?
102. Difference between web testing and client/server testing?
103. What are use cases?
104. Test coverage matrix?
105. What is Software Quality Assurance?
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to prevention. (See the Bookstore section's 'Software QA' category for a list of useful books on Software Quality Assurance.)
What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.
What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.
What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality. Employees who are most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often hard for organizations to get serious about quality assurance?' Their skill may have low visibility, but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection.
What kinds of testing should be considered?
- Black-box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
- White-box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions.
- Unit testing - the most 'micro' scale of testing; tests particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses. (A minimal example appears after this list.)
- Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
- Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
- Functional testing - black-box-type testing geared to the functional requirements of an application; this type of testing should be done by testers. This doesn't mean that programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).
- System testing - black-box-type testing that is based on overall requirements specifications; covers all combined parts of a system.
- End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
- Sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
- Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing approaches can be especially useful for this type of testing.
- Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
- Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
- Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
- Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
- Usability testing - testing for 'user-friendliness'. Clearly this is subjective and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
- Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
- Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
- Failover testing - typically used interchangeably with 'recovery testing'.
- Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.
- Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
- Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
- Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
- Context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
- User acceptance testing - determining if software is satisfactory to an end-user or customer.
- Comparison testing - comparing software weaknesses and strengths to competing products.
- Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
- Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
- Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
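To make the unit-testing entry above concrete, here is a minimal sketch in Python with pytest (both the language and framework are assumptions - the list names no particular tooling), testing a hypothetical discount_price function whose rules are invented purely for illustration:

    # Hypothetical example: unit tests for a small pricing function.
    # The function and its rules are invented for illustration only.
    import pytest

    def discount_price(price: float, percent: float) -> float:
        """Apply a percentage discount; reject out-of-range inputs."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_typical_discount():
        # Derived from the stated requirement, not from the code's internals.
        assert discount_price(200.0, 25) == 150.0

    def test_zero_discount_leaves_price_unchanged():
        assert discount_price(99.99, 0) == 99.99

    def test_invalid_percent_is_rejected():
        with pytest.raises(ValueError):
            discount_price(100.0, 150)

Saved as test_discount.py and run with 'pytest', each test exercises the function against its stated requirement - the same requirements-driven spirit described in the black-box entry above.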
What are 5 common problems in the software development process?
- Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there may be problems.
- Unrealistic schedule - if too much work is crammed into too little time, problems are inevitable.
- Inadequate testing - no one will know whether or not the software is any good until customers complain or systems crash.
- Featuritis - requests to add on new features after development goals are agreed on.
- Miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems can be expected.
In agile projects, problems often occur when the project diverges from agile principles, such as forgetting that 'Business people and developers must work together daily throughout the project' (see the Manifesto for Agile Software Development).
(See the Softwareqatest.com Bookstore section's 'Software QA', 'Software Engineering', and 'Project Management' categories for useful books with more information.)
What are 5 common solutions to software development problems?
- Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. In 'agile'-type environments, continuous close coordination with customers/end-users is necessary to ensure that changing/emerging requirements are understood.
- Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.
- Adequate testing - start testing early on, re-test after fixes or changes, and plan for adequate time for testing and bug-fixing. 'Early' testing could include static code analysis/testing, test-first development, unit testing by developers, built-in testing and diagnostic capabilities, automated post-build testing, etc.
- Stick to initial requirements where feasible - be prepared to defend against excessive changes and additions once development has begun, and be prepared to explain the consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations. In 'agile'-type environments, initial requirements may be expected to change significantly, requiring that true agile processes be in place and followed.
- Communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools such as groupware, wikis, bug-tracking and change management tools, and intranet capabilities; ensure that information/documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes and/or continuous communication with end-users if possible to clarify expectations.
(See the Softwareqatest.com Bookstore section's 'Software QA', 'Software Engineering', and 'Project Management' categories for useful books with more information.)
What is software 'quality'?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term: it will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free. (See the Softwareqatest.com Bookstore section's 'Software QA' category for useful books with more information.)

What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?
- SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
- CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic
efforts required by individuals to successfully
complete projects. Few if any processes in place;
successes may not be repeatable.

Level 2 - software project tracking, requirements management,
realistic planning, and configuration management
processes are in place; successful practices can
be repeated.

Level 3 - standard software development and maintenance processes
are integrated throughout an organization; a Software
Engineering Process Group is in place to oversee
software processes, and training programs are used to
ensure understanding and compliance.

Level 4 - metrics are used to track productivity, processes,
and products. Project performance is predictable,
and quality is consistently high.

Level 5 - the focus is on continuous process improvement. The
impact of new processes and technologies can be
predicted and effectively implemented when required.


Perspective on CMM ratings: During 1997-2001, 1018 organizations
were assessed. Of those, 27% were rated at Level 1, 39% at 2,
23% at 3, 6% at 4, and 5% at 5. (For ratings during the period
1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and
0.4% at 5.) The median size of organizations was 100 software
engineering/maintenance personnel; 32% of organizations were
U.S. federal contractors or agencies. For those rated at
Level 1, the most problematical key process area was in
Software Quality Assurance.

- ISO = 'International Organisation for Standardization' - The ISO 9001:2008 standard (which provides some clarifications of the previous standard, 9001:2000) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a) Q9001-2008 - Quality Management Systems: Requirements; (b) Q9000-2005 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2009 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed. Also see http://www.iso.org/ for the latest information. In the U.S. the standards can be purchased via the ASQ web site at http://www.asq.org/quality-press/.
ISO 9126 is a standard for the evaluation of software quality and defines six high-level quality characteristics that can be used in software evaluation: functionality, reliability, usability, efficiency, maintainability, and portability.
- IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as the 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), the 'IEEE Standard of Software Unit Testing' (IEEE/ANSI Standard 1008), the 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
- ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).
- Other software development/IT management process assessment methods besides CMMI and ISO 9001 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.
- See the Softwareqatest.com 'Other Resources' section for further information available on the web.
What is the 'software life cycle'?
The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects. (See the Softwareqatest.com Bookstore section's 'Software QA', 'Software Engineering', and 'Project Management' categories for useful books with more information.)
What's the role of documentation in QA?
Generally, the larger the team/organization, the more useful it will be to stress documentation, in order to manage and communicate more efficiently. (Note that documentation may be electronic, not necessarily in printable form, and may be embedded in code comments or embodied in well-written test cases, user stories, etc.) QA practices may be documented to enhance their repeatability. Specifications, designs, business rules, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. may be documented in some form. There would ideally be a system for easily finding and obtaining information and determining what documentation will have a particular piece of information. Change management for documentation can be used where appropriate. For agile software projects, it should be kept in mind that one of the agile values is 'Working software over comprehensive documentation', which does not mean 'no' documentation. Agile projects tend to stress the short-term view of project needs; documentation often becomes more important in a project's long-term context.
What steps are needed to develop and run software tests?
The following are some of the steps to consider:
- Obtain requirements, functional design, and internal design specifications, user stories, and other available/necessary information.
- Obtain budget and schedule requirements.
- Determine project-related personnel and their responsibilities, reporting requirements, and required standards and processes (such as release processes, change processes, etc.).
- Determine project context relative to the existing quality culture of the product/organization/business, and how it might impact testing scope, approaches, and methods.
- Identify the application's higher-risk and more important aspects, set priorities, and determine the scope and limitations of tests.
- Determine test approaches and methods - unit, integration, functional, system, security, load, usability tests, etc.
- Determine test environment requirements (hardware, software, configuration, versions, communications, etc.).
- Determine testware requirements (automation tools, coverage analyzers, test tracking, problem/bug tracking, etc.).
- Determine test input data requirements.
- Identify tasks, those responsible for tasks, and labor requirements.
- Set schedule estimates, timelines, and milestones.
- Determine, where appropriate, input equivalence classes, boundary value analyses, and error classes (see the sketch after this list).
- Prepare test plan document(s) and have needed reviews/approvals.
- Write test cases.
- Have needed reviews/inspections/approvals of test cases.
- Prepare the test environment and testware; obtain needed user manuals/reference documents/configuration guides/installation guides; set up test tracking processes; set up logging and archiving processes; set up or obtain test input data.
- Obtain and install software releases.
- Perform tests.
- Evaluate and report results.
- Track problems/bugs and fixes.
- Retest as needed.
- Maintain and update test plans, test cases, the test environment, and testware through the life cycle.
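As a concrete illustration of the equivalence-class and boundary-value step above, here is a hypothetical sketch in Python with pytest (an assumed toolchain; the validator and its 18-65 rule are invented for illustration):

    # Hypothetical example: boundary values and equivalence classes
    # for a field that accepts ages 18-65 inclusive.
    import pytest

    def is_valid_age(age: int) -> bool:
        return 18 <= age <= 65

    @pytest.mark.parametrize("age, expected", [
        (17, False), (18, True), (19, True),   # around the lower boundary
        (64, True), (65, True), (66, False),   # around the upper boundary
        (40, True),                            # representative of the valid class
        (-1, False), (120, False),             # representatives of invalid classes
    ])
    def test_age_boundaries(age, expected):
        assert is_valid_age(age) == expected

Each parameter row is one test case: the boundary rows probe the edges where off-by-one errors live, and the remaining rows stand in for whole equivalence classes, so a handful of values covers the input space economically.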
What's a 'test plan'?
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so overly detailed that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
- Title
- Identification of software, including version/release numbers
- Revision history of document, including authors, dates, approvals
- Table of Contents
- Purpose of document, intended audience
- Objective of testing effort
- Software product overview
- Relevant related document list, such as requirements, design documents, other test plans, etc.
- Relevant standards or legal requirements
- Traceability requirements
- Relevant naming conventions and identifier conventions
- Overall software project organization and personnel/contact-info/responsibilities
- Test organization and personnel/contact-info/responsibilities
- Assumptions and dependencies
- Project risk analysis
- Testing priorities and focus
- Scope and limitations of testing
- Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc., as applicable
- Outline of data input equivalence classes, boundary value analysis, error classes
- Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
- Test environment validity analysis - differences between the test and production systems and their impact on test validity
- Test environment setup and configuration issues
- Software migration processes
- Software CM processes
- Test data setup requirements
- Database setup requirements
- Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
- Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
- Test automation - justification and overview
- Test tools to be used, including versions, patches, etc.
- Test script/test code maintenance processes and version control
- Problem tracking and resolution - tools and processes
- Project test metrics to be used
- Reporting requirements and testing deliverables
- Software entrance and exit criteria
- Initial sanity testing period and criteria
- Test suspension and restart criteria
- Personnel allocation
- Personnel pre-training needs
- Test site/location
- Outside test organizations to be utilized, and their purpose, responsibilities, deliverables, contact persons, and coordination issues
- Relevant proprietary, classified, security, and licensing issues
- Open issues
- Appendix - glossary, acronyms, etc.
(See the Softwareqatest.com Bookstore section's 'Software Testing' and 'Software QA' categories for useful books with more information.)
What's a 'test case'?
A test case describes an input, action, or event and an expected response, to determine if a feature of a software application is working correctly. A test case may contain particulars such as a test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. The level of detail may vary significantly depending on the organization and project context.
Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
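A hypothetical way to capture the test-case particulars listed above as a structured record (Python is an assumption here; in practice these fields usually live in a test management tool or spreadsheet):

    # Hypothetical sketch: the test-case fields named above as a record.
    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        identifier: str
        name: str
        objective: str
        setup: str
        input_data: dict
        steps: list = field(default_factory=list)
        expected_result: str = ""

    login_tc = TestCase(
        identifier="TC-LOGIN-001",
        name="Valid login",
        objective="Verify a registered user can log in",
        setup="User 'demo' exists and is active",
        input_data={"username": "demo", "password": "correct-password"},
        steps=["Open the login page",
               "Enter the username and password",
               "Click 'Log in'"],
        expected_result="User lands on the home page, logged in as 'demo'",
    )

The level of detail in each field can be scaled up or down to match the organization and project context, as noted above.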
What should be done after a bug is found?
The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that the fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:
- Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary
- Bug identifier (number, ID, etc.)
- Current bug status (e.g., 'Released for Retest', 'New', etc.)
- The application name or identifier and version
- The function, module, feature, object, screen, etc. where the bug occurred
- Environment specifics: system, platform, relevant hardware specifics
- Test case name/number/identifier
- One-line bug description
- Full bug description
- Description of steps needed to reproduce the bug, if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
- Names and/or descriptions of files/data/messages/etc. used in the test
- File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
- Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
- Was the bug reproducible?
- Tester name
- Test date
- Bug reporting date
- Name of developer/group/organization the problem is assigned to
- Description of problem cause
- Description of fix
- Code section/file/module/class/method that was fixed
- Date of fix
- Application version that contains the fix
- Tester responsible for retest
- Retest date
- Retest results
- Regression testing requirements
- Tester responsible for regression tests
- Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
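As an illustration only, a subset of the tracking items above could be captured in a structured record like the following Python sketch (real defect trackers define their own richer schemas; every name here is invented):

    # Hypothetical sketch of a bug record carrying some of the fields above.
    from dataclasses import dataclass
    from enum import Enum

    class Status(Enum):
        NEW = "New"
        ASSIGNED = "Assigned"
        FIXED = "Fixed"
        RELEASED_FOR_RETEST = "Released for Retest"
        CLOSED = "Closed"

    @dataclass
    class BugReport:
        bug_id: str
        status: Status
        application: str
        version: str
        summary: str          # one-line description
        description: str      # full description, incl. steps to reproduce
        severity: int         # e.g. 1 (critical) to 5 (low)
        reproducible: bool
        reported_by: str
        assigned_to: str = ""

    bug = BugReport(
        bug_id="BUG-1042",
        status=Status.NEW,
        application="OrderEntry",
        version="2.3.1",
        summary="Submit button unresponsive after session timeout",
        description="1. Log in. 2. Stay idle 30 min. 3. Click Submit "
                    "-> no action and no error message.",
        severity=2,
        reproducible=True,
        reported_by="tester1",
    )

A record like this supports the notification flow described above: status transitions (New -> Assigned -> Fixed -> Released for Retest -> Closed) tell testers, developers, and managers who needs to act next.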
What is 'configuration management'?
Configuration management covers the processes used to control, coordinate, and track code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, the changes made to them, and who makes the changes. (See the 'Tools' section for web resources with listings of configuration management tools. Also see the Softwareqatest.com Bookstore section's 'Configuration Management' category for useful books with more information.)
What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.
How can it be known when to stop testing?
This can be difficult to determine. Most modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with a certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a certain level
- Beta or alpha testing period ends
Also see 'Who should decide when software is ready to be released?' in the LFAQ section.
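Teams sometimes codify a few of these factors as explicit, agreed exit criteria. A hypothetical Python sketch (the thresholds are invented for illustration):

    # Hypothetical exit-criteria check; the thresholds are invented.
    def ready_to_stop(cases_run: int, cases_total: int, pass_rate: float,
                      open_critical_bugs: int, requirement_coverage: float) -> bool:
        return (cases_run == cases_total           # all planned tests executed
                and pass_rate >= 0.95              # e.g. at least 95% passing
                and open_critical_bugs == 0        # no open critical defects
                and requirement_coverage >= 0.90)  # coverage reached agreed point

    print(ready_to_stop(1200, 1200, 0.97, 0, 0.93))  # -> True

The point is not the particular numbers but that the stopping decision is agreed on in advance and checkable, rather than decided ad hoc at the deadline.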
What if there isn't enough time for thorough testing?
Use risk analysis, along with discussion with project stakeholders, to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:
- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex, and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?
What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed, and the same considerations as described previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc or exploratory testing, or write up a limited test plan based on the risk analysis.
How does a client/server environment affect testing?
Client/server applications can be highly complex due to the multiple dependencies among clients, data communications, hardware, and servers, especially in multi-tier systems. Thus testing requirements can be extensive. When time is limited (as it usually is), the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial and open source tools to assist with such testing. (See the 'Tools' section for web resources with listings that include these kinds of test tools.)
How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between HTML pages, web services, encrypted communications, Internet connections, firewalls, applications that run in web pages (such as JavaScript, Flash, and other plug-in applications), the wide variety of applications that could run on the server side, etc. Additionally, there are a wide variety of servers and browsers, mobile platforms, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
- What are the expected loads on the server, and what kind of performance is required under such loads (such as web server response time and database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, load generation appliances, etc.)?
- Who is the target audience? What kind and version of browsers will they be using, and how extensive should testing be for these variations? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wider variety of connection speeds and browser types)?
- What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should Flash, applets, etc. load and run)?
- Will downtime for server and content maintenance/upgrades be allowed? How much?
- What kinds of security (firewalls, encryption, passwords, functionality, etc.) will be required, and what is it expected to do? How can it be tested?
- What internationalization/localization/language requirements are there, and how are they to be verified?
- How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
- What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
- Which HTML and related specifications will be adhered to? How strictly? What variations will be allowed for targeted browsers?
- Will there be any standards or requirements for page appearance and/or graphics, 508 compliance, etc. throughout a site or parts of a site?
- Will there be any development practices/standards utilized for web page components and identifiers? These can significantly impact test automation.
- How will internal and external links be validated and updated? How often? (A minimal link-checker sketch follows this answer.)
- Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, connection variabilities, and real-world Internet 'traffic congestion' problems to be accounted for in testing?
- How extensive or customized are the server logging and reporting requirements? Are they considered an integral part of the system, and do they require testing?
- How are Flash, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
Some sources of web site security information include the Usenet newsgroup 'comp.security.announce' and the links concerning web site security in the 'Other Resources' section.
Hundreds of web site test tools are available, and many of them are listed in the 'Web Test Tools' section.
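For the link-validation point above, here is a minimal, hypothetical link checker using only the Python standard library (production link checkers handle redirects, robots.txt, JavaScript-generated links, rate limiting, and much more):

    # Minimal illustrative link checker - standard library only.
    from html.parser import HTMLParser
    from urllib.error import HTTPError, URLError
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        """Collects href values from <a> tags."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def check_links(page_url: str) -> None:
        html = urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
        collector = LinkCollector()
        collector.feed(html)
        for link in collector.links:
            target = urljoin(page_url, link)  # resolve relative links
            try:
                status = urlopen(target, timeout=10).status
                print(f"{status}  {target}")
            except (HTTPError, URLError) as err:
                print(f"BROKEN  {target}  ({err})")

    # check_links("https://www.example.com/")  # example invocation

Running something like this on a schedule answers the 'how often?' part of the question: link rot is ongoing, so link validation should be too.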
How is testing affected by object-oriented designs?
Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black-box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects, methods, etc. If the application was well-designed, this can simplify test design and test automation design.
What is Extreme Programming and what's it got to do with testing?
Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. It was created by Kent Beck, who described the approach in his book 'Extreme Programming Explained'. (See the Softwareqatest.com Books page.) Testing ('extreme testing') is a core aspect of Extreme Programming. Programmers are expected to write unit and functional test code first - before writing the application code. Test code is under source control along with the rest of the code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA and test personnel are also required to be an integral part of the project team. Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-prioritizing is expected. For more info on XP and other 'agile' software development approaches (Scrum, Crystal, etc.), see the resource listings in the 'Agile and XP Testing Resources' section.
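To illustrate the test-first practice described above, here is a sketch in Python with unittest (the language, framework, and toy requirement are all assumptions; XP prescribes the practice, not any particular tooling):

    # Test-first illustration: in XP the tests below would be written
    # before fizzbuzz(), which is then coded just far enough to pass them.
    import unittest

    def fizzbuzz(n: int) -> str:
        if n % 15 == 0:
            return "FizzBuzz"
        if n % 3 == 0:
            return "Fizz"
        if n % 5 == 0:
            return "Buzz"
        return str(n)

    class FizzBuzzTest(unittest.TestCase):
        def test_multiples_of_three(self):
            self.assertEqual(fizzbuzz(9), "Fizz")

        def test_multiples_of_five(self):
            self.assertEqual(fizzbuzz(10), "Buzz")

        def test_multiples_of_both(self):
            self.assertEqual(fizzbuzz(30), "FizzBuzz")

        def test_other_numbers(self):
            self.assertEqual(fizzbuzz(7), "7")

    if __name__ == "__main__":
        unittest.main()

Keeping the test code under source control with the production code, as XP expects, means the whole suite is rerun on every change in each development iteration.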
Source: http://www.softwareqatest.com/index.html
