
SOFTWARE TESTING GLOSSARY

1 acceptance testing: Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component. [IEEE]

2 actual outcome: The behaviour actually produced when the object is tested under specified conditions.

3 ad hoc testing: Testing carried out using no recognised test case design technique.

4 alpha testing: Simulated or actual operational testing at an in-house site not otherwise involved with the software developers.

5 arc testing: See branch testing.

6 Backus-Naur form: A metalanguage used to formally describe the syntax of a language. See BS 6154.

7 basic block: A sequence of one or more consecutive, executable statements containing no branches.

8 basis test set: A set of test cases derived from the code logic which ensure that 100% branch coverage is achieved.

9 bebugging: See error seeding. [Abbott]

10 behaviour: The combination of input values and preconditions and the required response for a function of a system. The full specification of a function would normally comprise one or more behaviours.

11 beta testing: Operational testing at a site not otherwise involved with the software developers.

12 big-bang testing: Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.

13 black box testing: See functional test case design.

14 bottom-up testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

15 boundary value: An input value or output value which is on the boundary between equivalence classes, or an incremental distance either side of the boundary.

16 boundary value analysis: A test case design technique for a component in which test cases are designed which include representatives of boundary values.
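
For illustration, a minimal sketch of boundary value analysis (the component, its range, and the values are hypothetical, not taken from the glossary):

    # Hypothetical component: accepts an age between 18 and 65 inclusive.
    def is_eligible(age: int) -> bool:
        return 18 <= age <= 65

    # Boundary values: on each boundary and an incremental distance either side.
    boundary_cases = [
        (17, False),  # just below the lower boundary
        (18, True),   # on the lower boundary
        (19, True),   # just above the lower boundary
        (64, True),   # just below the upper boundary
        (65, True),   # on the upper boundary
        (66, False),  # just above the upper boundary
    ]
    for value, expected in boundary_cases:
        assert is_eligible(value) == expected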

17 boundary value coverage: The percentage of boundary values of the component's equivalence classes which have been exercised by a test case suite.

18 boundary value testing: See boundary value analysis.

19 branch: A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or, when a component has more than one entry point, a transfer of control to an entry point of the component.

20 branch condition: See decision condition.

21 branch condition combination coverage: The percentage of combinations of all branch condition outcomes in every decision that have been exercised by a test case suite.

22 branch condition combination testing: A test case design technique in which test cases are designed to execute combinations of branch condition outcomes.

23 branch condition coverage: The percentage of branch condition outcomes in every decision that have been exercised by a test case suite.

24 branch condition testing: A test case design technique in which test cases are designed to execute branch condition outcomes.

25 branch coverage: The percentage of branches that have been exercised by a test case suite.

26 branch outcome: See decision outcome.

27 branch point: See decision.

28 branch testing: A test case design technique for a component in which test cases are designed to execute branch outcomes.
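
As a sketch of these terms in code (the function is hypothetical): the if statement below is a decision with two branches, and a suite that exercises both branch outcomes achieves 100% branch coverage.

    def sign(n: int) -> str:
        # The 'if' is a decision: a conditional transfer of control with two
        # branches, one for each decision outcome (TRUE / FALSE).
        if n < 0:
            return "negative"    # this branch also transfers control to the exit point
        return "non-negative"

    # Two test cases exercise both branches: branch coverage = 2/2 = 100%.
    assert sign(-1) == "negative"
    assert sign(0) == "non-negative"
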
29 bug: See fault.

30 bug seeding: See error seeding.

31 C-use: See computation data use.

32 capture/playback tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.

33 capture/replay tool: See capture/playback tool.

34 CAST: Acronym for computer-aided software testing.

35 cause-effect graph: A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

36 cause-effect graphing: A test case design technique in which test cases are designed by consideration of cause-effect graphs.

37 certification: The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use. From [IEEE].

38 Chow's coverage metrics: See N-switch coverage. [Chow]

39 code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

40 code-based testing: Designing tests based on objectives derived from the implementation (e.g., tests that execute specific control flow paths or use specific data items).

41 compatibility testing: Testing whether the system is compatible with other systems with which it should communicate.

42 complete path testing: See exhaustive testing.

43 component: A minimal software item for which a separate specification is available.

44 component testing: The testing of individual software components. After [IEEE].

45 computation data use: A data use not in a condition. Also called C-use.

46 condition: A Boolean statement containing no Boolean operators. For instance, A<B is a condition but A and B is not.

47 condition coverage: See branch condition coverage.

48 condition outcome: The evaluation of a condition to TRUE or FALSE.

49 conformance criterion: Some method of judging whether or not the component's action on a particular specified input value conforms to the specification.

50 conformance testing: The process of testing that an implementation conforms to the specification on which it is based.

51 control flow: An abstract representation of all possible sequences of events in a program's execution.

52 control flow graph: The diagrammatic representation of the possible alternative control flow paths through a component.

53 control flow path: See path.

54 conversion testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

55 correctness: The degree to which software conforms to its specification.

56 coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test case suite.

57 coverage item: An entity or property used as a basis for testing.

58 data definition: An executable statement where a variable is assigned a value.

59 data definition C-use coverage: The percentage of data definition C-use pairs in a component that are exercised by a test case suite.

60 data definition C-use pair: A data definition and computation data use, where the data use uses the value defined in the data definition.

61 data definition P-use coverage: The percentage of data definition P-use pairs in a component that are exercised by a test case suite.

62 data definition P-use pair: A data definition and predicate data use, where the data use uses the value defined in the data definition.

63 data definition-use coverage: The percentage of data definition-use pairs in a component that are exercised by a test case suite.

64 data definition-use pair: A data definition and data use, where the data use uses the value defined in the data definition.

65 data definition-use testing: A test case design technique for a component in which test cases are designed to execute data definition-use pairs.

66 data flow coverage: Test coverage measure based on variable usage within the code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc.

67 data flow testing: Testing in which test cases are designed based on variable usage within the code.

68 data use: An executable statement where the value of a variable is accessed.
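
For illustration (hypothetical code), the sketch below marks a data definition, a predicate data use (P-use), and a computation data use (C-use) of the same variable:

    def scaled(n: int) -> int:
        total = n * 2         # data definition of 'total'
        if total > 10:        # predicate data use (P-use) of 'total'
            return total + 1  # computation data use (C-use) of 'total'
        return 0

    # (definition of 'total', its P-use in the 'if') is a data definition
    # P-use pair; (definition of 'total', its C-use in 'total + 1') is a
    # data definition C-use pair. A suite executing both paths exercises both.
    assert scaled(6) == 13
    assert scaled(1) == 0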

69 debugging: The process of finding and removing the causes of failures in software.

70 decision: A program point at which the control flow has two or more alternative routes.

71 decision condition: A condition within a decision.

72 decision coverage: The percentage of decision outcomes that have been exercised by a test case suite.

73 decision outcome: The result of a decision (which therefore determines the control flow alternative taken).

74 design-based testing: Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behaviour of algorithms).

75 desk checking: The testing of software by the manual simulation of its execution.

76 dirty testing: See negative testing. [Beizer]

77 documentation testing: Testing concerned with the accuracy of documentation.

78 domain: The set from which values are selected.

79 domain testing: See equivalence partition testing.

80 dynamic analysis: The process of evaluating a system or component based upon its behaviour during execution.

81 emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

82 entry point: The first executable statement within a component.

83 equivalence class: A portion of the component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.

84 equivalence partition: See equivalence class.

85 equivalence partition coverage: The percentage of equivalence classes generated for the component which have been exercised by a test case suite.

86 equivalence partition testing: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
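
A minimal sketch (the component and its partitions are hypothetical): test cases execute one representative from each equivalence class of the specification.

    # Hypothetical specification: scores 0..49 fail, 50..100 pass,
    # anything outside 0..100 is rejected.
    def grade(score: int) -> str:
        if score < 0 or score > 100:
            raise ValueError("out of range")
        return "pass" if score >= 50 else "fail"

    assert grade(30) == "fail"   # representative of the class 0..49
    assert grade(75) == "pass"   # representative of the class 50..100
    try:
        grade(150)               # representative of the invalid class
    except ValueError:
        pass
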
87 error: A human action that produces an incorrect result. [IEEE]

88 error guessing: A test case design technique where the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them.

89 error seeding: The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program.

90 executable statement: A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.
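
As a hedged illustration of the estimate mentioned in the entry above (the figures are invented, and the calculation is one simple model that assumes seeded and indigenous faults are equally likely to be found):

    seeded = 20            # known faults intentionally added
    seeded_found = 15      # seeded faults detected so far
    indigenous_found = 30  # original (non-seeded) faults detected so far

    # The detection ratio observed for seeded faults is used to estimate
    # the total number of indigenous faults and how many remain.
    estimated_total = indigenous_found * seeded / seeded_found   # 40.0
    estimated_remaining = estimated_total - indigenous_found     # 10.0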

91 exercised: A program element is exercised by a test case when the input value causes the execution of that element, such as a statement, branch, or other structural element.

92 exhaustive testing: A test case design technique in which the test case suite comprises all combinations of input values and preconditions for component variables.

93 exit point: The last executable statement within a component.

94 expected outcome: See predicted outcome.

95 facility testing: See functional test case design.

96 failure: Deviation of the software from its expected delivery or service.

97 fault: A manifestation of an error in software. A fault, if encountered, may cause a failure.

98 feasible path: A path for which there exists a set of input values and execution conditions which causes it to be executed.

99 feature testing: See functional test case design.

100 functional specification: The document that describes in detail the characteristics of the product with regard to its intended capability. [BS 4778, Part 2]

101 functional test case design: Test case selection that is based on an analysis of the specification of the component without reference to its internal workings.

102 glass box testing: See structural test case design.

103 incremental testing: Integration testing where system components are integrated into the system one at a time until the entire system is integrated.

104 independence: Separation of responsibilities which ensures the accomplishment of objective evaluation. After [do178b].

105 infeasible path: A path which cannot be exercised by any set of possible input values.

106 input: A variable (whether stored within a component or outside it) that is read by the component.

107 input domain: The set of all possible inputs.

108 input value: An instance of an input.

109 inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection). After [Graham]

110 installability testing: Testing concerned with the installation procedures for the system.

111 instrumentation: The insertion of additional code into the program in order to collect information about program behaviour during program execution.

112 instrumenter: A software tool used to carry out instrumentation.

113 integration: The process of combining components into larger assemblies.

114 integration testing: Testing performed to expose faults in the interfaces and in the interaction between integrated components.

115 interface testing: Integration testing where the interfaces between system components are tested.

116 isolation testing: Component testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs.

117 LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.

118 LCSAJ coverage: The percentage of LCSAJs of a component which are exercised by a test case suite.
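
For illustration (hypothetical code, with listing line numbers shown as comments), one LCSAJ of the fragment below runs from line 1 to the jump at line 4, whose target is line 6:

    def total_up_to_limit(values, limit):
        total = 0                  # 1  start of the linear sequence
        for v in values:           # 2
            if total + v > limit:  # 3
                break              # 4  end of the sequence; control jumps ...
            total += v             # 5
        return total               # 6  ... to this target line

    # That LCSAJ is (start = 1, end = 4, target = 6); other LCSAJs of the same
    # fragment follow the loop body and its backward jump instead.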

119 LCSAJ testing: A test case design technique for a component in which test cases are designed to execute LCSAJs.

120 logic-coverage testing: See structural test case design. [Myers]

121 logic-driven testing: See structural test case design.

122 maintainability testing: Testing whether the system meets its specified objectives for maintainability.

123 modified condition/decision coverage: The percentage of all branch condition outcomes that independently affect a decision outcome that have been exercised by a test case suite.

124 modified condition/decision testing: A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome.
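
A minimal sketch (the decision is hypothetical): for the decision a and b, three test cases are sufficient to show that each condition independently affects the decision outcome.

    def decision(a: bool, b: bool) -> bool:
        return a and b

    # Cases 1 and 2 differ only in 'a' and change the decision outcome;
    # cases 1 and 3 differ only in 'b' and change the decision outcome.
    mcdc_cases = [
        (True,  True,  True),   # case 1
        (False, True,  False),  # case 2: 'a' independently affects the outcome
        (True,  False, False),  # case 3: 'b' independently affects the outcome
    ]
    for a, b, expected in mcdc_cases:
        assert decision(a, b) == expected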

125 multiple condition coverage: See branch condition combination coverage.

126 mutation analysis: A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program. See also error seeding.
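
As a sketch (hypothetical code), a mutant differs from the program by one small change; a test case suite that produces a different outcome for the mutant discriminates ("kills") it.

    def is_adult(age: int) -> bool:
        return age >= 18        # original program

    def is_adult_mutant(age: int) -> bool:
        return age > 18         # mutant: '>=' replaced by '>'

    # A suite containing only 25 and 10 cannot discriminate the mutant ...
    assert is_adult(25) == is_adult_mutant(25)
    assert is_adult(10) == is_adult_mutant(10)
    # ... but the boundary case 18 kills it.
    assert is_adult(18) != is_adult_mutant(18)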

127 N-switch coverage: The percentage of sequences of N-transitions that have been exercised by a test case suite.

128 N-switch testing: A form of state transition testing in which test cases are designed to execute all valid sequences of N-transitions.

129 N-transitions: A sequence of N+1 transitions.

130 negative testing: Testing aimed at showing software does not work. [Beizer]
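
For illustration (a hypothetical two-state machine): with N = 1, each sequence to exercise is a pair of consecutive transitions, and N-switch testing designs test cases for every valid such sequence.

    # Hypothetical state machine for a door.
    transitions = {
        ("closed", "open_cmd"):  "open",
        ("open",   "close_cmd"): "closed",
    }

    def run(state, events):
        # Follow a sequence of events through valid transitions only.
        for event in events:
            state = transitions[(state, event)]
        return state

    # 0-switch coverage exercises each single transition; 1-switch coverage
    # exercises each valid pair of consecutive transitions, e.g.:
    assert run("closed", ["open_cmd", "close_cmd"]) == "closed"
    assert run("open", ["close_cmd", "open_cmd"]) == "open"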

131 non-functional requirements testing: Testing of those requirements that do not relate to functionality, e.g. performance, usability, etc.

132 operational testing: Testing conducted to evaluate a system or component in its operational environment. [IEEE]

133 oracle: A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test. After [Adrion].

134 outcome: Actual outcome or predicted outcome. This is the outcome of a test. See also branch outcome, condition outcome and decision outcome.

135 output: A variable (whether stored within a component or outside it) that is written to by the component.

136 output domain: The set of all possible outputs.

137 output value: An instance of an output.

138 P-use: See predicate data use.

139 partition testing: See equivalence partition testing. [Beizer]

140 path: A sequence of executable statements of a component, from an entry point to an exit point.

141 path coverage: The percentage of paths in a component exercised by a test case suite.

142 path sensitizing: Choosing a set of input values to force the execution of a component to take a given path.

143 path testing: A test case design technique in which test cases are designed to execute paths of a component.

144 performance testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. [IEEE]

145 portability testing: Testing aimed at demonstrating the software can be ported to specified hardware or software platforms.

146 precondition: Environmental and state conditions which must be fulfilled before the component can be executed with a particular input value.

147 predicate: A logical statement which evaluates to TRUE or FALSE, normally to direct the execution path in code.

148 predicate data use: A data use in a predicate.

149 predicted outcome: The behaviour predicted by the specification of an object under specified conditions.

150 program instrumenter: See instrumenter.

151 progressive testing: Testing of new features after regression testing of previous features. [Beizer]

152 pseudo-random: A series which appears to be random but is in fact generated according to some prearranged sequence.

153 recovery testing: Testing aimed at verifying the system's ability to recover from varying degrees of failure.

154 regression testing: Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

155 requirements-based testing: Designing tests based on objectives derived from requirements for the software component (e.g., tests that exercise specific functions or probe the non-functional constraints such as performance or security). See functional test case design.

156 result: See outcome.

157 review: A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval. [IEEE]

158 security testing: Testing whether the system meets its specified security objectives.

159 serviceability testing: See maintainability testing.

160 simple subpath: A subpath of the control flow graph in which no program part is executed more than necessary.

161 simulation: The representation of selected behavioural characteristics of one physical or abstract system by another system. [ISO 2382/1]

162 simulator: A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs. [IEEE, do178b]

163 source statement: See statement.

164 specification: A description of a component's function in terms of its output values for specified input values under specified preconditions.

165 specified input: An input for which the specification predicts an outcome.

166 state transition: A transition between two allowable states of a system or component.

167 state transition testing: A test case design technique in which test cases are designed to execute state transitions.

168 statement: An entity in a programming language which is typically the smallest indivisible unit of execution.

169 statement coverage: The percentage of executable statements in a component that have been exercised by a test case suite.

170 statement testing: A test case design technique for a component in which test cases are designed to execute statements.

171 static analysis: Analysis of a program carried out without executing the program.

172 static analyzer: A tool that carries out static analysis.

173 static testing: Testing of an object without execution on a computer.

174 statistical testing: A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases.

175 storage testing: Testing whether the system meets its specified storage objectives.

176 stress testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. [IEEE]

177 structural coverage: Coverage measures based on the internal structure of the component.

178 structural test case design: Test case selection that is based on an analysis of the internal structure of the component.

179 structural testing: See structural test case design.

180 structured basis testing: A test case design technique in which test cases are derived from the code logic to achieve 100% branch coverage.

181 structured walkthrough: See walkthrough.

182 stub: A skeletal or special-purpose implementation of a software module, used to develop or test a component that calls or is otherwise dependent on it. After [IEEE].
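
A minimal sketch (the component and stub are hypothetical): during isolation testing, the component under test calls a stub that stands in for the real service it depends on.

    class PaymentServiceStub:
        # Skeletal stand-in for the real payment service.
        def charge(self, amount):
            return "approved"            # canned response; no real processing

    def checkout(cart_total, payment_service):
        # Component under test, dependent on a payment service it calls.
        if cart_total <= 0:
            return "nothing to pay"
        return payment_service.charge(cart_total)

    assert checkout(50, PaymentServiceStub()) == "approved"
    assert checkout(0, PaymentServiceStub()) == "nothing to pay"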

183 subpath: A sequence of executable statements within a component.

184 symbolic evaluation: See symbolic execution.

185 symbolic execution: A static analysis technique that derives a symbolic statement for program paths.

186 syntax testing: A test case design technique for a component or system in which test case design is based upon the syntax of the input.

187 system testing: The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]

188 technical requirements testing: See non-functional requirements testing.

189 test automation: The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

190 test case: A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. After [IEEE, do178b]
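
As a sketch (the content is hypothetical), a test case records its objective, execution preconditions, inputs, and expected outcome:

    test_case = {
        "objective":     "verify the upper age boundary is accepted",
        "preconditions": "customer database available; record 42 exists",
        "inputs":        {"customer_id": 42, "age": 65},
        "expected":      "eligible",
    }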

191 test case design technique: A method used to derive or select test cases.

192 test case suite: A collection of one or more test cases for the software under test.

193 test comparator: A test tool that compares the actual outputs produced by the software under test with the expected outputs for that test case.

194 test completion criterion: A criterion for determining when planned testing is complete, defined in terms of a test measurement technique.

195 test coverage: See coverage.

196 test driver: A program or test tool used to execute software against a test case suite.

197 test environment: A description of the hardware and software environment in which the tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.

198 test execution: The processing of a test case suite by the software under test, producing an outcome.

199 test execution technique: The method used to perform the actual test execution, e.g. manual, capture/playback tool, etc.

200 test generator: A program that generates test cases in accordance with a specified strategy or heuristic.

201 test harness: A testing tool that comprises a test driver and a test comparator.

202 test measurement technique: A method used to measure test coverage items.
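
A minimal sketch (hypothetical names): the test driver executes the software under test against a test case suite, and the test comparator checks each actual output against the expected output; together they form a test harness.

    def software_under_test(x):
        return x * x                                # hypothetical component

    test_case_suite = [(2, 4), (3, 9), (-1, 1)]     # (input, expected outcome)

    def compare(actual, expected):
        # Test comparator: checks the actual output against the expected output.
        return actual == expected

    def run_suite(component, suite):
        # Test driver: executes the component against the test case suite.
        return [(value, compare(component(value), expected))
                for value, expected in suite]

    print(run_suite(software_under_test, test_case_suite))
    # [(2, True), (3, True), (-1, True)]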

203 test outcome: See outcome.

204 test plan: A record of the test planning process detailing the degree of tester independence, the test environment, the test case design techniques and test measurement techniques to be used, and the rationale for their choice.

205 test procedure: A document providing detailed instructions for the execution of one or more test cases.

206 test records: For each test, an unambiguous record of the identities and versions of the component under test, the test specification, and actual outcome.

207 test script: Commonly used to refer to the automated test procedure used with a test harness.

208 test specification: For each test case, the coverage item, the initial state of the software under test, the input, and the predicted outcome.

209 test target: A set of test completion criteria.

210 testing: The process of exercising software to verify that it satisfies specified requirements and to detect errors.

211 thread testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

212 top-down testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

213 unit testing: See component testing.

214 usability testing: Testing the ease with which users can learn and use a product.

215 validation: Determination of the correctness of the products of software development with respect to the user needs and requirements.

216 verification: The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase. [IEEE]

217 volume testing: Testing where the system is subjected to large volumes of data.

218 walkthrough: A review of requirements, designs or code characterized by the author of the object under review guiding the progression of the review.
