
Gregory Morse

Software Quality and Testing study structure, lists and important terms

Why test provide confidence, provide understanding overall, provide sufficient information for
objective decisions, establish extent of requirements met, establish degree of quality

Error/mistake (by people), Fault/Defect/Bug (result of error), Failure (when fault executes), Incident
(behavior of fault)

Verification (demonstrate consistency, completeness, correctness of software artifacts at each stage of the software lifecycle, manual inspection, testing, formal methods, am I building the product right), validation (process of evaluating software at the end of development to ensure compliance with customer requirements, based on customer requirements, am I building the right product)

Quality (totality of characteristics bearing on the ability to satisfy stated or implied needs; testing does not build quality, it only determines it) Testability, Maintainability, Modularity, Reliability (mean time between failures (MTBF)), Efficiency, Usability, Reusability, Legal requirements/standards, etc. External (10%) vs internal quality (90%) like an iceberg.

Economics of testing focus on time-to-market (revenue starts) vs focus on time-to-profit (revenue exceeds costs)

Software testing certifies quality, finds faults/failures, is a process to verify that software satisfies requirements, an examination of the behavior of a program

Testing not debugging, static and dynamic, process, set of techniques, generally not possible to prove
no faults, helps locate errors, should be repeatable

Exhaustive testing all possible data combinations/preconditions is too expensive/takes an impractical amount of time

Test data one element of input domain, test data set finite set of test data, exhaustive testing when
all of test data input domain tested, power of test data context dependent not just based on number
of elements, naïve random testing (on a uniform distribution) is generally hopeless

Time/quality/money triangle choose 2 cannot have all 3.

Cost of testing single failure may incur little cost or millions, in extreme cases an error can cost lives, safety critical systems are tested rigorously

Product lifecycle phases requirements, design, coding, testing, maintenance; bugs are more expensive at later phases

Testing principles: Exhaustive testing impossible, Early testing (start as soon as possible), Testing is context dependent, Pesticide paradox (same set of tests will not find new defects), Pareto principle/defect clustering (80% of faults occur in 20% of code), Absence of errors fallacy (just because testing results are good does not mean software is good), Presence of defects (testing shows only the presence of bugs, not their absence - Dijkstra), Testing process to add value to product, All tests should be traceable, Tests must be ranked (do best testing in time available)
How to prioritize tests: where failures are most visible, most likely, ask customer to prioritize, critical to the customer's business, areas changed most often, areas with most past problems, most complex or technically critical areas

Typical life-cycle phase: requirement specification, conceptual plan, architectural design, detailed
design, component development, integration, system qualification, release, system operation &
maintenance, retirement/disposal

Software development models Sequential (waterfall, V model), non-sequential (W model), iterative (learning from previous iteration)/incremental (breaking large chunks of work into smaller ones) (prototyping, Rapid Application Development (RAD), Unified Process (UP, previously Rational UP - RUP), Agile (eXtreme Programming (XP), Scrum, Kanban))

Agile manifesto Individuals and interactions > Processes and tools, Working Product > Comprehensive
Documentation, Customer Collaboration > Contract Negotiation, Responding to change > Following a
plan.

XP Principles Planning game, Small releases, Metaphor, Test before coding (test driven development
(TDD)), Refactoring, Pair programming, Common ownership of code, Continuous integration, 40 hours
work per week, Internal client, Coding regulations

Test driven development cycle Red (write test, watch fail), Green (implement, watch pass), Refactor
(refactor, watch pass)
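
A minimal sketch of one red-green-refactor cycle in Python (the add() function and the pytest-style plain asserts are illustrative assumptions, not part of the notes):

    # Red: write the test first and run it - it fails because add() does not exist yet.
    def test_add_two_numbers():
        assert add(2, 3) == 5

    # Green: write the simplest implementation that makes the test pass.
    def add(a, b):
        return a + b

    # Refactor: clean up names/duplication while re-running the test to keep it green.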

SCRUM principles Split organization into small, cross-functional, self-organizing teams, Split work into small concrete deliverables, Split time into short, fixed-length iterations (usually 1-4 weeks), Optimize the release plan and update priorities in collaboration with customer, Optimize the process with a retrospective

Testing life-cycle Initiate, Planning, Execution, Control (Preparation/Specification/Execution/Completion lay on top of Infrastructure), Close

Test scenario high level classification of test requirements grouped by functionality, identifier,
conditions satisfied, scenario description, output

Test case Identifier, test case owner/creator, version & date, name of test case, requirement
identifier/link, purpose, priority, dependencies, testing environment/configuration, initialization
(precondition), finalization (postcondition), executed by, expected average execution duration, actions,
input data (precondition), expected results (postcondition), actual results (after running), status
pass/fail, note
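
One hedged way to capture the attributes above as a structured record (Python dataclass; field names are illustrative, not a standard template):

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        identifier: str
        name: str
        owner: str
        requirement_id: str                                   # traceability link to requirement
        priority: int = 3
        preconditions: list = field(default_factory=list)    # initialization + input data
        actions: list = field(default_factory=list)
        expected_results: list = field(default_factory=list) # postcondition
        actual_results: list = field(default_factory=list)   # filled in after the run
        status: str = "not run"                               # pass/fail once executed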

Testing-by-contract (based on design-by-contract, create test conditions only where preconditions are
met), defensive test case design (tests both normal and abnormal preconditions)

Good test case properties accurate, economic, effective, exemplary, evolvable, executable, repeatable,
reusable, traceable

Test condition item or event that can be verified by one or more test cases, Test basis source of
information or document to write tests, Test script/procedure sequences of actions of a test, testware
artifacts produced during test process. Identify test condition (what), specify test scenario and test
cases (with which, one or more test cases for a test scenario), specify test procedure (how)

Levels of testing unit (smallest testable part, constructed in isolation, functionality and non-functional
characteristics, structural testing, robustness degree to which operates correctly, memory leak poor
memory management not freeing up used memory, supported by tools), integration (expose defects in
interfaces and interactions between components and systems, component integration, system
integration), system, acceptance

Integration testing types stubs (inputs to code)/mocks (output from code)/fakes (working objects that
take shortcuts) can be used with different integration models: big-bang, bottom-up, top-down, inside-
out, outside-in, branch-wise strategy
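
A small illustration of a stub versus a mock with Python's unittest.mock (the payment-gateway collaborator and place_order() are invented for the example):

    from unittest.mock import Mock

    def place_order(gateway, amount):
        return gateway.charge(amount) == "OK"    # code under test

    # Stub: only feeds canned input into the code under test.
    gateway_stub = Mock()
    gateway_stub.charge.return_value = "OK"
    assert place_order(gateway_stub, 100) is True

    # Mock: additionally verifies how the collaborator was called (output from the code).
    gateway_mock = Mock()
    gateway_mock.charge.return_value = "OK"
    place_order(gateway_mock, 100)
    gateway_mock.charge.assert_called_once_with(100)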

System testing types focuses on whole system or product in live environment, non-functional tests
(installability, interoperability, maintainability, portability, recovery, reliability, usability, load
(spike/stress/stability), soak/endurance, volume, configuration, compatibility, environment)

Acceptance testing types provide end users confidence that system is according to expectations, client
involvement, alpha (internal acceptance testing), beta (external acceptance testing)

Installation testing (client involvement), System In Use (user tested) deployment

Maintenance testing (implies a live environment) Additional features, New faults being found, Retired
system, modification of live environment, adapt product to modified environment, impact analysis

Traceability traceability matrix (link requirements back to stakeholders' rationale and forward to design, artifacts, code, test cases - all 4 of one/many-to-one/many) horizontal (components across workproducts), vertical (relationship of parts of single workproduct), impact analysis (assess impact on rest of system)
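
A minimal sketch of forward/backward traceability as plain Python dictionaries (requirement and test case identifiers are invented):

    # Forward traceability: requirement -> test cases covering it.
    req_to_tests = {"REQ-1": ["TC-1", "TC-2"], "REQ-2": ["TC-3"]}

    # Backward traceability derived from the forward map: test case -> requirements.
    test_to_reqs = {}
    for req, tests in req_to_tests.items():
        for tc in tests:
            test_to_reqs.setdefault(tc, []).append(req)

    # Impact analysis: if REQ-1 changes, these are the tests to re-examine/re-run.
    affected = req_to_tests["REQ-1"]             # ['TC-1', 'TC-2']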

Fundamental test processes - Test planning (work breakdown structure (WBS)) and control, Test analysis and design, Test implementation and execution, Evaluating exit criteria and reporting, Test closure activities

Test planning levels - Quality Policy/Test Policy (company level) and Test Strategy (company level, possible at all levels), High-level test plan and Test approach (operational) (project level), Detailed test plan (test stage level)

Master test plan/project test plan SPACEDIRT acronym covers the minimum 16 sections (19 in IEEE): scope (test items, glossary), people (features to be tested, features not to be tested), approach/approvals, criteria (pass/fail, suspension/resumption), environmental needs/estimates (schedule), deliverables, identifier/introduction, risks (software, planning)/responsibilities/references, tasks (remaining)/training

Test approaches Analytical (risk/requirements-based), Model-based (stochastic testing), Standard-compliant (industry standards), Process-compliant (Agile/test-driven development), Methodical (fault/failure/check-list/quality characteristic-based), Dynamic (exploratory)/heuristic (error guessing), Consultative/directed (ask users), regression-averse approaches, etc.
Test control (management task, developing and applying corrective actions) responsibility of test leader (reprioritizing, change schedule, set entry criterion, review product risk, adjust testing scope), project manager (descoping of functionality, delaying release, continuing testing after delivery to production)

Quality software Customer (cost-effective), Developer (easy to develop/maintain), User (easy to use),
Development manager (profitable), Product quality vs Process quality

Quality improvement models Plan-Do-Check-Act (optimize process), Quality Improvement Paradigm/Experience Factory (building continuous improvement), SEI Capability Maturity Model (staged and continuous process improvement), SPICE (Automotive/bank/healthcare)

Software product quality ISO 9126 Portability (Installability/Conformance/Replaceability/Adaptability), Maintainability (Stability, Analyzability, Changeability, Testability), Efficiency (Time/resource behavior), Functionality (Suitability/Accuracy/Interoperability/Compliance/Security), Reliability (Maturity/Recoverability/Fault tolerance), Usability (Learnability/Understandability/Operability)

ISO 25010:2011 adds Compatibility (co-existence, interoperability), Security (confidentiality, integrity, non-repudiation, accountability, authenticity) and to Maintainability (reusability), Efficiency (capacity), Reliability (availability), Usability (user error protection, user interface aesthetics, accessibility), along with renaming several of them

Risk analysis Prioritize risks, Apportion time, Understand the risk

Risk types Project (planning), Product (quality) expressed usually as likelihood and impact, level of
risk = probability of risk occurring x impact if it did happen (qualitative e.g. low, medium, high or
quantitative e.g. 25%)
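
A worked example of level of risk = probability x impact (the numbers are invented):

    # Quantitative: 25% likelihood and impact scored 4 on a 1-5 scale.
    probability = 0.25
    impact = 4
    risk_level = probability * impact            # 1.0

    # Qualitative: map low/medium/high onto numbers and use the same product.
    scale = {"low": 1, "medium": 2, "high": 3}
    risk_level_qualitative = scale["medium"] * scale["high"]   # 6 -> treat as high risk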

Risk management Risk Identification (raw risk data), Risk Analysis (risk priority table), Risk
Response/contingency plans (strategies of avoidance, transference, mitigation, acceptance, individuals
to take responsibility, added to the project action list, continually monitoring and correcting)

Product risks (error prone software delivered, poor requirements, defect in software, poor software
quality, software not meeting requirements, system forces user to spend inappropriate amount of time,
system crashes, corrupt data, slow performance, incorrect documentation), Project risk (project
manager responsibility, supplier issues, organizational factors, specialist issues, documented in test plan,
risk register maintained by test leader)

Risks for testing: insufficient/not available/poor test basis, aggressive delivery, lack of test expertise,
poor test management processes, bad estimations/effort overrun/schedule delay, problems with test
environment, poor test coverage, etc

Test estimations Individual, Group, Metrics-based (data collection), Expert-based (experience of owners, business experts, test process consultants, developers, technical architects, analysts and designers, any application knowledgeable person), Factors (complexity/product characteristics, development process characteristics, expected outcome of testing)

Psychology of Testing show what system should and should not do, easiest test cases (faults remain)/hardest test cases (few faults remain), reduce the perceived risk, confidence, testing paradox (best way to build confidence is to destroy it (by testing/finding faults)), developer (prove code works, driven by delivery), independent tester (prove code does not work, driven by quality)

Successful tester: focus on delivering quality product, results presented in non-personal way, attempt to understand how others feel, confirm understanding after discussions, be constructive, must be able to communicate, Verbal and Written Communication/Passion/Technical Skills/Analytical Skills/Attitude/Productivity

Levels of Independent Testing Developer, Independent testers in development team, independent permanent test team, specialist testers e.g. usability/security/performance, outsourced test team or testers e.g. contractors/other organizations - unbiased

Technical skills needed in testing for - Test managers, Test analyst, Test automation experts, Test
performance experts, Database administrator or experts, User interface experts, Test environment
managers, Test methodology experts, Test tool experts, Domain experts

Test job (employed to do, one or more roles), Testing roles (activity(-ies), one or more role in project)
test leader, tester

Test leader: responsible for test strategy, collaborate with project management, takes notice of
organization testing policies, coordinates testing activities with other project activities, coordinates
design, specification, implementation, execution of tests, monitors test results and exit criteria, decides
on test environment implementation, maintains plans of test progress/results, determines what should
be automated and how/how much, responsible for test support tools, decides proper metrics, writes
test summary report

Typical tester: reviewing and contributing to test plans, analyzing, reviewing and assessing user
requirements, creating test specifications from test basis, setting up test environment, preparing and
acquiring/copying/creating test data, implementing tests on all test levels, executing/logging/evaluating
tests, using test administration/monitoring tools, automating tests, run tests, review tests from others

Code of Ethics (testers may have access to confidential privileged information) Public, Client and Employer, Product, Judgement, Management, Profession, Colleagues, Self

Test Analysis (review test basis, evaluate testability, identify/prioritize test conditions), Test Design
(predict how Software Under Test (SUT) behaves, design prioritize test scenarios/cases, design test sets,
identify necessary test data, design test environment set-up)

Test oracle expected result, states precisely what outcome from program execution will be for a test
case if possible
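
A sketch of an oracle as a function that states the expected outcome for any test input, here for a hypothetical sorting SUT (the trusted built-in sorted() serves as the oracle):

    def oracle(data):
        return sorted(data)                      # precise expected result for this input

    def verdict(sut, data):
        return "pass" if sut(data) == oracle(data) else "fail"

    # Example: verdict(my_sort, [3, 1, 2]) compares my_sort's output against the oracle.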

Specification-based techniques (black box, based on specification of software) (generalized) equivalence partition (equivalence classes, one test case per class, weak normal, strong normal, weak robust, strong robust, normal with only valid values, robust including invalid values, strong is Cartesian product from weak, one invalid value at a time not combined to avoid masking errors, sometimes better to analyze outputs than inputs), boundary value analysis/testing (ordered set of each range of
boundaries, consider domain of ordering e.g. accuracy, valid and invalid boundary values, can do 2
values on boundaries or 3 depending on risk of application, zero is special always should test if possible,
combined with equivalence partition robustness/worst-case test (for independent variables),
guidelines to discuss requirement, set fictional boundary, research technical limitations, look for
boundaries in rest of system), decision table testing/cause and effect graphing (used for complex
business rules/logical conditions, combinations of conditions with actions that should occur, coverage
criterion, number of columns/rules, True/False, collapse the decision table by combining columns, check
for redundancy (same columns)/inconsistency (action sets different for same conditions), avoid
combinatorial explosions, also use techniques of classification trees/pairwise testing ), state transition
testing (actions triggered by change state, use state transition diagram, finite state machines, create set
of test cases so each state visited once, all events triggered at least once, all paths are executed at least
once, all transitions are exercised at least once, N-switch (Chow) testing test cases designed to execute
all valid sequences of N+1 transitions, states use letters, transitions are numbered in switch coverage
diagram), all pairs/orthogonal array testing, use case testing (exercise real processes or business
scenarios, uncover defects in process flow, integration defects, helps with designing acceptance tests,
from use case diagram with actors and use case to make use case model, testable flows basic and
alternate/exception flows)
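
A hedged sketch of equivalence partitioning and boundary value analysis for a hypothetical input that must be an integer in the range 1..100:

    # Equivalence classes: one valid class (1..100), two invalid classes (<1 and >100);
    # one representative test value per class.
    representatives = {"valid": 50, "invalid_low": -5, "invalid_high": 250}

    # Boundary values (2-value style): each boundary plus its nearest invalid neighbour.
    boundary_values = [0, 1, 100, 101]
    # 3-value style would also add the value just inside each boundary: 0,1,2 and 99,100,101.

    def accepts(x):
        return 1 <= x <= 100                     # stand-in for the SUT's validation rule

    for x in list(representatives.values()) + boundary_values:
        print(x, accepts(x))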

Writing test cases: too many scenarios (use risk based testing), business case focused (model with
activity graph), technology focused (OO programming, model with sequence diagram)

Use case format Use Case Name, Scope, Level, Primary Actors, Stakeholders, Preconditions,
Postconditions, Main Success Scenario, Extensions/Exceptions, Special Requirements, Technology &
Data Variations List, Frequency of Occurrence, Miscellaneous

Structure-based techniques (white/glass box, based on structure of program/generate test cases from
code itself/pseudo-code) procedure, component, integration, system levels and statement (only
executable statements), decision (all branches are decisions but some decisions like question mark colon
?: operator or short circuit evaluation (&&,||) implicit without branch), branch (if/then/else, switch),
path coverages (exponential in number of conditional branches, presence of cycles must have limit to
prevent infinite, linearly independent paths or basis paths identified, McCabe Cyclomatic metric upper
bounds on number of independent paths V(G)=E-N+2 where G control flow graph, N number of nodes, E
number of edges) (test coverage is a quantitative measure and provides an estimation: covered cases/total number of cases * 100%)

Control flow graph (nodes are basic blocks/segments (each block is a sequence of statements, no jumps from or to the middle of a block, once entered it is guaranteed to execute to the end))
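
A small worked example of the cyclomatic number on a function with a single if/else: the control flow graph has N = 4 basic blocks and E = 4 edges, so V(G) = 4 - 4 + 2 = 2 linearly independent paths (one per branch):

    def classify(x):
        if x >= 0:                   # block 1: condition
            sign = "non-negative"    # block 2
        else:
            sign = "negative"        # block 3
        return sign                  # block 4: join/exit

    # Control flow graph as adjacency lists (nodes are the basic blocks above).
    cfg = {1: [2, 3], 2: [4], 3: [4], 4: []}
    N = len(cfg)
    E = sum(len(successors) for successors in cfg.values())
    V_G = E - N + 2                  # 4 - 4 + 2 = 2
    # Two test cases, e.g. x = 1 and x = -1, already give full branch and path coverage here.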

Experience-based techniques error guessing, exploratory testing, checklist-based testing, attack testing

Defect-based techniques taxonomy lists of root causes, defects and failures e.g. Beizer (defect), Kaner
(general), Binder (OO), Vijayaraghavan (e-Commerce)

Choosing test techniques Internal Factors (models used, tester knowledge/experience, likely defects,
test objective, documentation, life-cycle model, previous experience of defects), External Factors (level
and type of risk, customer/contractual requirements, type of system, regulatory requirements, time and
budget)

Test Implementation (create test cases, test data, expected results, test procedures, test harnesses,
automated test scripts, test suites, verify test environment), Test Execution (execute test procedures
manually or using tools, record Software Under Test with test tools and testware, Logging outcome,
reporting discrepancies, repeating test activities with re-testing, regression testing)

Test harness test execution engine and test script repository providing test environment with stubs
and drivers to execute tests.

Test suite a set of test cases.

Test log Chronological order of test execution events.

Re-testing/confirmation testing previously failed tests run again to test for pass.

Regression testing previously passed tests run again to test not failing, misnomer: more like anti-
regression or progression testing.
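
A minimal sketch tying these terms together: a tiny driver that runs a suite, keeps a chronological log, and separates re-testing from regression testing (names and structure are assumptions, not a real framework):

    import datetime

    def run_suite(test_suite):                   # test_suite: list of (name, callable) pairs
        log, failed = [], []
        for name, test in test_suite:
            try:
                test()
                outcome = "pass"
            except AssertionError:
                outcome = "fail"
                failed.append((name, test))
            log.append((datetime.datetime.now(), name, outcome))   # chronological test log
        return log, failed

    # Re-testing/confirmation: after a fix, run only the previously failed tests again.
    # Regression testing: re-run the full previously passing suite to catch new failures.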

Automating tests is not trivial, takes 2 to 10 times longer than manual, cannot automate everything,
must plan what to automate.

Static testing (review, static analysis) code not executed. Dynamic testing code executed.

Test review stages Planning (select personnel, allocate roles, define entry/exit criteria, select parts of
document for review), Kick-Off (distributing documents, explain objectives, process and documents,
check entry criteria), Individual Preparation (reviewers perform review, defects by severity
critical/major/minor and type error/conflict/missing/extra/unclear, questions, external issues, praise),
Review Meeting (formal and informal process of providing defect list, desk(top) check/informal review
avoids formal meeting just list of defects given, checking fulfillment of exit criteria, Process: moderator
distributes copies of agenda, describes purpose of meeting, polls reviewers for time spent and general
comments, identified findings are presented, recorder notes each finding, reads list of findings, team
determines outcome, author collects and marks packages with defects), Rework (correcting and
rewriting deliverable, author marks where changes made), Follow-up (process of checking if bugs
corrected, documents distributed and corrected again, for formal review moderator checks exit criteria
compliance), Final Report (by moderator, record review process, all activities including lessons learned
stored, statistical analysis addresses issue)

Review Objectives: find defects, gain understanding, generate discussion, make decision by consensus,
Steps: Study document, identify issues/problems and inform author, author updates document, Roles
and Responsibilities: Manager (decides what to be reviewed, ensures time allocation, determines
objectives met), Moderator (review leader plans review, runs meeting, does follow-up), Author (writer
or person with chief responsibility, must fix defects), Reviewer (individual with specific
technical/business knowledge), Scribe (recorder, documents all issues/defects/problems/open points in
meeting)

Review process types Informal review (desk check, main purpose to find defects, simple, low
overhead, only review meeting/rework), Walkthrough (step-by-step presentation by author to find
anomalies, consider alternatives, evaluate conformance to standards, gather information and establish
common understanding, detailed study not always required, usually to check scenarios and program
code, must have planning though individual preparation/follow up become optional), Technical review
(peer group discussion activity, achieving consensus on technical approach, conforms to specifications,
adheres to regulations/standards/guidelines/plans, changes are properly implemented, changes affect
only those system areas identified by change specification, only kick-off and follow-up optional, rest of
steps mandatory), Inspection (most formal peer review, examination of documents to detect defects in
specifications, specified quality attributes, regulations/standards/guidelines/plans deviations, based on
rules and checklists, uses entry and exit criteria, findings with metrics are essential, significant
investment, not simple process since all steps mandatory) from low formality to high formality

Software reviews Management, technical, inspection (cost of quality, person hours the highest measurable expense, inspect a percentage of code, inspect critical portions, when done correctly is valuable), walkthroughs, audit

Automatic static analysis syntactic, data use, control flow, interface, program slicing, path analyses

Test reporting and closure evaluating exit criteria (check test logs), reporting (test summary report for
stakeholders, communicate findings, analysis, assessment of defects, economic benefit of testing,
outstanding risks, level of confidence), test closure activities (check which planned deliverables have been delivered,
close incident reports, finalize and archive testware, handover testware to maintenance, analyze lessons
learned)

Test documentation Test Plan, Test Design Specification, Test Case Specification, Test Procedure
Specification, Test Item Transmittal Report, Test Log, Test Incident Report, Test Summary Report

Test Plan document Plan Identifier, Test Items, Risk Issues, Features to be Tested, Features not to be
Tested, Test Approach, Pass/Fail Criteria, Suspension Criteria, Test Deliverables, Environmental
Requirements, Staffing/Training Needs, Schedule of Test, Planning for Risks, Approvals

Test Design Specification document Test design specification identifier, Features to be tested,
Approach refinements, Test identification, Features pass/fail criteria

Test Case Specification template Test Case specification identifier, Test items (features and
conditions), Input specifications, Output specifications, Environmental needs, Special procedural
requirements, Inter-case dependencies

Test Procedure Specification Test procedure specification identifier, Objective/purpose, Special requirements, Procedure steps: Log, Set-up, Start, Proceed, Measure, Shutdown, Restart, Stop, Wrap-up, Contingency

Test Summary Report Specification template Test summary report identifier, Summary, Variances,
Comprehensive assessment, Summary of results, Evaluation, Summary of activities, Approvals

Test monitoring checking test status metrics including: test execution (number of cases
pass/fail/blocked/on hold), defect, requirement traceability, test coverage, miscellaneous (tester
confidence, dates, milestones, cost, schedule, turnaround time)
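
A tiny sketch of deriving one execution-status metric from raw outcomes (the counts are invented):

    from collections import Counter

    outcomes = ["pass", "pass", "fail", "blocked", "pass", "on hold"]
    counts = Counter(outcomes)                   # pass: 3, fail: 1, blocked: 1, on hold: 1
    executed = counts["pass"] + counts["fail"]
    pass_rate = counts["pass"] / executed * 100  # 75.0% of executed cases passed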

Test control based on test monitoring - prioritizing, revisiting, reorganizing, reprioritizing

Configuration management change (for military and government change control board (CCB)), version
(control, traceability, repository/check-in/check-out/workspaces/branches centralized e.g. SVN and
CVS or distributed e.g. GIT and Mercurial version control system (VCS)), build (system) (development
system, build server, target environment), release management (deployment 4-tier: Development,
Testing, Staging, Production)
Configuration management terms Software configuration item (SCI), Configuration control, Version,
Baseline, Codeline, Mainline, Release, Workspace, Branching, Merging, System building

Incident/defect management process of recognizing, investigating and taking action on incidents (identification/classification/investigation and analysis/resolution and recovery/closure), incident
recording (report identifier, summary, description, impact, investigation details, metrics, status and
history, comments, recommendations, conclusions), other fields (Title (categories: Missing, Inaccurate,
Incomplete, Inconsistent, Incorrect), Test environment, Reproduction Steps with Inputs, Actual Results,
Expected Results, Defect severity (blocking/critical/high/medium/low or other scale
critical/major/moderate/minor/cosmetic), defects prioritization (immediate, next release, on occasion,
open (not planned for now)), quadrants low vs high, non critical vs critical, Agile (1-5 low, 6-12
moderate, 15-20 serious, 25 critical), defect management process (issue, new then valid (to disputed),
work in progress, fixed, closed, back to issue) with steps recognition, investigation, action, disposition
and statuses (new, open, re-open wait, reject, fixed, included to build, verified, closed, re-test)
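
One way to sketch the status workflow above as a table of allowed transitions (the status names come from the notes; the exact transition set is an assumption for illustration):

    allowed_transitions = {
        "new": ["open", "reject"],
        "open": ["fixed", "re-open wait"],
        "fixed": ["included to build"],
        "included to build": ["re-test"],
        "re-test": ["verified", "re-open wait"],
        "verified": ["closed"],
        "re-open wait": ["open"],
        "reject": ["closed"],
        "closed": [],
    }

    def can_move(current, target):
        return target in allowed_transitions.get(current, [])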

Test management related tools test management tool, requirement management tool, incident
management tool, configuration management tool

Static testing tools Review process support tool, Static analysis tool, Modelling tool

Test specification tools Test design tool (computer aided software engineering (CASE)), Test data
preparation tool, simulators(behavior)/emulators (inner workings)

Test execution related tools Test execution tool, Test harness/Unit test frameworks, Test comparator,
Coverage measurement tool, Security testing tool

Performance and test monitoring tools Dynamic analysis, Performance testing, Load testing, Stress
testing, monitoring tools

Other testing tools spreadsheet, word processor, E-mail, Back-up and restore utilities, SQL, Project
planning tool, Debugging tool, DevOps tools
