Professional Documents
Culture Documents
Version 2.5
Fundamentals of testing
Testing Throughout the Software Lifecycle
Static Techniques
Test Design Techniques
Test Management
Tool Support for Testing
Fundamentals of Testing
Areas Covered
Defect: a flaw in a component or system that can cause the component or system to fail to perform its required function. An error can manifest as a defect, and a defect may result in a failure.
Hetzel 1998
Test Objectives
There are different test objectives:
To find defects
To gain confidence about the level of quality and to provide information
To prevent defects
Both dynamic testing and static testing can be used as a means for
achieving these objectives
They provide information in order to improve:
The system to be tested
The development and testing processes
Live operations (e.g. how long it takes for a process to run)
Test Objectives
Designing tests early in the project life cycle helps to prevent defects from being introduced into the code
Reviews of documents throughout the lifecycle (e.g. requirements and
design) also help to prevent defects appearing in the code. More
about this when we cover Static techniques
Test Objectives
The objectives of testing can vary depending on the stage of testing being conducted, e.g.:
Test Objectives
Testing v Debugging
Debugging and testing are different:
Testing can show failures that are caused by
defects
Debugging identifies the cause of a defect,
repairs the code and checks that the defect
has been fixed correctly
Testing then ensures that the fix does
indeed resolve the failure
The responsibility for each activity is very
different, i.e.
Testers test
Developers debug
Myers 2004
EDS ISTQB Testing Foundation Course Version 2.5
Slide 36 EDS
General Testing Principles
If we build a system and, in doing so, find and fix defects ...
It doesn't make it a good system
Even after defects have been resolved it may still be unusable and/or may not fulfil the users' needs and expectations
Ref: Myers, The Art of Software Testing, John Wiley and Sons, 1979 (2nd edition 2004)
Fundamental Test Process
Myers - 1979
Independent testing
The right mindset could enable Developers to test the code
However, passing this responsibility to trained and professional
testing resources has many benefits (such as higher defect find
rates)
Authors tend to bring across assumptions they made when developing the software, and are less likely to write tests that show faults in their own software (human nature)
With testing performed by independent testers, testing effort is
focused and not compromised by development effort and bias
It is generally believed that objective independent testing is more
effective
Independent testing
There are several levels of Independence (from Low to High)
Tests designed by the person(s) who wrote the software under
test
Tests designed by another person(s) (e.g. from the
development team).
Tests designed by a person(s) from a different organizational
group (e.g. an independent test team).
Tests designed by a person(s) from a different organization or
company (e.g. outsourcing to an in-house or external test
specialist organisation)
We learned why you can't test everything, and when to stop testing, through risk analysis, prioritisation and the use of exit criteria
Its main objectives, i.e. to find and prevent defects and to gain confidence in system quality
How we meet these Objectives
How they can vary by Test Level
That it is conducted before and after the code is delivered
What activities it comprises
That Debugging and Testing are different
The V-Model
[Diagram: the V-model, mapping development phases (definition, design, coding) to test levels (component, integration, system, acceptance) via test plans and test specifications.]
The levels of development and testing shown in the model vary from project to project
For example, there may be additional test levels, such as System Integration Testing, sitting between System Testing and Acceptance Testing (more on these test levels later)
The work products coming out from any one development level may
be utilised in one or more test levels
For example, whilst the prime source for Acceptance testing is the
Business Requirement, the System Requirements (e.g. Use Cases)
may also be needed to support detailed test design
Watkins - 2001
Iterative Development
Establish Requirements
Design the System
Build the System
Test the System
As increments are developed and tested, the system grows and grows; there is a need for more testing, with regression testing paramount
Agile development
The aim is to deliver software early and often
Rapid production and time to market
Can handle (and anticipates) changing requirements throughout all development and test phases
[Diagram: User Requirements are implemented in Code and verified by Acceptance Testing.]
Test levels should be adapted depending on the nature of the project. It may be better to combine test levels, e.g. with COTS testing.
Component Testing
Integration Testing
System testing
Acceptance Testing
Test Planning
To consider: should the integration testing approach be top-down or bottom-up?
Top-down testing
[Diagram: component hierarchy with P at the top calling Q and R, which in turn call S, T, U and V. In top-down testing, P is the component under test and its subordinates are replaced by stubs.]
Top-down testing
Pros:
provides a limited working system early in the design process
depth-first integration demonstrates end-to-end functions early in the development process
early detection of design errors through early implementation of the design structure
early testing of major control or decision points
Cons:
stubs only provide limited simulations of lower level components and could produce spurious results
breadth-first means that higher levels of the system must be artificially forced to generate output for test observations
Bottom-up testing
[Diagram: bottom-up testing of the same hierarchy. S, T, U and V are the components under test; Q and R act as drivers for their components, and P acts as the driver for Q and R.]
Component Integration Testing
Bottom-up testing
Pros:
drivers are used instead of upper level modules to simulate the environment for lower level modules
necessary for critical, low level system components
testing can be observed on the components under test from an early stage
Cons:
unavailability of a demonstrable system until late in the development process
late detection of system structure errors
Context
Definition
Myers - 2004
Non-functional requirements
The non-functional aspects of a system are all the attributes other than
business functionality, and are as important as the functional aspects.
These include:
the look and feel and ease of use of the system
how quickly the system performs
how much the system can do for the user
It is also about:
how easy and quick the system is to install
how robust it is
how quickly the system can recover from a crash
Acceptance Testing
Acceptance testing: Formal testing with respect to user
needs, requirements, and business processes conducted to
determine whether or not a system satisfies the acceptance
criteria and to enable the user, customers or other
authorized entity to determine whether or not to accept the
system.
Definition
Usually the responsibility of the customer/end user, though other stakeholders may be involved. The customer may sub-contract the acceptance test to a third party
Goal is to establish confidence in the system/part-system
or specific non-functional characteristics (e.g. performance)
Usually for ensuring the system is ready for deployment into
production
May also occur at other stages, e.g.
Acceptance testing of a COTS product before System Testing
commences
Acceptance testing a component's usability during Component testing
Acceptance testing a new significant functional
enhancement/middleware release prior to deployment into
System Test environment.
The objective of OAT is to confirm that the Application Under Test (AUT) meets its operational requirements, and to provide confidence that the system works correctly and is usable before it is formally "handed over" to the operational user. OAT is conducted by one or more Operations Representatives with the assistance of the Test Team. (Watkins 2001)
Both address acceptance testing for systems that are tested before and after being moved to a customer's site
Definitions
Functional Testing
Non-Functional Testing
Structural Testing
Confirmation & Regression Testing
Definitions
Definitions
May be performed at all test levels (not just non-functional system testing)
Re-testing: Testing that runs test cases that failed the last time
they were run, in order to verify the success of corrective
actions
You also need to ensure that the modifications have not caused unintended side-effects elsewhere and that the modified system still meets its requirements: this is Regression Testing
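The relationship between re-testing and regression testing can be sketched in Python; the suite structure and test names below are illustrative, not part of the course material:

```python
def confirm_and_regress(suite, previously_failed):
    """Re-run previously failed tests (confirmation/re-testing), then the
    full suite (regression). suite maps test name -> zero-arg callable
    returning True for a pass."""
    # Confirmation (re-testing): did the corrective action work?
    if not all(suite[name]() for name in previously_failed):
        return "fix not confirmed"
    # Regression: any unintended side-effects elsewhere?
    if not all(test() for test in suite.values()):
        return "regression detected"
    return "fix confirmed, no regression"
```

For example, if a hypothetical `login` test failed last cycle, `confirm_and_regress(suite, ["login"])` first re-tests `login` and only then sweeps the rest of the suite.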
Static Techniques
Topics
Why Review
When and What to review
What do Reviews find
Benefits of reviews
[Chart: the relative cost multiples of correcting requirement errors, design errors and coding errors (faults) rise the later in the lifecycle they are found and corrected.]
Typical defects that are easier to find in reviews than in dynamic testing
are:
deviations from standards
requirement defects
design defects
insufficient maintainability
incorrect interface specifications.
Topics
Informal Reviews
Formal Review Types
Formal Review Process
Formal Review Roles and Responsibilities
Other Review Types
Key Success factors
Planning
Kick-off
Review Overview optional
Preparation
Review Meeting
Rework
Follow-up
Repeat Review - optional
The author must resolve all defects found during the review by reworking
the material as recommended by the review report
Note, the cost of rework is NOT included in the cost of reviews
Check the corrections to the material and account for all recorded defects
If necessary, schedule a repeat review for the corrected material
Inform management of the status of the corrected material
Add the defect data from the review to the project statistics database
enables process improvement!
Complete and sign the review report and forms (Inspections)
Ensure exit criteria met
Topics
Definition
Description
Value
Types of Defects found
Use of Tools
Myers - 1979
[Diagram: from source documentation, test conditions (with priorities) are identified; test cases (inputs and expected outputs) are designed from the conditions; test cases are sequenced into test procedure specifications, manual test scripts or automated test scripts; these feed the test execution schedule.]
Equivalence Partitioning
Boundary Value Analysis
Decision Table Testing
State Transition Testing
Use Case Testing
Other black box test techniques
Equivalence Partitioning
Equivalence Partitioning
[Diagram: a number line for the range 1..100. Values such as -1 and 0 fall in the lower out-of-range partition; 1, 19, 37, 48, 53, 65, 87, 99 and 100 fall in range; 101 and 1000 fall in the upper out-of-range partition.]
Equivalence Partitioning
The numbers fall into partitions where each would have the same, or equivalent, result, i.e. an Equivalence Partition (EP) or Equivalence Class
EP says that by testing just one value we have tested the partition (typically a mid-point value is used). It assumes that every value in a partition is processed in the same way
Equivalence Partitioning
In EP we must identify valid equivalence partitions and invalid equivalence partitions where applicable (typically in range tests)
The valid partition is bounded by the values 1 and 100
Plus there are 2 invalid partitions
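A minimal Python sketch of this partitioning; `accept` is a hypothetical implementation of the 1..100 rule, and the representative values are one possible choice:

```python
def accept(value: int) -> bool:
    """Hypothetical rule under test: valid values are 1 to 100 inclusive."""
    return 1 <= value <= 100

# One representative value per partition suffices under EP:
# below-range invalid, valid mid-point, above-range invalid.
ep_cases = [(-5, False), (50, True), (150, False)]

def run_ep_cases() -> bool:
    """True when every partition representative behaves as expected."""
    return all(accept(value) is expected for value, expected in ep_cases)
```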
Equivalence Partitioning
IF Value >= 1 AND Value <= 100 THEN .
[Diagram: the same number line, with 1..100 marked as the valid partition and the regions below 1 and above 100 marked as the two invalid partitions.]
Black Box Test Techniques
Equivalence Partitioning
Time would be wasted by specifying test cases that covered a range of
values within each of the three partitions, unless the code was designed in
an unusual way
There are more effective techniques that can be used to find bugs in such
circumstances (such as code inspection)
EP can help reduce the number of tests from a list of all possible inputs to a
minimum set that would still test each partition
Equivalence Partitioning
If the tester chooses the right partitions, the testing will be accurate and
efficient
If the tester mistakenly thinks of two partitions as equivalent and they are
not, a test situation will be missed
On the other hand, if the tester thinks two objects are different and they are not, the tests will be redundant
BVA operates on the basis that experience shows us that errors are most
likely to exist at the boundaries between partitions and in doing so
incorporates a degree of negative testing into the test design
BVA Test cases are designed to exercise the software on and at either side
of boundary values
find the boundary and then test one value above and below it
ALWAYS results in two test cases per boundary for valid inputs and three test cases per boundary for all inputs
Inputs should be in the smallest significant values for the boundary (e.g. a boundary of a > 10.0 should result in test values of 10.0, 10.1 & 10.2)
Only applicable for numeric (and date) fields
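The on/either-side rule can be captured as a small helper; this is an illustrative sketch for integer boundaries with a step of 1:

```python
def bva_values(boundary: int, step: int = 1) -> list[int]:
    """Return the three boundary value analysis inputs for a boundary:
    one step below, on the boundary, and one step above."""
    return [boundary - step, boundary, boundary + step]

# For the valid range 1..100, tests cluster around both boundaries:
lower_tests = bva_values(1)    # around the lower boundary
upper_tests = bva_values(100)  # around the upper boundary
```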
[Table: example decision table with three rules (columns); Response 1: Y, Y, N; Response 2: Y, N, Y; Response 3: N, Y, N.]
Each column of the table corresponds to a business rule that defines a unique
combination of conditions that result in the execution of the actions associated
with that rule
Kevin is a 62-year-old non-smoker who swims twice a week and plays tennis. He has no history of heart attacks in his family
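A decision table can be executed directly as data. The conditions and premium bands below are invented to illustrate the mechanism; they are not the course's actual table:

```python
# Each rule pairs a combination of condition outcomes with an action.
rules = [
    ({"smoker": False, "over_60": True,  "family_history": False}, "standard"),
    ({"smoker": True,  "over_60": True,  "family_history": False}, "loaded"),
    ({"smoker": False, "over_60": False, "family_history": False}, "discounted"),
]

def premium_band(applicant: dict) -> str:
    """Find the first rule whose conditions all match the applicant."""
    for conditions, action in rules:
        if all(applicant.get(key) == value for key, value in conditions.items()):
            return action
    return "refer"  # no rule matched: refer for manual underwriting

# Kevin: 62-year-old non-smoker with no family history of heart attacks
kevin = {"smoker": False, "over_60": True, "family_history": False}
```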
A transition sits between a start state and an end state and is triggered by an event/action
[Diagram: gearbox example. Change Up/Accelerate moves from 1st Gear to 2nd Gear and from 2nd Gear to 3rd Gear; Change Down/Decelerate moves back down.]
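The gear transitions can be modelled as a lookup table, giving executable state transition tests (a sketch; only the slide's transitions are included):

```python
# (current state, event) -> next state; pairs not listed are invalid.
transitions = {
    ("1st Gear", "Change Up/Accelerate"): "2nd Gear",
    ("2nd Gear", "Change Up/Accelerate"): "3rd Gear",
    ("3rd Gear", "Change Down/Decelerate"): "2nd Gear",
    ("2nd Gear", "Change Down/Decelerate"): "1st Gear",
}

def drive(state: str, events: list[str]) -> str:
    """Apply a sequence of events, failing on any invalid transition."""
    for event in events:
        key = (state, event)
        if key not in transitions:
            raise ValueError(f"invalid transition: {event!r} in {state!r}")
        state = transitions[key]
    return state
```

A test case that exercises every valid transition once, e.g. up, up, down, down from 1st Gear, ends back in 1st Gear; a Change Down in 1st Gear is rejected as an invalid transition.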
[Diagram: state model for a ticket reservation. States include Reservation Made, Reservation Paid For, Ticket Issued, Ticket Received and Cancelled; events such as paying for the reservation, issuing the ticket, changing one's mind and cancelling the reservation (with a refund issued) move between them, with outputs such as Show Options and Show Reservation along the way.]
[Table: state transition table. Rows are the current states (SS, 1, 2, 3, 4, ES); columns are the events (A to F); each cell gives the next state, e.g. SS moves to 1, 1 to 2, 2 to 1 or 3, 3 to 4, and 4 to ES.]
Syntax testing
test cases are prepared to exercise the rule governing the format of
data in a system (e.g. a Zip or Postal Code, a telephone number)
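Syntax test cases can be generated around a format rule such as a regular expression. The simplified UK-postcode pattern below is an illustration, not a complete specification:

```python
import re

# Simplified postcode format rule (illustrative, not exhaustive).
POSTCODE = re.compile(r"[A-Z]{1,2}\d{1,2}[A-Z]? \d[A-Z]{2}")

def valid_postcode(text: str) -> bool:
    """True when the whole string matches the format rule."""
    return POSTCODE.fullmatch(text) is not None

# Syntax tests: conforming inputs plus deliberate violations of the rule.
cases = [("SW1A 1AA", True), ("M1 1AE", True),
         ("12345", False), ("SW1A1AA", False)]
```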
Random testing
Statement Testing
Decision Testing
Assessing Completeness (Coverage)
Other White Box test techniques
Example 2
1. Read A
2. If A > 40 Then
3.   A = A * 2
4. End If
5. If A > 100 Then
6.   A = A - 10
7. End If
[Control flow graph with nodes 1 to 7 shown alongside.]
Example 1
1. Read vehicle
2. Read colour
3. If vehicle = Car Then
4.   If colour = Red Then
5.     Print Fast
6.   End If
7. End If
[Control flow graph with nodes 1 to 7 shown alongside.]
White Box Test Techniques
Decision Testing
Example 2
1. Read A
2. If A > 40 Then
3.   A = A * 2
4. End If
5. If A > 100 Then
6.   A = A - 10
7. End If
[Control flow graph with nodes 1 to 7 shown alongside.]
1. Read A
2. If A > 40 Then
3.   A = A * 2
4. End If
5. If A > 100 Then
6.   A = A - 10
7. End If
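Rendering the pseudocode in Python makes the coverage argument concrete; one test with A = 120 executes every statement, while decision coverage additionally needs the False outcome of both decisions (e.g. A = 30):

```python
def process(a: int) -> int:
    if a > 40:       # decision 1
        a = a * 2
    if a > 100:      # decision 2
        a = a - 10
    return a

# A = 120: both decisions True -> all statements execute (240, then 230).
# A = 50: decision 1 True, decision 2 False (doubled to 100, not > 100).
# A = 30: both decisions False -> a is returned unchanged.
```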
White Box Test Techniques
Assessing Completeness (Coverage)
1. Read bread
2. Read filling
3. If bread = Roll Then
4.   If filling = Tuna Then
5.     Price = 1.50
6.   Else
7.     Price = 1.00
8.   End If
9. Else
10.   Price = 0.75
11. End If

Based on the following test set:
Test 1: bread = Roll, filling = Tuna
Test 2: bread = Sandwich, filling = Ham
What is the test Statement Coverage and Decision Coverage?
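The question can be checked by implementing the pricing logic in Python. With the two tests given, the Price = 1.00 statement never executes, so statement coverage is incomplete and only three of the four decision outcomes are taken (75% decision coverage, under the usual counting):

```python
def price(bread: str, filling: str) -> float:
    if bread == "Roll":
        if filling == "Tuna":
            return 1.50   # executed by (Roll, Tuna)
        return 1.00       # NOT executed by the slide's test set
    return 0.75           # executed by (Sandwich, Ham)

# Adding a (Roll, Ham) test executes the remaining statement and the
# remaining decision outcome, completing both coverage measures.
```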
White Box Test Techniques
Definition
Error Guessing
Exploratory Testing
[Diagram: exploratory testing cycle: risk analysis feeds an exploratory charter; test sessions produce notes and end with a debriefing.]
We learned about
Identifying Test Conditions
Designing Test Cases from the Test Conditions
Creating Test Procedure Specifications to sequence our Test Cases
Creating Test Execution schedules to define the order in which the test scripts
are executed, when they are to be carried out and by whom
The importance of traceability to requirements and specification of expected
results
We learned about the difference between Black and White Box testing
White Box (Structure-based) Testing is based upon the structure of the program
code
Black Box (Specification Based) Testing is without reference to the internal workings of the program code
The reasons why both are useful
Statement Testing
Decision Testing
For all Black and White box techniques we learned why they are of use
and for which test levels they are typically applied
Test Management
Test Organisation
Test Planning and Estimation
Test Progress Monitoring and Control
Configuration Management
Risk and Testing
Incident Management
Summary
Test Independence
Testing Roles Within the Team
The Test Leader
The Tester
Levels of Independence
More effective for someone other than the developer to test the system
More impartial
No preconceived ideas about what requires testing
No bias or emotional attachment
[Diagram: test levels and who performs them. Component Testing is performed by Developers; System Integration Testing, System Testing and Acceptance Testing are performed by Independent Testers.]
Benefits
Independent testers see other and different defects, and are unbiased
An independent tester can verify assumptions people made during
specification and implementation of the system
Usually a Cost saving
Better skills = more effective testing and fewer defects getting into production
For Third Party test outsourcing, better to rent than to own
Drawbacks
Isolation from the development team (if treated as totally independent).
Independent testers may be the bottleneck as the last checkpoint
Developers lose a sense of responsibility for quality
Can be a greater cost; need to consider viability
For Third Party test outsourcing, the project carries the risk
Test Planning
Test Planning Activities
Exit Criteria
Test Estimation
Test Approaches
All projects require a set of plans and strategies which define how the testing will be conducted.
There are a number of levels at which these are defined:
Approach - Defining the overall approach of testing (the test strategy), including the
definition of the test levels and entry and exit criteria.
Integrating and coordinating the testing activities into the software life cycle
activities: acquisition, supply, development, operation and maintenance.
Making decisions about:
what to test
who (i.e. what roles) will perform the test activities
when and how the test activities should be done and when they should be stopped (exit
criteria see next slides)
how the test results will be evaluated
Assigning resources for the different tasks defined.
Testware definition- Defining the amount, level of detail, structure and templates
for the test documentation.
Selecting metrics for monitoring and controlling test preparation and execution,
defect resolution and risk issues.
Process - Setting the level of detail for test procedures in order to provide enough
information to support reproducible test preparation and execution.
When do you stop testing? Run out of time? Run out of budget? Boss says stop? The business tells you it went live last night!
Black - 2002
One method of classifying the way testing is done is by looking at when the
bulk of testing is carried out
Approaches: preventative, reactive, or combined
Need to know the status of the testing project at any given point in time
Need to provide visibility on the status of testing to other stake holders
Need to be able to measure your testing against your defined exit criteria
Need to be able to assess progress against
Planned schedule
Measure how you are tracking against your defined budget
Metrics should be collected during and at the end of a test level in order to
assess:
The adequacy of the test objectives for that test level.
The adequacy of the test approaches taken.
The effectiveness of the testing with respect to its objectives.
However:
What are the risks of the parts that have failed?
Does this chart account for all the testing scheduled? Is there more to come?
Symptoms of Poor CM
Configuration Items and their control
Project Risk: A risk related to management and control of the (test) project.
Supplier Issues
Contractual Issues
Third party goes in liquidation or fails to deliver
Organisational Issues
Skills and Staff shortages
Training and support issues
Communication/Political Issues, e.g. between testers and other project teams
Technical
No or poor requirements
Quality of the design or code
Architectural solution under question
Risks can help decide where we should start testing or where we may need to do more testing
Risks also help us analyse our current state and through test monitoring we can
determine if a system is ready to be implemented
Risks can also drive the number of test levels and determine the techniques to use
Weymouth
2006
Testing can support the identification of new risks identified during test
planning
Testing is used to reduce the risk of an adverse effect occurring, or to
reduce its impact
Testing provides feedback about the residual risk, through measuring the effectiveness of critical defect removal and contingency plans
In analysing, recording and managing the Product and Project risks the Test
Manager is following well defined project management principles
In a risk-based approach we can not only determine our test prioritisation
but also the test techniques to use
[Matrix: likelihood (vertical, low to high) against impact (horizontal, low to high). Quadrant A (high likelihood, high impact): Should Test, formal test specification, 100% statement coverage. Quadrant C (high likelihood, low impact): Won't Test, informal test specification, error guessing. Quadrants B and D occupy the lower-likelihood half.]
Definition
Basic principles
Benefits
Attributes of an Incident
Tracking and Analysis
Writing Good Incident Reports
An incident: expected does not equal actual!
Basic Principles
Incidents:
Provide developers and other parties with feedback about the problem to enable
identification, isolation and correction as necessary
Provide test leaders a means of tracking the quality of the system under test and
the progress of the testing
Provide ideas for test process improvement
Summary
one or two lines which provide an overview of the incident and its severity and
impact. Be sharp and to the point.
Steps to reproduce
provide the exact steps that were undertaken to create the incident. Be as
concise as possible, but make sure you include EVERY step
Getting these steps right makes it easier for the developers to reproduce the incident; it reduces the "it works on my machine" phenomenon
Isolation
Is the incident repeatable? What particular factors affect the ability to reproduce the incident? For example: "I observed the error on the following platforms: IE5, NS7"
Bad Example
Summary
There were a number of errors on the add customer screen
Steps to reproduce
1. Opened the add customer screen
2. Entered a new customer
3. Pressed add
4. Got error message
Isolation
I tried on a few different branches and it worked on most of them
Good Example
Summary
The error message "cannot find object" (see attached screen shot) was displayed when trying to add a new customer to the system using screen ADD_CUST.
Steps to reproduce
1. Opened the add customer screen using the menu
2. Entered a new customer (details are attached in spreadsheet)
3. Selected customer as corporate
4. Added to branch Littleton
5. Pressed add
6. Got error message "cannot find object"
Isolation
I tried on a few different branches and the system worked without issue when the
branch was classified as open. The Incident only occurred when the branch was
classified as closed
Test Teams should review requirements early in the lifecycle to check for
consistency, testability and to allow test cases to be constructed
Summary (1)
Firstly, we looked at types of test tool and how they are classified
Summary (2)
Test management tools: test management, requirements management, incident management, configuration management
Static testing tools: review process support, static analysis, modelling
Summary (3)
Then we looked at the effective use of tools:
Benefits of tools:
Save time, thorough testing, reduce repetitive tasks, etc.
And the risks:
Underestimating time and effort, Over-reliance, etc.
Special considerations for:
Test execution tools, performance tools, static analysis tools and
test management tools
And finally, we looked at how to introduce a tool:
Main principles (evaluation, assessments, etc.)
Pilot project objectives
Success factors
EDS is a registered mark and the EDS logo is a trademark of Electronic Data Systems
Corporation.
EDS is an equal opportunity employer and values the diversity of its people.
Copyright 2006 Electronic Data Systems Corporation. All rights reserved.
Presentation and Course owner Paul Weymouth, UKIA Testing ADU Paul.Weymouth@eds.com
Presentation and Course contributors:
Paul Weymouth, Testing Architect UKIA Testing ADU
Mark Otter, Senior Test Consultant Australia South ADU
Dave Broughton, Testing Architect UKIA Testing ADU