TEST MANAGEMENT
Contents
Test Management - Basic Principles
The Test Process:
  a. Set Test Objectives
  b. Develop Test Plan
  c. Execute Tests
  d. Summarize and Report Results
Test Estimation & Forecasting
Test Tracking - Use of S Curves
Defect Tracking
People Issues in Software Testing
Key artifacts: Test Plan, Test Approach, Test Case.

Roles
Role: Test Manager
Activities: prepare the test plan
Artifacts: Test Plan
Step 1 - Set Test Objectives
Define the test objectives and the test strategy.
A test objective is what the test is to validate, e.g. performance, functionality, etc. A test strategy is the approach taken, depending on size, criticality, etc.
- How critical is the system to the organization?
- What are the trade-offs?
- What type of software is it?
- What type of technical environment?
- What is the project's scope?
- Who will conduct testing?
- What are the critical success factors?
Risk Analysis
A risk-based test strategy will be fruitful only if all risks are identified at an early stage. Risk Exposure = Probability of failure * Impact.
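A minimal sketch of how this formula can drive test prioritization; the risk items and their probability/impact values below are hypothetical.

```python
# Risk Exposure = probability of failure * impact (cost of failure).
# Hypothetical risk items: (name, probability 0..1, impact in cost units).
risks = [
    ("Payment processing fails", 0.10, 900),
    ("Report layout misaligned", 0.40, 50),
    ("Login rejects valid users", 0.05, 600),
]

# Rank by exposure, highest first, to decide where to test hardest.
for name, probability, impact in sorted(risks, key=lambda r: r[1] * r[2],
                                        reverse=True):
    print(f"{name}: exposure = {probability * impact:.0f}")
```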
Risk Structure
[Diagram: risk structure: RISK splits into Probability of failure (driven by usage frequency) and Impact (cost of failure, i.e. the consequence)]
Risk Assessment
1. Test the major features or functionalities whose failure, if not properly tested, would be a major setback for the customer.
2. Test the major features or functionalities whose failure would result in losing credibility and market share, missing a market window, or incurring failure costs.
3. Record any features that are not tested and the reasons for not testing them.
Informal
Start with the classic quality risk categories: functionality, states and transactions, capacity and volume, data quality, error handling and recovery, performance, standards and localization, usability, etc. Set a priority for testing each quality risk with key stakeholders, or apply risk lists.
ISO 9126
Start with the six main quality characteristics: Functionality, Reliability, Usability, Efficiency, Maintainability, and Portability (FRUEMP). Then decompose them into the key sub-characteristics for your system, and set a priority for testing each sub-characteristic with key stakeholders.
FMEA
Start with categories, characteristics, or subsystems. Key stakeholders list possible failure modes and predict their effects on the system, users, society, etc.; they assign severity, priority, and likelihood, then calculate the risk priority number (RPN). Stakeholders use the RPN to guide the appropriate depth and breadth of testing.
Potential cause of failure: possible factors that may trigger the failure, such as the operating system, operator error, or normal use.
Detection methods: existing methods or procedures that can detect the problem, e.g. reviews or customer testing.
Severity
Scale 1: Loss of data, hardware damage, or a safety issue (IEEE 1044 class Urgent, IM110)
Scale 2: Loss of functionality without a workaround (IEEE 1044 class High, IM120)
Scale 3: Loss of functionality with a workaround
Scale 4: Partial loss of functionality
Scale 5: Cosmetic or trivial
Priority
Scale 1: Complete loss of system value (IEEE 1044 class Priceless, IM310)
Scale 2: Unacceptable loss of system value (IEEE 1044 class High, IM320)
Scale 3: Possible reduction in system value (IEEE 1044 class Medium, IM330)
Scale 4: Acceptable reduction in system value (IEEE 1044 class Low, IM340)
Scale 5: Negligible reduction in system value (IEEE 1044 class None, IM350)
Likelihood
Scale 1: Certain to affect all users (IEEE 1044 class Urgent, IM210)
Scale 2: Likely to impact some users (IEEE 1044 class High, IM220)
Scale 3: Possible impact on some users
Scale 4: Limited impact on a few users
Scale 5: Unimaginable in actual usage
FMEA Example
Feature: GIS. Failure mode: function limits. Effect: critical. Severity: 2. Potential causes: memory size, protocol implementation. Priority: 3. Detection methods: customer trials, FAT. Likelihood: 2. RPN: 12.
FMEA Example: load and volume testing
Test type: Load. Failure mode: scaling of the system to peak concurrent users. Effect: system fails at 20 concurrent users. Severity: 1. Likelihood: 3. RPN: 3.
Test type: Load. Failure mode: scaling of the system to peak concurrent users. Effect: system fails at 200 concurrent users. Severity: 1.
Test type: Volume. Failure mode: scaling of the system to handle large volumes of data. Effect: system fails to handle documents larger than 100 kB.
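As the GIS row above shows (2 * 3 * 2 = 12), the RPN is the product of the three ratings. A minimal sketch follows; note that on these 1-to-5 scales a lower RPN marks a riskier failure mode. The FailureMode class is illustrative, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (worst) .. 5 (trivial)
    priority: int    # 1 (worst) .. 5 (negligible)
    likelihood: int  # 1 (certain) .. 5 (unimaginable)

    @property
    def rpn(self) -> int:
        # Risk priority number = severity * priority * likelihood.
        # On these 1-5 scales a LOWER RPN means a riskier failure mode.
        return self.severity * self.priority * self.likelihood

# The GIS example from the table above: 2 * 3 * 2 = 12.
gis = FailureMode("GIS function limits", severity=2, priority=3, likelihood=2)
print(gis.rpn)  # 12
```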
The most important 15% of test cases find over 75% of the defects.
Developing a Work Breakdown Structure
The major stages of a testing subproject:
1. Planning
2. Staffing (if applicable)
3. Test environment acquisition and configuration
4. Test development
5. Test execution
Break each stage down into discrete tasks; tasks should be short in duration (e.g., a few days). Assign dependencies, analyze the critical paths (see the sketch under Understanding Dependencies below), and estimate the time taken for each task.
Formal
- Test Point Analysis
- Use Case Point Analysis
Use the expected-case estimate; the variance between the best and worst cases is useful for gauging the accuracy of the estimate.
Test Point Analysis
Uses ISO 9126 quality characteristics analysis. Test points are calculated based on:
- Size: function points, adjusted for complexity, interfaces, and uniformity
- Test strategy: which quality characteristics or risks are to be tested, and to what extent
- Productivity: the skills of the test team, influenced by the project, process, technology, and organization
Static test points: Qs = Sum(Qi), with 16 points per statically tested quality characteristic; the total over all statically tested quality characteristics ranges from 0 to 96.
Total test points: TP = Sum(TPf) + (FP * Qs) / 500, where FP is the function point count for the entire system (minimum 500).
Environment and productivity factor ratings:
- Testware: 1, 2, or 4
- Test environment: 1, 2, or 4
- Development environment: 3, 4, or 8
- Test basis: 3, 6, or 12
- Development testing: 2, 4, or 8
- Test tools: 1, 2, or 4
Management overhead: MO = PT * (Ts + Mt) / 100, where PT is the primary test hours, Ts is the test team size factor (3, 6, or 12), and Mt is the management tools factor (2, 4, or 8).
Breakdown of the total test hours: 10% preparation (plan, design), 40% specification (test development), 45% execution (run, report), 5% completion (final report).
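A minimal sketch of the two formulas above; all input values are hypothetical, and the factor ratings must come from the TPA rating tables.

```python
def total_test_points(dynamic_tp_sum: float, fp: float, qs: float) -> float:
    # TP = Sum(TPf) + (FP * Qs) / 500
    # dynamic_tp_sum: test points over all functions; fp: function point
    # count for the entire system (minimum 500); qs: static test points
    # (16 per statically tested quality characteristic, 0-96).
    return dynamic_tp_sum + (fp * qs) / 500

def management_overhead(primary_test_hours: float,
                        team_size_factor: int,            # Ts: 3, 6, or 12
                        mgmt_tools_factor: int) -> float:  # Mt: 2, 4, or 8
    # MO = PT * (Ts + Mt) / 100
    return primary_test_hours * (team_size_factor + mgmt_tools_factor) / 100

# Hypothetical project: 400 dynamic test points, 650 FP, two statically
# tested characteristics (Qs = 32), 800 primary test hours.
tp = total_test_points(400, 650, 32)
mo = management_overhead(800, team_size_factor=6, mgmt_tools_factor=4)
print(tp, mo)  # 441.6 80.0

# Phase breakdown of the total hours: 10% preparation, 40% specification,
# 45% execution, 5% completion.
total_hours = 800 + mo
for phase, share in [("preparation", 0.10), ("specification", 0.40),
                     ("execution", 0.45), ("completion", 0.05)]:
    print(phase, round(total_hours * share, 1))
```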
[Chart: effort split: Development Testing 67%, other 33%]
[Chart: test effort breakdown: Planning 10%, Specification 40%, Execution 45%, Completion 5%]

Test Plan
1. Test Plan Identifier
2. Introduction
3. Test items
4. Features to be tested
5. Features not to be tested
6. Approach
7. Item pass/fail criteria
8. Suspension criteria and resumption requirements
9. Test deliverables
10. Testing tasks
11. Environmental needs
12. Responsibilities
13. Staffing and training needs
14. Schedule
15. Risks and contingencies
16. Approvals
Introduction
Test Items
Identify the items to be tested, including their versions. Supply references to the test item documents, such as the requirements specification, design specification, user guide, operator's guide, and installation guide.
Features to be tested
Identify all the software features to be tested, and identify the test design specification associated with each feature.
Approach
Describe the overall approach to testing that will ensure adequate testing. The approach should be detailed enough to identify the major testing tasks and to estimate the time required. Consider comprehensiveness and constraints.
Suspension Criteria and Resumption Requirements
Specify the testing activities that must be repeated when testing is resumed.
Test Deliverables
Identify the deliverable documents, which include:
- test plan
- test design specifications
- test case specifications
- test procedure specifications
- test logs
- test incident reports
- test summary reports
Testing Tasks
Identify the set of tasks necessary to prepare for and perform testing. Identify all inter-task dependencies and any special skill requirements.
Environmental Needs
Specify the facilities required for testing in terms of hardware, software, and communication lines. Identify any special tools required.
Test Environments
Test environments are the facilities built up around the system under test to assist in the testing process. The test environment includes:
- the system or components under test
- test documentation, describing the tests to be performed and recording the results
- test harnesses and test stubs, used to replace parts of the system that are not yet available or cannot be used, and to control the system for the necessary tests
- test oracles, used to predict or check the outputs of test cases
- test data, including data files that are used as inputs in test cases
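To make stubs and oracles concrete, here is a minimal sketch using Python's unittest; the checkout/tax-service names are hypothetical.

```python
import unittest
from unittest.mock import Mock

def checkout(cart_total: float, tax_service) -> float:
    # System under test: adds tax obtained from an external service.
    return cart_total + tax_service.tax_for(cart_total)

class CheckoutTest(unittest.TestCase):
    def test_checkout_adds_tax(self):
        # Test stub: replaces the real tax service, which is not yet available.
        tax_service = Mock()
        tax_service.tax_for.return_value = 5.0
        # Test oracle: the expected output predicted from the inputs.
        self.assertEqual(checkout(100.0, tax_service), 105.0)

if __name__ == "__main__":
    unittest.main()
```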
Responsibilities
Identify the groups responsible for managing, designing, preparing, executing, witnessing, checking and resolving. The groups may include developers, testers, user representatives etc.
Staffing and Training Needs
Specify test staffing needs and the required skill sets. Identify training options for providing the necessary skills.
Schedule
Include test milestones. Estimate the time required for each testing task. Specify the schedule for each testing task and test milestone, and specify the periods during which each testing resource will be used.
Understanding Dependencies
Assign dependencies:
1. Identify tasks with no predecessors.
2. Identify tasks dependent only on previously identified tasks.
3. Repeat step 2 until all dependencies have been identified.
Analyze critical paths: a critical path is a set of dependent tasks where a delay in any task delays the project. Consider:
1. What affects phase entry and exit criteria?
2. During test execution, how many test passes, releases, and cycles are there?
3. Are there external dependencies?
A sketch of the critical-path computation follows.
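A minimal sketch of the steps above; the tasks, durations (in days), and dependencies are hypothetical.

```python
from functools import lru_cache

# Hypothetical testing-subproject WBS: task -> (duration_days, predecessors).
tasks = {
    "plan":        (3, []),
    "environment": (5, ["plan"]),
    "develop":     (8, ["plan"]),
    "execute":     (6, ["environment", "develop"]),
    "report":      (2, ["execute"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task: str) -> int:
    duration, predecessors = tasks[task]
    # Step 1: tasks with no predecessors can start immediately.
    # Step 2: a task starts when its slowest predecessor finishes.
    return duration + max((earliest_finish(p) for p in predecessors), default=0)

# The project length is driven by the critical path.
print(max(earliest_finish(t) for t in tasks))  # 19: plan -> develop -> execute -> report
```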
Approvals
Specify the names and titles of all persons who must approve this plan.
Testing techniques and tools include: data modeling, boundary value analysis, capture/playback, cause-effect graphing, change control trackers, checklists, checkpoint review, code comparison, compiler-based analysis, complexity-based analysis, compliance checkers, control flow analysis, correctness proofs, coverage-based analysis, data dictionary, decision tables, defect trackers, desk checking, equivalence partitioning, and error guessing.
Types of Testing
- Static and Dynamic Testing
- Functional (Black-Box) Testing
- Structural (White-Box) Testing
- Module (Unit) Testing
- Integration Testing
- Interface Testing
- System Testing
- Acceptance Testing
- Performance, Load, and Stress Testing
- Usability Testing
- Reliability Testing
- Security Testing
- Recovery Testing
- Installation Testing
- Configuration Testing
- Compatibility/Conversion Testing
- Serviceability Testing
- Alpha and Beta Testing
- Regression Testing
- Documentation Testing
- Mutation Testing
N = [Z^2 * P * (1 - P)] / E^2
where Z is the standard-normal value for the chosen confidence level (roughly 68%, 95%, and 99.7%, i.e. 1, 2, and 3 standard deviations), P is the expected failure proportion (% fail), and E is the allowable margin of error.
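A worked sketch of the formula; z = 1.96 is the usual value for 95% confidence, and the other inputs are hypothetical.

```python
import math

def sample_size(z: float, p: float, e: float) -> int:
    # N = Z^2 * P * (1 - P) / E^2, rounded up to a whole test case.
    return math.ceil(z * z * p * (1 - p) / (e * e))

# 95% confidence (z = 1.96), 10% expected failures, 3% margin of error.
print(sample_size(1.96, 0.10, 0.03))  # 385
```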
Task 1 - Record Defects
Task 2 - Perform Data Reduction
Task 3 - Develop Findings and Recommendations
Task 4 - Finalize Test Report
Task 5 - Test Report Quality Control

Outputs: Test Defect Report, Test Defect List, Test Defect Log.
Test Tracking
S curve: planned, attempted, actual.
- The x axis represents time; the y axis represents the number of test cases or test points.
- The data is cumulative, with a steep planned test ramp-up.
- Planned progress: the number of test cases to be completed successfully, week by week.
- The number of test cases attempted, by week.
- The number of test cases completed successfully, by week.
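A minimal sketch that turns weekly counts into the cumulative series plotted on the S curve; the weekly numbers are hypothetical.

```python
from itertools import accumulate

# Hypothetical weekly test-case counts over a six-week execution phase.
planned   = [10, 25, 40, 40, 25, 10]
attempted = [ 8, 20, 35, 42, 28, 12]
completed = [ 6, 16, 30, 38, 26, 14]  # completed successfully

# The S curve plots the cumulative totals week by week.
for week, (p, a, c) in enumerate(zip(accumulate(planned),
                                     accumulate(attempted),
                                     accumulate(completed)), start=1):
    print(f"week {week}: planned={p} attempted={a} completed={c}")
```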
[Chart: S curve of cumulative test cases/failures over days; y axis 0 to 140]
Defect States
Defects can be in the following states:
- Review: when a tester enters a defect for the first time, it is kept in the database for review.
- Reject: if the reviewer rejects the defect report for want of detail, it is sent back to the tester.
- Open: once the tester has fully characterized the defect, the reviewer opens the report, making it visible to the world.
- Assigned: the appropriate project team member is assigned the defect for correction.
Defect Tracking
- Test: once development provides a fix, the defect enters the Test state.
- Reopened: if the fix fails the confirmation test, the tester reopens the defect report; if it fails the regression test, the tester opens a new defect report.
- Closed: if the fix passes the confirmation test, the tester closes the defect report.
- Deferred: if the assigned team member decides that the problem is real but chooses to defer the fix, the defect is in the Deferred state.
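A minimal sketch of the life cycle above as a transition table; the exact set of allowed transitions (e.g. Reopened going back to Assigned) is an assumption, not a prescription.

```python
# Allowed defect-report transitions, following the states described above.
TRANSITIONS = {
    "Review":   {"Open", "Reject"},      # reviewer opens or rejects the report
    "Reject":   {"Review"},              # tester adds detail and resubmits
    "Open":     {"Assigned"},
    "Assigned": {"Test", "Deferred"},    # fix provided now, or fix deferred
    "Test":     {"Closed", "Reopened"},  # confirmation test passes or fails
    "Reopened": {"Assigned"},            # assumed: reopened defects are reassigned
    "Closed":   set(),
    "Deferred": set(),
}

def move(state: str, new_state: str) -> str:
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = move("Review", "Open")
state = move(state, "Assigned")
print(state)  # Assigned
```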
[Diagram: defect life cycle: Review → Open → Assigned → Test → Closed]
Product Metrics: Reliability
MTBF = T1 / A or T2 / A, where A is the number of actually detected failure occurrences, T1 is the operation time, and T2 is the sum of the intervals between successive failure occurrences.
Test Coverage = A / B, where A is the number of actually performed test cases representing the operation scenario and B is the number of test cases to be performed.
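A minimal sketch of both metrics; the run data is hypothetical.

```python
def mtbf(operation_time: float, failures: int) -> float:
    # MTBF = T1 / A (equivalently T2 / A, with T2 the summed
    # intervals between successive failures).
    return operation_time / failures

def test_coverage(performed: int, planned: int) -> float:
    # Test Coverage = A / B.
    return performed / planned

# Hypothetical: 500 hours of operation with 4 detected failures;
# 180 of 200 planned operation-scenario test cases performed.
print(mtbf(500, 4))             # 125.0 hours
print(test_coverage(180, 200))  # 0.9
```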
Product Metrics: Time Behavior
Response Time = T (the time the result is obtained minus the time the command entry finished); also track the average response time and the worst-case response time.
Throughput = A / T, where A is the number of completed tasks and T is the observation time period; also track the average throughput.
Product Metrics: Coverage
Statement coverage, decision coverage, condition coverage, and multiple condition coverage. Such coverage metrics are specified for critical applications, both in terms of which metric is to be calculated and the level of compliance required.
Process Metric
Defect Removal Effectiveness = defects removed in a development phase / defects latent in the product.
Example: 20 defects are detected in system test and 5 defects are found after 6 months in operation, so DRE = 20 / 25 * 100 = 80%.
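A minimal sketch of the calculation, using the numbers from the example above.

```python
def dre(found_in_phase: int, found_later: int) -> float:
    # DRE = defects removed in the phase / defects latent in the product,
    # where latent = found in the phase + found later (escaped).
    return found_in_phase / (found_in_phase + found_later) * 100

# 20 defects found in system test, 5 found in six months of operation.
print(dre(20, 5))  # 80.0
```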
Progress Metrics
Effort Variance % = (actual effort - baselined estimate) / baselined estimate * 100.
Anomaly find rate: a plot of defects found (say, weekly) from the beginning of testing to release; generally a bell-shaped curve.
Anomaly fix rate: anomalies should be fixed as soon as they arrive; the fix rate can be plotted by week.
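A minimal sketch of the effort-variance formula; the hours are hypothetical.

```python
def effort_variance_pct(actual: float, baseline: float) -> float:
    # Effort Variance % = (actual effort - baselined estimate)
    #                     / baselined estimate * 100
    return (actual - baseline) / baseline * 100

# Hypothetical: 460 person-hours spent against a 400-hour baseline.
print(effort_variance_pct(460, 400))  # 15.0
```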
Model 1
- It is difficult to preserve the notion of independence.
- It is difficult to get test resources.
- The development manager has overall responsibility and is forced to choose between developers and testers.
Model 2
Structure: the test manager and the development manager both report to the project manager.
- The test manager is still not independent.
- The project manager is less involved in the creation of tests, but his interests still lie with development.
- The development and test organizations have different budget heads, which eases the resource problem to an extent.
- The test team is still expected to participate in all panics and rushed schedules.
Model 3
Structure: the test manager and the project managers report independently to executive management.
- The test team is truly independent.
- Management heeds test status reports with an open mind.
- Budget and staffing problems are minimized.
- Release pressures remain, but they are controllable.
People have different learning styles, as per the FA model:
Sensory/Intuitive: sensory learners derive information from the external senses, while intuitive learners derive it from intuition.
- Sensory testers pay attention to detail and prefer problems with well-defined, standard solutions.
- Sensory testers are generally good at regression testing.
- Intuitive testers are good at abstraction and are often innovative and imaginative.
- Intuitive learners are bored by repetition; they are preferred for exploratory testing or for trying new techniques.
Visual/Verbal
Visual testers have a strong preference for the visual and base their tests on UML diagrams, sequence diagrams, and other visual forms. The application generally runs like a movie in their heads, so they are good at scenario-based tests and test automation. Verbal testers prefer textual representations, from which they derive their tests.
Inductive/Deductive
Inductive testers gather specifics such as techniques, defect history, and change history, and generalize them to the application under test. Deductive testers keep a collection of traditional techniques and determine how to apply them to the specific domain. Deductive testers gain familiarity with classification methods and then classify defects accordingly, while inductive testers find new ways of categorizing them.
Active/Reflective
Active testers generally do hands-on testing and rapidly perform many test cases; they prefer to work in a group. Reflective testers are apt to run fewer tests, since a thought process precedes each test; they make up for the lack of speed by devising good tests that others are not likely to find, and they generally like to work alone.
Sequential/Global
A sequential tester starts off at a faster pace and works step by step through the plan; he can explain his tests with clarity. A global tester starts off slowly and may require some hand-holding in understanding the application; however, after the initial phase he can quickly create detailed, complex tests.
Summary
[Diagram: levels of test maturity]
Level 1 - Validation: unit test, integration test, system test, acceptance test
Level 2 - Verification: walkthroughs, inspections, code analyzers
Level 3 - Defect Management: defect reporting, defect analysis, defect database, dashboards
What do we achieve
- Repeated success in testing activity over subsequent projects, based on the metrics collected and lessons learnt.
- Early detection of errors, at lower cost.
- Focused testing effort (as a result of risk assessment and prioritization).
- More comprehensive testing through the use of automation.
- A satisfied customer.