
Different Software Quality Metrics Used by Expert Test Managers

Article by: Kushal Kar & Swastika Nandi, Guest Publishers

Software metric is a generic name for a measure of the quality of a software product. A software metric can reflect the status of the software development cycle, its outcomes, and so on. A good project manager is one who applies the principles of metrics to plan, organize, and control project deliverables in quantifiable, measurable terms. Some of the software metrics used extensively by ISTQB-certified expert testing managers are described below.

1. Test Coverage
Measure: Number of units (KLOC/FP) tested / total size of the system

2. Quality of Testing
Measure: No. of defects found during testing / (No. of defects found during testing + No. of acceptance defects found after delivery) * 100

3. Effort Variance
Measure: {(Actual Effort - Estimated Effort) / Estimated Effort} * 100

4. Schedule Variance
Measure: {(Actual Duration - Estimated Duration) / Estimated Duration} * 100

5. Test Effectiveness
Measure: t / (t + UAT)
Explanation: Here "t" is the total number of defects reported during testing, and UAT is the total number of defects reported during user acceptance testing.

6. Defect Density
Measure: No. of defects / size (FP or KLOC)
Explanation: Here FP = function points.

7. Weighted Defect Density
Measure: (5 * count of fatal defects) + (3 * count of major defects) + (1 * count of minor defects)
Explanation: The weights 5, 3, and 1 correspond to the severity of the defect.

8. Schedule Slippage
Measure: (Actual End Date - Estimated End Date) / (Planned End Date - Planned Start Date) * 100

9. Rework Effort Ratio
Measure: (Actual rework effort spent in a particular phase / Total actual effort spent in that phase) * 100

10. Requirement Stability Index
Measure: {1 - (Total number of changes / Number of initial requirements)}

11. Requirement Creep
Measure: (Total number of requirements added / Number of initial requirements) * 100

12. Correctness
Measure: Defects / KLOC or Defects / Function Points
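To make the arithmetic concrete, here is a minimal Python sketch of a few of the project-level ratios above (effort variance, defect density, weighted defect density); the function names and input figures are my own, chosen purely for illustration:

```python
# Minimal sketch of a few project-level metrics (hypothetical inputs).

def effort_variance(actual: float, estimated: float) -> float:
    """Effort Variance (metric 3): percentage deviation from the estimate."""
    return (actual - estimated) / estimated * 100

def defect_density(defects: int, size_kloc: float) -> float:
    """Defect Density (metric 6): defects per KLOC (or per function point)."""
    return defects / size_kloc

def weighted_defect_density(fatal: int, major: int, minor: int) -> int:
    """Weighted Defect Density (metric 7): severity weights 5 / 3 / 1."""
    return 5 * fatal + 3 * major + 1 * minor

# Example: 120 person-days actual vs. 100 estimated; 46 defects in 12.5 KLOC.
print(effort_variance(120, 100))           # 20.0 (% over estimate)
print(defect_density(46, 12.5))            # 3.68 defects per KLOC
print(weighted_defect_density(2, 10, 34))  # 74
```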

13. Maintainability
Measure: MTTC (Mean Time To Change): once an error is found, how much time it takes to fix it in production.

14. Integrity
Measure: Integrity = Summation [(1 - threat) * (1 - security)]
Explanation: Here "threat" is the probability that an attack of a specific type will occur within a given time, and "security" is the probability that an attack of that type will be repelled.

15. Usability
Measure: User questionnaire survey results give an indication of usability.
Comment: This reflects how easy it is for users to use the system and how fast they are able to learn to operate it.
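A short sketch of the integrity summation in metric 14, implemented exactly as the formula is stated, with each (threat, security) pair describing one attack type; the probabilities below are invented for illustration:

```python
# Integrity = sum over attack types of (1 - threat) * (1 - security),
# per metric 14. "threat" is the probability an attack of that type
# occurs; "security" is the probability such an attack is repelled.
# The (threat, security) pairs below are illustrative only.

attack_types = [
    (0.25, 0.95),  # e.g. injection attempts: often tried, usually repelled
    (0.10, 0.99),  # e.g. privilege escalation attempts
]

integrity = sum((1 - threat) * (1 - security)
                for threat, security in attack_types)
print(round(integrity, 4))  # 0.0465
```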

16. CSAT (Customer Satisfaction Index)
Measure: Call volume to the customer service hotline.

17. Reliability
Measure:
# Availability: percentage of time a system is available versus the time the system is needed to be available.
# Mean Time Between Failures (MTBF): total operating time divided by the number of failures. MTBF is the inverse of the failure rate.
# Mean Time To Repair (MTTR): total elapsed time from initial failure to the re-initiation of system status. Mean Time To Restore includes Mean Time To Repair, and Availability = MTBF / (MTBF + MTTR).
# Reliability ratio = MTBF / MTTR
Explanation: Reliability is the probability that an item will perform a required function under stated conditions for a stated period of time. The probability of survival, R(t), plus the probability of failure, F(t), is always unity. Expressed as a formula: F(t) + R(t) = 1, or F(t) = 1 - R(t).
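A small sketch, with invented operating figures, of the reliability quantities in metric 17 (MTBF, MTTR, availability, and the reliability ratio):

```python
# Reliability quantities from metric 17, with illustrative numbers.

total_operating_hours = 1_000.0
failures = 4
total_repair_hours = 8.0

mtbf = total_operating_hours / failures  # Mean Time Between Failures
mttr = total_repair_hours / failures     # Mean Time To Repair
failure_rate = 1 / mtbf                  # MTBF is the inverse of failure rate
availability = mtbf / (mtbf + mttr)      # fraction of time the system is up
reliability_ratio = mtbf / mttr

print(f"MTBF={mtbf} h, MTTR={mttr} h, "
      f"availability={availability:.4f}, ratio={reliability_ratio}")
```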

18. Defect Ratios
Measure:
# Defects found after product delivery per function point
# Defects found after product delivery per LOC
# Ratio of pre-delivery defects to annual post-delivery defects
# Defects per function point of the system modifications

19. Number of Tests per Unit Size
Measure: Number of test cases per KLOC/FP

20. Acceptance Criteria Tested
Measure: Acceptance criteria tested / total acceptance criteria

21. Defects per Size
Measure: Defects detected / system size

22. Testing Cost
Measure: (Cost of testing / total cost) * 100

23. Cost to Locate Defect
Measure: Cost of testing / number of defects located

24. Achieving Budget
Measure: Actual cost of testing / budgeted cost of testing

25. Defects Detected in Testing
Measure: Defects detected in testing / total system defects

26. Defects Detected in Production
Measure: Defects detected in production / system size
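The cost and defect-detection metrics above (22 through 26) are simple ratios; a brief sketch with hypothetical figures:

```python
# Cost- and detection-oriented ratios from metrics 22-26
# (all figures hypothetical).

defects_in_testing = 45
defects_in_production = 5
total_system_defects = defects_in_testing + defects_in_production

cost_of_testing = 30_000.0
total_project_cost = 200_000.0
budgeted_testing_cost = 28_000.0

testing_cost_pct = cost_of_testing / total_project_cost * 100    # metric 22
cost_to_locate = cost_of_testing / defects_in_testing            # metric 23
achieving_budget = cost_of_testing / budgeted_testing_cost       # metric 24
detected_in_testing = defects_in_testing / total_system_defects  # metric 25

print(testing_cost_pct)     # 15.0 (% of total cost spent on testing)
print(cost_to_locate)       # ~666.67 per defect
print(achieving_budget)     # ~1.07 (slightly over budget)
print(detected_in_testing)  # 0.9 (90% of defects caught before release)
```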

27. Quality of Testing
Measure: No. of defects found during testing / (No. of defects found during testing + No. of acceptance defects found after delivery) * 100

28. Effectiveness of Testing to Business
Measure: Loss due to problems / total resources processed by the system

29. System Complaints
Measure: Number of third-party complaints / number of transactions processed

30. Scale of Ten
Measure: Assessment of testing by giving a rating on a scale of 1 to 10

31. Source Code Analysis
Measure: Number of source code statements changed / total number of tests

32. Test Planning Productivity
Measure: No. of test cases designed / actual effort for design and documentation

33. Test Execution Productivity
Measure: No. of test cycles executed / actual effort for testing
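Finally, the two productivity metrics (32 and 33) relate testing outputs to effort spent; a minimal sketch with made-up effort figures:

```python
# Test productivity ratios from metrics 32 and 33 (made-up figures).

test_cases_designed = 240
design_effort_hours = 80.0    # effort for design and documentation
test_cycles_executed = 6
execution_effort_hours = 120.0

planning_productivity = test_cases_designed / design_effort_hours
execution_productivity = test_cycles_executed / execution_effort_hours

print(f"planning={planning_productivity} cases/h, "
      f"execution={execution_productivity:.3f} cycles/h")
```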

About the Authors: Kushal Kar & Swastika Nandi, QA Analysts, are the guest publishers of this article and are solely responsible for its contents.
