
Metrics Definitions

Metrics
A metric expresses a relationship between two variables. It is a
quantitative measure of the degree to which a system, component or process
possesses a given attribute.

Process Metric
A metric used to measure characteristics of the
methods, techniques and tools.

Product Metric
A metric used to measure characteristics of documentation and code.

Software metrics provide a quantitative basis for the development and
validation of models of the software development process. Metrics can be
used to improve software productivity and quality, and help organizations
to better manage and control their software application environment by
monitoring its performance.

Each metric below is presented with: Metric Name, Explanation, Formula,
Calculation Example, Interpretation and Causal Analysis Tips, and Example
Scenarios.
1. Schedule Variance (%)

Explanation:
Actual calendar time is compared with the planned calendar time and the
schedule variance is calculated.
To Note:
a. Planned calendar days is as per the plan; actual calendar days should be
the actuals spent.
b. This is done for both completed and ongoing states. When ongoing, an
approximation is made using estimated days to completion and % completed.

Formula:
Schedule Variance = ((Actual Calendar Days - Planned Calendar Days) + Start
Variance) / Planned Calendar Days * 100

Calculation Example:
1. Original planned calendar days for the project: 200
2. Actual calendar days for the project: 225
3. Start variance is the difference between the planned and actual start.
4. Start variance: 5 calendar days
5. Schedule variance = ((225 - 200) + 5) / 200 * 100 = 15.00%
The above measure is for the project as a whole.

Interpretation and Causal Analysis Tips:
Schedule variance should be analyzed from the following perspectives:
a) Compare it with effort variance. If schedule variance is negligible and
effort variance is high, it boils down to incorrect estimation or low
productivity. If effort variance is negligible and schedule variance is
high, it boils down to a planning defect in parallel tasks or in
schedule-to-effort conversions.
b) If both are increasing linearly, it indicates incorrect estimation, with
requirements creeping in and size varying.
c) Both these variances should also be studied together with defect density
to look at the balance between time and quality.

Based on the analysis, corrective actions to be triggered include:
a) Revisit and baseline the estimation/plan now, which will be the basis
for further schedule variances.
b) Improve productivity by training or skill augmentation.
c) Reduce rework through efficient practices and standards.

Note: Continual revisiting of plans indicates a deficiency in the planning
component.

Example Scenarios:
a) Schedule variance high; effort variance high; RSI low; size variance
high.
Reason could be changing requirements and scope creep.
Corrective action could be to fragment and freeze requirements,
requirements reviews, incremental models.
Preventive action could be, in projects where this is foreseen, to suggest
an incremental model and chunked deliverables.

b) Schedule variance high; effort variance high; RSI high; size variance
high.
Reason could be incorrect initial estimation, as size variance has occurred
even without requirement changes.
Corrective action could be to do an estimation now and take that as the
baseline henceforth.
Preventive action could be stringent reviews of the initial estimation,
two-phased estimation models, etc.

c) Schedule variance high; effort variance not correspondingly high.
Reason could be time slack in between due to dependencies at the customer
end or other reviews.
Corrective action could be to have the dependency time shortened and push
the tasks to completion.
Preventive action could be to highlight the dependency initially to the
appropriate stakeholders and owners and speed it up as a critical path.

d) Schedule variance low; effort variance high.
Reason could be that extra effort is being put in to meet the deadline,
incorrect estimation, or low productivity.
Analysis should also look at whether cost variance is high.
If productivity is the reason, skill upgrade is the corrective/preventive
action.
If extra effort is being put in to meet the deadline, effective estimation
could be the corrective/preventive action.

e) For ongoing projects, schedule variance and effort variance could
sometimes be negative.
Reason could be that a few activities were planned but not yet started.
Alternatively, it could also be incorrect estimation.
Based on the reason, CA/PA is done.

f) Schedule variance high; effort variance high; cost of poor quality high;
defect density high.
Reason could be that the extent of defects in the project is high, and
correspondingly rework would be high.
Defect density high with little rework could also happen; in that case, it
indicates that the testing capability is good and so is the fixation
capability. If rework is high, the code has not been sturdy and defects are
high.
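A minimal Python sketch of the schedule variance formula above, reproducing
the worked example (the function and argument names are illustrative, not
from the source):

    def schedule_variance(actual_days: float, planned_days: float,
                          start_variance: float = 0.0) -> float:
        """Schedule variance (%) per the formula above:
        ((actual - planned) + start variance) / planned * 100."""
        return ((actual_days - planned_days) + start_variance) / planned_days * 100

    # Worked example: 200 planned days, 225 actual, start variance 5
    print(schedule_variance(225, 200, 5))  # -> 15.0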

2. Effort Variance (%)

Explanation:
Actual effort is compared against the planned effort and the effort
variance is calculated.
To Note:
a. Planned effort is the effort estimated upfront.
b. Actual effort is the effort as per the actuals.
c. This is done for both completed and ongoing states. When ongoing, an
approximation is made using estimated days to completion and % completed.

Formula:
Effort Variance = (Actual Effort - Planned Effort) / Planned Effort * 100

Calculation Example:
1. Planned effort for the project: 1000 person days
2. Actual effort for the project: 1125 person days
3. Effort variance = (1125 - 1000) / 1000 * 100 = 12.50%

Interpretation and Causal Analysis Tips:
Analysis could point to:
a) Error in initial estimation - could be an error in estimating size or
productivity, leading to a wrong effort estimate.
b) Productivity of the team could be low due to new technology, lack of
training, rework or insufficient domain knowledge.

Based on the analysis, corrective actions to be triggered include:
a) Revisit and baseline the estimation/plan now, which will be the basis
for further effort variances.
b) Improve productivity by training or skill augmentation.
c) Reduce rework through efficient practices and standards.

Note: Continual revisiting of plans indicates a deficiency in the planning
component.
(A calculation sketch covering both effort and size variance follows
metric 3.)

3. Size Variance (%)

Explanation:
Actual size is compared with estimated size and the size variance is
calculated. Project size is the total size of the project measured in terms
of LOC, Function Points or Feature Points.
To Note:
a. The business model governs when the size measure is taken - during the
proposal or after requirements.
b. Use the same measure for planned and actuals.

Formula:
Size Variance = (Actual Size - Estimated Size) / Estimated Size * 100

Calculation Example:
Estimated size: 800 FP
Actual size: 1000 FP
Size variance = (1000 - 800) / 800 * 100 = 25.00%

Interpretation and Causal Analysis Tips:
Analysis could point to:
a) Error in the initial size estimation due to insufficient requirements
given by the customer (or) insufficient requirements skill at our end.
b) Could also boil down to changes in requirements and scope from the
customer.

Based on the analysis, corrective action would be, in order:
a) Revisiting the requirements, baselining a new size estimate and tracking
against it (variance a and b concepts).
b) Improving the requirements process, and relating to process models like
incremental or iterative when creep is foreseen.
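Effort variance (metric 2) and size variance (metric 3) share the same
(actual - planned) / planned form, so one helper covers both. A sketch with
the worked numbers from both examples (names are illustrative):

    def variance_pct(actual: float, planned: float) -> float:
        """Generic variance (%): (actual - planned) / planned * 100."""
        return (actual - planned) / planned * 100

    print(variance_pct(1125, 1000))  # effort variance -> 12.5 %
    print(variance_pct(1000, 800))   # size variance   -> 25.0 %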

4. Requirement Stability Index (%)

Explanation:
Requirement changes occur as the project progresses. Requirement changes
are logged and traced to design, code and test cases. For every change in
requirements, impact analysis is done for schedule and effort. The
requirement stability index is calculated to indicate the stability of the
requirements.
To Note:
a. Requirement changes always have to be updated.
b. All defects raised in reviews or testing with type 'RC' have to be
incorporated separately in the Change Track sheet and closed to completion.
c. Changes need to be grouped and counted. Similar changes need to be
grouped, and this decision is made by the PM/PL and the person responsible
for requirements change management in the project.
d. The higher the stability index value, the higher the stability of the
requirements. As it nears zero (or) negative values, it shows the
requirements are unstable.

Formula:
RSI = (1 - ((No. of Changed + No. of Deleted + No. of Added) / Total No. of
Initial Requirements)) * 100

Calculation Example:
No. of changed: 6
No. of deleted: 2
No. of added: 4
Total no. of initial requirements: 120
RSI = (1 - (6 + 2 + 4) / 120) * 100 = 90.00%

Interpretation and Causal Analysis Tips:
Reason could be any of the following:
a) Insufficient requirements
b) Insufficient requirements skills
c) New product and new premise, and hence changes

Corrective and preventive action would be to:
a) Improve requirements skills; have good requirements reviews by domain
experts and customers.
b) Use incremental models and planned change of requirements.
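A minimal sketch of the RSI calculation, reproducing the worked example
(names are illustrative):

    def rsi(changed: int, deleted: int, added: int, initial_total: int) -> float:
        """Requirement Stability Index (%):
        (1 - (changed + deleted + added) / initial requirements) * 100."""
        return (1 - (changed + deleted + added) / initial_total) * 100

    print(rsi(6, 2, 4, 120))  # -> 90.0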

5. Productivity in Project (FP or Feature Points / Day)

Explanation:
Productivity of the project is the rate at which the project is made. The
total size of the project is factored against the total effort and this is
calculated. This is generally done at the end of the project. Midway
through the project, an estimate can be made of the size completed and
factored against the effort put in.
To Note:
Productivity is calculated as per the size measure. Comparisons between
projects should be made in similar measures.

Formula:
Productivity in Project = Actual Project Size / Actual Effort Spent for the
Project

Calculation Example:
Actual project size: 1000 FP
Actual effort spent: 1125 person days
Productivity for the project = 1000 / 1125 = 0.89 FP/Day

Interpretation and Causal Analysis Tips:
Productivity variance could be due to any of the following:
a) Skill levels in technology and domain
b) Rework due to lack of standards

Corrective action would be to:
a) Deploy standards and practices
b) Have good validation mechanisms
c) Training

Example Scenarios:
1) Effort variance high; productivity low.
Reason could be low skills, a complex domain, new technology.
Corrective action could be training, skill upgrade, domain/technical
transfer.
Preventive action could be to factor in training/transfer during project
planning.

2) Effort spent in requirements/design low; productivity low; rework high.
Reason is that no due importance was given to the mandatory tasks and
phases.
Corrective and preventive action could be to put more effort into the
initial tasks, preparation and review, to pave the way for the other tasks.

3) Effort spent in requirements and design high; productivity high; review
efficiency high; rework low.
Reason is that effort was spent on requirements and design preparation and
reviews, and hence development could proceed quickly.

4) Productivity low; RSI low; size variance high; rework high.
Requirement changes would have caused the rework and hence a drop in
productivity.


6. Productivity in Coding

Explanation:
Coding productivity is measured by the size of the code (LOC) and the
effort spent on coding.
To Note:
a. Group it per development environment if applicable.
(A combined calculation sketch for metrics 6-10 appears after metric 10.)

Formula:
Productivity in Coding = Actual LOC or FP / Actual Effort Spent for Coding

Calculation Example:
Actual size: 100 KLOC
Effort spent for coding: 400 person days
Productivity in coding = 100,000 / 400 = 250 LOC/Day

Interpretation and Causal Analysis Tips:
Productivity variance could be due to any of the following:
a) Skill levels in technology
b) Complex domain
c) Rework due to lack of standards

Corrective action would be to:
a) Deploy standards and practices
b) Training

7. Productivity in Test Case Preparation

Explanation:
Test case preparation productivity is measured by the size of the test
cases (number of test cases) and the effort spent on preparing the test
cases.
To Note:
a. If there are test cases, test data or test programs, have different
measures for them.

Formula:
Productivity in Test Case Preparation = Actual No. of Test Cases / Actual
Effort Spent on Test Case Preparation

Calculation Example:
Actual no. of test cases prepared: 3000
Actual effort spent: 50 person days
Productivity in test case preparation = 3000 / 50 = 60 TC/Day

Interpretation and Causal Analysis Tips:
Productivity variance could be due to any of the following:
a) Understanding of the requirements
b) Rework due to lack of standards

Corrective action would be to:
a) Deploy standards and practices
b) Training on requirements

Example Scenarios:
1) Productivity in test case preparation low.
Analysis could point to the nature of the project, where test case counts
differ. Alternatively, the skill level of the tester could be low.
CA/PA in the case of skill levels would be skill upgrade and training in
testing techniques.
CA/PA in the case of differing test case counts would be to arrive at
different units of measure for test cases and set a goal accordingly.

8. Productivity in Test Case Execution

Explanation:
Test case execution productivity is measured by the number of test cases
executed and the effort spent on executing the test cases.
To Note:
a. If there are test cases, test data or test programs, have different
measures for them.

Formula:
Productivity in Test Case Execution = Actual No. of Test Cases (Planned +
Adhoc) / Actual Effort Spent on Testing

Calculation Example:
Actual no. of test cases: 3000
Adhoc test cases: 150
Actual effort spent on testing: 30 person days
Productivity in test case execution = (3000 + 150) / 30 = 105 TC/Day

Interpretation and Causal Analysis Tips:
Productivity variance could be due to any of the following:
a) Understanding of the test cases
b) Skill level
c) Test environment not provided as per the requirements
d) Insufficient test data

Corrective action would be to:
a) Keep the test case design simple
b) Training
c) Set up the test environment as per the requirements
d) Provide proper test data

Example Scenarios:
1) Productivity in test case execution low.
Analysis could point to the skill levels of the tester (or) the
nature/complexity of the project. Appropriate CA/PA would be skill upgrade
(or) arriving at an appropriate unit of measure and goal for the project.

9. Productivity in Defect Detection

Explanation:
Defects occur in projects and are found through reviews and testing.
Productivity in defect detection is measured by the total number of defects
detected as a result of reviews and testing, and the effort spent on
reviews and testing.

Formula:
Productivity in Defect Detection = Actual No. of Defects (Review + Testing)
/ Actual Effort Spent on (Review + Testing)

Calculation Example:
Actual no. of review defects: 650
Actual no. of testing defects: 450
Actual effort spent on reviews: 25 person days
Actual effort spent on testing: 30 person days
Productivity in defect detection = (650 + 450) / (25 + 30) = 20 defects/PD

Interpretation and Causal Analysis Tips:
Defect detection at earlier stages, in the form of various reviews and
testing, reduces the rework level and improves productivity.

Example Scenarios:
1) Productivity in defect detection low.
Reason could be good product quality (or) low detection efficiency in
reviews/testing. CA/PA in the case of a detection deficiency would be to
train people in review and testing techniques.

2) High productivity does not always imply detection efficiency alone; it
also reflects the nature of the product quality. Appropriate CA/PA should
be done.

10. Productivity in Defect Fixation

Explanation:
Defects occur in projects and are found through reviews and testing.
Productivity in defect fixation is measured by the total number of defects
fixed as a result of reviews and testing, and the effort spent on fixing
the defects found during reviews and testing.

Formula:
Productivity in Defect Fixation = Actual No. of Defects Fixed / Actual
Effort Spent on Defect Fixation

Calculation Example:
Actual no. of defects fixed: 1100
Actual effort spent on defect fixation: 50 person days
Productivity in defect fixation = 1100 / 50 = 22 defects/PD

Interpretation and Causal Analysis Tips:
Productivity in defect fixation depends on the defect density and the
severity of the defects.

Example Scenarios:
1) Productivity in defect fixation low.
Reason could be the quality of the product, coding standards and the
fixation capability of the developers. CA/PA should be to improve the
fixation capability, standards adherence and code control.

11. Schedule Variance Across Phases (%)

Explanation:
Actual calendar time is compared with the planned calendar time, and the
schedule variance is calculated for each phase.
To Note:
a. Planned calendar days is as per the plan; actual calendar days should be
the actuals spent for the phase.
b. This is done for both completed and ongoing states. When ongoing, an
approximation is made using estimated days to completion and % completed.

Formula:
Schedule Variance for a Phase = (Actual Calendar Days for the Phase -
Planned Calendar Days for the Phase + Start Variance for the Phase) /
Planned Calendar Days for the Phase * 100

Calculation Example:
Schedule variance for the various phases:
Presales: 10.00%
Requirements: 20.00%
Design: 8.00%
Development: 21.43%
Testing: 5.00%
Acceptance: 0.00%
Customer implementation: 0.00%

Interpretation and Causal Analysis Tips:
Schedule variance for each phase is analyzed, and proper corrective action
is taken to maintain the schedule for the project.

Example Scenarios:
1) Schedule variance low or negative for one phase and high for another,
but the overall variance is not alarming.
Reason could be that the schedule was not planned evenly across phases but
evened out overall. This would still cause problems in phase-wise
deliverables. CA/PA should be to use the distribution across phases during
planning and plan effectively.

2) When all the variances are high, the analysis described for the overall
schedule variance applies here too.

3) Effort variance across phases is analyzed similarly to schedule variance
across phases.

12. Effort Variance Across Phases (%)

Explanation:
Actual effort for a phase is compared against the planned effort for the
phase, and the effort variance is calculated for that phase.
To Note:
a. Planned effort is the effort estimated upfront for the phase.
b. Actual effort is the effort as per the actuals for the phase.
c. This is done for both completed and ongoing states. When ongoing, an
approximation is made using estimated days to completion and % completed.

Formula:
Effort Variance for a Phase = (Actual Effort for the Phase - Planned Effort
for the Phase) / Planned Effort for the Phase * 100

Calculation Example:
Effort variance for the various phases:
Presales: 7.14%
Requirements: 17.65%
Design: 15.38%
Development: 7.14%
Testing: 25.00%
Acceptance: 11.11%
Customer implementation: 11.11%

Interpretation and Causal Analysis Tips:
Effort variance for each phase is analyzed, and proper corrective action is
taken.
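A sketch of per-phase variance for metrics 11 and 12. The text lists only
the resulting percentages, so the per-phase inputs below are hypothetical,
chosen to reproduce two of the listed figures (all names illustrative):

    def phase_variance(actual: dict[str, float], planned: dict[str, float],
                       start_var: dict[str, float] | None = None) -> dict[str, float]:
        """Variance (%) per phase; start_var applies only to schedule
        variance (metric 11) and defaults to zero."""
        sv = start_var or {}
        return {phase: (actual[phase] - planned[phase] + sv.get(phase, 0.0))
                       / planned[phase] * 100
                for phase in planned}

    # Hypothetical per-phase efforts matching Design 15.38% and Testing 25.00%
    print(phase_variance({"Design": 15, "Testing": 25},
                         {"Design": 13, "Testing": 20}))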

13. Effort Distribution Across Phases (%)

Explanation:
A project is planned as per the Phase - Task - Activity hierarchy. Planned
effort and actual effort are recorded for each activity, task and phase.
Phases for a project are decided at the commencement of the project. Effort
distribution across phases is calculated at the end of the project and acts
as an input to future projects, further planning and resource allocation.
To Note:
Give the distribution as per the project phases. There could be a few
projects where some phases are not applicable; do not show them with a
distribution of 0.

Formula:
Distribution = Effort in Each Phase / Total Effort * 100

Calculation Example:
Planning: 5%
Requirements: 10%
Design: 25%
Development: 35%
Testing: 15%
Acceptance: 10%

Interpretation and Causal Analysis Tips:
Effort distribution is the effort spent across phases. High effort spent on
testing indicates stress only towards the end of the product. Baselines are
arrived at, and the project's performance against them has to be measured
and tracked. Appropriate corrective action is to improve the planning and
have good standards and validation procedures.
Phases identified as of now are Presales, Requirements, Design,
Development, Testing, Acceptance, Customer Implementation and Across.

Example Scenarios:
1) Distribution higher or lower than the goal.
Reason could be that the required effort for that phase is not being put in
(or) the organization's effort distribution data is incorrect.
CA/PA should be to put in the required effort for the phases if the
organization data is valid; otherwise, look at the project goals and also
contribute towards the organization's data in the next PCB.
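A sketch of the distribution calculation, which applies equally to metric
14 (activity types). The person-day figures are hypothetical, chosen to
reproduce the example percentages above (names are illustrative):

    def effort_distribution(effort_by_bucket: dict[str, float]) -> dict[str, float]:
        """Distribution (%) = effort per phase (or activity type) / total * 100."""
        total = sum(effort_by_bucket.values())
        return {bucket: effort / total * 100
                for bucket, effort in effort_by_bucket.items()}

    print(effort_distribution({"Planning": 50, "Requirements": 100,
                               "Design": 250, "Development": 350,
                               "Testing": 150, "Acceptance": 100}))
    # -> {'Planning': 5.0, 'Requirements': 10.0, 'Design': 25.0, ...}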

14. Effort Distribution Across Activity Types (%)

Explanation:
A project is planned as per the Phase - Task - Activity hierarchy. Planned
effort and actual effort are recorded for each activity, task and activity
type. Activities are grouped by activity types as per the organization
standard.
To Note:
This metric can be very useful when you have to measure the effort spent on
heads like Measurement, Miscellaneous, Quality Assurance etc., which do not
fall exactly under a phase.

Formula:
Distribution = Effort in Each Activity Group / Total Effort * 100

Calculation Example:
SDLC Regular: 10%
Review: 10%
Test: 15%
Rework-Review: 10%
Rework-Test: 10%
Verification-Review: 5%
Verification-Test: 5%
Quality Assurance: 5%
Configuration Management: 5%
Measurement: 10%
Marketing: 5%
Training: 10%

Interpretation and Causal Analysis Tips:
Effort distribution across activity types is a distribution via a different
parameter - the activity type. It is not restricted to a particular phase
but is determined by the nature of the work: production, appraisal (review
and testing), rework, configuration, measurement etc. The activity type
classification is driven by what we want to measure; as the data matures,
we can evaluate the heads, clubbing or separating them as we proceed.
Activity types identified at this stage are SDLC Regular, Review, Test,
Rework-Review, Rework-Test, Verification-Review, Verification-Test, Quality
Assurance, Configuration Management, Measurement, Marketing and Training.

Example Scenarios:
1) Distribution higher or lower than the goal.
Reason could be that the required effort for that activity type is not
being put in (or) the organization's effort distribution data is incorrect.
CA/PA should be to put in the required effort for the activity types if the
organization data is valid; otherwise, look at the project goals and also
contribute towards the organization's data in the next PCB.

A few activity types can be analyzed with scrutiny:
a) Effort spent on QA and Measurement high: could be due to an increase in
quality facilitation activities / process changes.
b) Effort spent on reviews low: needs to be compared with review
efficiency. If that is also low, the CA needs to be an increase in review
effort. If review efficiency and overall defect removal efficiency are
high, then the result has been good even with less review effort.
c) Effort spent on rework high: look at defect density or RSI. If defect
density is high, rework would have been due to defects. If RSI is low and
size variance is high, rework could be due to changing requirements.
d) Effort spent on CM, PM etc. all need to reflect organization and best
practices.

15. Cost of Quality (%)

Explanation:
Cost of quality is the total effort spent on testing, reviews, QA,
measurement, training and configuration management, as a proportion of the
total effort.

Formula:
Cost of Quality = (Review + Testing + Verification-Review +
Verification-Test + QA + Configuration Management + Measurement + Training
+ Rework-Review + Rework-Test) / Total Effort * 100

Calculation Example:
Testing effort: 80 PD
Review effort: 100 PD
QA effort: 30 PD
Measurement effort: 20 PD
Training effort: 20 PD
Configuration management effort: 20 PD
Cost of quality = 24.00%

Interpretation and Causal Analysis Tips:
Cost of quality is an indicator of how much effort has gone into the SDLC
process beyond the regular activities. It should be neither too high nor
too low, and it has to be studied over a period and trend-analyzed.

16. Cost of Poor Quality (%)

Explanation:
Cost of poor quality is the total effort spent on rework arising from
reviews, testing and verification, as a proportion of the total effort.

Formula:
Cost of Poor Quality = Rework Effort / Total Effort * 100

Calculation Example:
Rework on testing: 40 PD
Rework on reviews: 40 PD
Rework on verification: 50 PD
Cost of poor quality = 11.5%

Interpretation and Causal Analysis Tips:
Cost of poor quality is an indicator of how much effort has gone into
rework and other failure fixations.
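A sketch of both cost metrics. The examples above do not state the total
effort, so the sketch assumes the project's actual effort of 1125 PD from
metric 2, which is consistent with the 24.00% result (all names are
illustrative):

    QUALITY_HEADS = ("Review", "Testing", "Verification-Review",
                     "Verification-Test", "QA", "Config Mgmt",
                     "Measurement", "Training", "Rework-Review", "Rework-Test")

    def cost_of_quality(effort_by_head: dict[str, float], total_effort: float) -> float:
        """CoQ (%) = sum of quality-related effort heads / total effort * 100."""
        return sum(effort_by_head.get(h, 0.0) for h in QUALITY_HEADS) / total_effort * 100

    def cost_of_poor_quality(rework_effort: float, total_effort: float) -> float:
        """CoPQ (%) = rework effort / total effort * 100."""
        return rework_effort / total_effort * 100

    heads = {"Testing": 80, "Review": 100, "QA": 30, "Measurement": 20,
             "Training": 20, "Config Mgmt": 20}
    print(cost_of_quality(heads, 1125))              # -> 24.0
    print(cost_of_poor_quality(40 + 40 + 50, 1125))  # -> 11.56 (text states 11.5)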

17. Defect Density

Explanation:
Defects occur in projects and are found through reviews and testing. Defect
density for the project is defined as the total number of defects per unit
of project size (KLOC/FP).
To Note:
Can be computed at the project level or at the work product level; it is
the same formula with different applicability.

Formula:
Defect Density for the Project = Total No. of Defects / Project Size in
KLOC or FP
Defect Density for a Work Product = Total No. of Defects / Work Product
Size

Calculation Example:
Total no. of defects: 1100
Project size: 200 KLOC or 1000 FP
Defect density = 5.5 defects/KLOC or 1.1 defects/FP

Interpretation and Causal Analysis Tips:
High defect density could be due to:
a) Lack of standards and good practices
b) Skill levels of members
c) Schedule demands and the inability in planning to balance them with good
quality

Corrective action would appropriately lead to:
a) Deployment of standards and stringent review processes at early stages
b) Training

Example Scenarios:
a) Defect density higher than the organization goal.
Reason could be that the quality of the product is low (or) that the defect
detection efficiency is very high; compare with defect detection
productivity.
CA/PA needs to be strengthening the review process and the preparation
process and following standards, so that the defect density can be kept
low.

18. Residual Defect Density

Explanation:
RDD is defined as the number of defects caught in acceptance per unit of
project size (KLOC/FP).

Formula:
Residual Defect Density = No. of Defects Caught in Acceptance / Project
Size

Calculation Example:
No. of defects in acceptance: 50
Project size: 200 KLOC or 1000 FP
RDD = 0.25 defects/KLOC or 0.05 defects/FP

Interpretation and Causal Analysis Tips:
High residual defect density could be due to:
a) Poor product health
b) Deficiency in testing
c) Less time and effort given to internal testing
d) Shipment of the wrong version
e) Parallel testing

Corrective action would be, in order:
a) Good detection practices internally
b) Early testing

Example Scenarios:
a) RDD high: indicates that the internal detection efficiency is not
sufficient.
Reason could be parallel testing, releases to the customer, deficient test
cases and review techniques (or) a complex domain.
CA/PA would be to have a good testing process and techniques as per the
process model, and to train the testers in the complex domain / customer
premise.

Sometimes, as part of process models and delivery methodologies,
intermediate deliverables are agreed upon, and these lead to issues raised
by the customer. Residual defect density causal analysis can highlight this
and show the corrective action taken.
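Defect density (metric 17) and RDD (metric 18) share the same
defects-over-size form; a sketch reproducing both worked examples (names
are illustrative):

    def defect_density(defects: int, size: float) -> float:
        """Defects per unit size (KLOC or FP); also usable per work product,
        and for RDD with acceptance defects only."""
        return defects / size

    print(defect_density(1100, 200))   # -> 5.5 defects/KLOC
    print(defect_density(1100, 1000))  # -> 1.1 defects/FP
    print(defect_density(50, 200))     # RDD -> 0.25 defects/KLOC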

19. Defect Age

Explanation:
Defect age is the difference between the phase detected and the phase
introduced. Defect age is calculated for each defect, and the average
defect age is calculated over all the defects.
To Note:
Phase detected has to be the phase sequence of the project. Phase
introduced is also the sequence number of the phase and should clearly
identify which phase introduced the defect.

Formula:
Defect Age = Sum over all defects of (Phase Detected - Phase Introduced) /
Total No. of Defects

Calculation Example:
Defect introduced: during Requirements - phase 2
Defect detected: during Testing - phase 5
Defect age for this defect is the phase difference, 5 - 2 = 3.

Interpretation and Causal Analysis Tips:
Defect age is the average span between when a defect originates and when it
is detected. Together with phase-wise defect removal efficiency, it
indicates the review efficiency of each process phase.

Example Scenarios:
a) Defect age high.
Reason is that defect detection does not happen in the phase in which the
defect occurred; the review mechanisms in that phase are deficient and the
entire detection happens in testing. CA/PA would be to improve and control
phase-wise detection efficiency, set goals for defects in each phase and
track them.
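A sketch of the average defect age over a list of defects, with phases
encoded as their sequence numbers, reproducing the worked example (names
are illustrative):

    def average_defect_age(defects: list[tuple[int, int]]) -> float:
        """Average of (phase detected - phase introduced) over all defects.
        Each defect is a (detected, introduced) pair of phase numbers."""
        return sum(detected - introduced for detected, introduced in defects) / len(defects)

    # One defect introduced in Requirements (phase 2), detected in Testing (phase 5)
    print(average_defect_age([(5, 2)]))  # -> 3.0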

20. Review Efficiency (%)

Explanation:
Defects are detected via two detection mechanisms: reviews and testing.
Review efficiency is the number of defects detected in reviews compared to
the total defects. This metric, along with the phase in which defects
occurred, can determine the strength of the review process.

Formula:
Review Efficiency = No. of Defects Caught in Reviews / Total No. of Defects
Caught * 100

Calculation Example:
Total no. of defects in reviews: 650
Total no. of defects (reviews + testing): 1100
Review efficiency = 650 / 1100 * 100 = 59.09%

Interpretation and Causal Analysis Tips:
Review efficiency that is low, or variant against the baseline, indicates:
a) Insufficient review processes upfront
b) Reviews sacrificed due to time pressures

Corrective action would be to:
a) Plan for reviews upfront, whatever the time frames may be
b) Have parallel tasks, but not compromise on reviews

Example Scenarios:
a) Review efficiency low.
Reason could be that the effort spent on reviews is low (or) that review
skills are low. CA/PA would be to increase the effort and also the skills
needed for reviews.

21. Testing Efficiency (%)

Explanation:
Defects are detected internally in reviews and testing. Testing efficiency
indicates how many defects are detected in testing before being detected at
the customer site. It is analyzed by comparing the defects caught in
acceptance by the customer with the total number of defects caught in
testing.

Formula:
Testing Efficiency = (1 - (Defects Found in Acceptance / Total No. of
Testing Defects)) * 100

Calculation Example:
Total no. of defects in testing: 450
No. of defects in acceptance testing: 50
Testing efficiency = (1 - 50 / 450) * 100 = 88.88%

Interpretation and Causal Analysis Tips:
Testing efficiency indicates the efficiency of the testing process. The
analysis is the same as for review efficiency.

22. Overall Defect Removal Efficiency (%)

Explanation:
Defects are detected internally in reviews and testing. Defect removal
efficiency indicates how many defects are detected before being detected at
the customer site. This is inclusive of reviews and testing.

Formula:
Defect Removal Efficiency = (1 - (Total Defects Caught by the Customer /
Total No. of Defects)) * 100

Calculation Example:
No. of defects caught by the customer during review, acceptance and
implementation: 80
Total no. of defects in reviews and testing: 1100
Defect removal efficiency = (1 - 80 / 1100) * 100 = 92.72%

Interpretation and Causal Analysis Tips:
Overall defect removal efficiency captures the capability of catching
defects internally as opposed to at the customer site.
Reasons for low DRE could be:
a) Less time and effort planned for testing
b) Last-minute fixes
c) The wrong build shipped, etc.

Example Scenarios:
Same as for residual defect density.
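A combined sketch of the three efficiency formulas (metrics 20-22),
reproducing the worked examples; the text appears to truncate rather than
round its percentages (names are illustrative):

    def review_efficiency(review_defects: int, total_defects: int) -> float:
        """Review efficiency (%) = review defects / total defects caught * 100."""
        return review_defects / total_defects * 100

    def testing_efficiency(acceptance_defects: int, testing_defects: int) -> float:
        """Testing efficiency (%) = (1 - acceptance / testing defects) * 100."""
        return (1 - acceptance_defects / testing_defects) * 100

    def defect_removal_efficiency(customer_defects: int, total_defects: int) -> float:
        """Overall DRE (%) = (1 - customer-caught / total defects) * 100."""
        return (1 - customer_defects / total_defects) * 100

    print(round(review_efficiency(650, 1100), 2))         # -> 59.09
    print(round(testing_efficiency(50, 450), 2))          # -> 88.89 (text: 88.88)
    print(round(defect_removal_efficiency(80, 1100), 2))  # -> 92.73 (text: 92.72)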

23. Phase-wise Detection Efficiency (%)

Explanation:
Defects detected, occurring and escaping in each phase are counted, and the
phase-wise detection efficiency is arrived at.

Formula:
Based on the defects detected in each phase compared to the incoming
escapes and overall escapes.

Calculation Example:
Phase-wise detection efficiency:
Requirements: 50%
Design: 45%

Interpretation and Causal Analysis Tips:
If the phase-wise detection efficiency is low compared to the organization
goal, the review or testing in that phase is not efficient. If phase-wise
detection is low compared to the organization data, the review/testing
mechanism needs to be strengthened.

Example Scenarios:
Both this metric and defect age concentrate on detection efficiency and on
comparing estimated vs. actual defects in each phase.

24. Defect Distribution Across Phases (%)

Explanation:
Defects are analyzed by phase. The number of defects in each phase is
proportioned against the total number of defects, and the ratio is given.
This metric is applied to the total defects found in the project in reviews
and testing.

Formula:
% of Defects in Each Phase = Defects in the Phase / Total Defects * 100

Calculation Example:
Defect distribution across the phases:
Planning: 5%
Requirements: 10%
Design: 25%
Development: 35%
Testing: 15%
Acceptance: 10%

Interpretation and Causal Analysis Tips:
Defect distribution across phases indicates when defects are caught. A
baseline or goal can be set so that a greater percentage of defects is
caught in requirements and design, where the cost of fixing is lower.
Analysis then has to examine why the percentage distribution varied. The
reason may lie in the validation process of the relevant phases, and the
corrective action is to have a uniform validation plan across phases rather
than relying on testing alone.

Example Scenarios:
Estimated defects need to be analyzed against the actual defects
throughout, and the CA/PA should be taken in the corresponding phase.

25. Defect Distribution Across Severities (%)

Explanation:
Defects are analyzed by severity. The number of defects for each item in
the classification is proportioned against the total number of defects, and
the ratio is given. This metric is applied to the total defects found in
the project in reviews and testing.

Formula:
% of Defects per Severity = Defects of the Severity / Total Defects * 100

Calculation Example:
Severe: 30%
Moderate: 40%
Cosmetic: 30%

Interpretation and Causal Analysis Tips:
How the defects are distributed by severity is measured and analyzed.

Example Scenarios:
a) There are no specific goals for these metrics; the analysis should
simply look at which type of defect is more frequent and appropriately
concentrate testing/reviews in that area.
b) Cause-wise analysis would help in correcting the appropriate cause and
arriving at an appropriate defect prevention action for it.

26. Defect Distribution Across Types (%)

Explanation:
Defects are analyzed by type. The number of defects for each item in the
classification is proportioned against the total number of defects, and the
ratio is given. This metric is applied to the total defects found in the
project in reviews and testing.

Formula:
% of Defects per Type = Defects of the Type / Total Defects * 100

Calculation Example:
Alignment: 25%
Coding: 30%
Functional: 20%
RC: 15%
Others: 10%

Interpretation and Causal Analysis Tips:
How the defects are distributed by type is measured and analyzed.

27. Defect Distribution Across Causes (%)

Explanation:
Defects are analyzed by cause. The number of defects for each item in the
classification is proportioned against the total number of defects, and the
ratio is given. This metric is applied to the total defects found in the
project in reviews and testing.

Formula:
% of Defects per Cause = Defects of the Cause / Total Defects * 100

Calculation Example:
Lack of Standards: 20%
Process Deficiency: 10%
Lack of Training: 10%
Schedule Pressure: 15%
New Technology: 10%
Clerical Error: 5%
Others: 10%
Lack of Domain Knowledge: 20%

Interpretation and Causal Analysis Tips:
How the defects are distributed by cause is measured and analyzed.
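Metrics 24-27 all compute the same percentage breakdown over a
classification. A sketch using hypothetical severity labels that reproduce
the metric 25 example (names are illustrative):

    from collections import Counter

    def defect_distribution(defects: list[str]) -> dict[str, float]:
        """% of defects per bucket (phase, severity, type, or cause)."""
        counts = Counter(defects)
        total = len(defects)
        return {bucket: n / total * 100 for bucket, n in counts.items()}

    sample = ["Severe"] * 30 + ["Moderate"] * 40 + ["Cosmetic"] * 30
    print(defect_distribution(sample))
    # -> {'Severe': 30.0, 'Moderate': 40.0, 'Cosmetic': 30.0}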
