
Greg Reiser
Updated November 18
ISG SW Engineering - Lean Enterprise /Agile Transformation

Proposed Program Dashboard
22 November 2011

"Our highest priority is to satisfy the customer through
early and continuous delivery of valuable software.

"Working software is the primary measure of progress.
Source - Dave Nicolette
Guiding Principles for Metrics
Measure outcomes, not activity.

Various authors
Agile Balanced Metrics (Forrester)
Operational Excellence
Project Management
Organizational Effectiveness
User Orientation
User Satisfaction
Responsiveness to needs
Service Level Performance
IT Partnership
Business Value
Business value of projects
Alignment with strategy
Synergies across business units
Future Orientation
Development capability improvement
Use of emerging processes and
Skills for future needs
Agile Balanced Metrics (NCR)
Operational Excellence
New-Functionality Ratio
Internal Code Quality
Build Hygiene
Percent Accurate and Complete
Defect Resolution Time
User Orientation
Customer Satisfaction
External Customers
Internal Customers
Business Value
Feature Lead Time
Cost per Feature Point
Future Orientation
Teams Agile Maturity
Number of Agile Practitioners
Number of Agile Leads
Future Orientation: Agile Maturity
What it is: Measure of team agility (ability to respond to customer demand and change) based on the ThoughtWorks Agile Maturity Model.
Measurement: Qualitative assessment along 10 dimensions of software development.

Purpose: Assess how teams are progressing towards a targeted future state.
Caveat: The AMM is not a compliance tool. Its intent is to define the current state of a software development team with respect to agile principles and practices, develop a plan for change, and track progress against that plan.
Future Orientation: Agile Practitioners and Leaders
What it is: The number (and rate of increase) of qualified Agile Practitioners and Leaders in ISG Software Engineering.
Measurement: Use a variation of Net Promoter Score to assess individuals:
"On a scale of 0 to 10, this person does not require coaching support in order to be a positive contributor on an agile team."
"On a scale of 0 to 10, this person is effective as an agile coach for one or more roles."

Initial assessments will be performed by ThoughtWorks coaches. As NCR staff achieve Practitioner and Leader status, they will assume this responsibility.
Purpose: Monitor the rate at which NCR staff are developing agile expertise. Ensure that NCR is on track to develop the skills required to support broad agile adoption.
Project Experience
Focused Coaching
Ongoing Support
Operational Excellence: New Functionality Ratio
What it is: The ratio of effort (cost) spent on new feature-functionality vs. support.

Purpose: Well-run agile teams generate higher-quality, fit-for-purpose functionality and minimize the amount of effort spent on marginal-value features. This translates into lower failure, customer service, and other support costs. This in turn translates into increased capacity to innovate and satisfy customer demand.
When developing new functionality, the cost of defects discovered at the end of the development
lifecycle should be counted as Support (Appraisal or Internal Failure costs)
When developing new functionality, the cost of defects discovered earlier in the development cycle
should be counted as New Functionality costs (Prevention costs)
Defects reported by consumers of shared components (Product and Solution Teams) should be
treated as Support (External Failure costs) by Component Teams
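Given the cost-classification rules above, the ratio itself is simple arithmetic. This Java sketch is illustrative only; the new-work-over-total-cost formulation is an assumption, since the slide does not fix a denominator:

```java
public class NewFunctionalityRatio {
    // Share of total effort (cost) going to new feature-functionality.
    // supportCost covers appraisal, internal failure, and external failure
    // costs per the classification rules above (an assumed grouping).
    public static double ratio(double newFunctionalityCost, double supportCost) {
        return newFunctionalityCost / (newFunctionalityCost + supportCost);
    }
}
```

A team spending 80 units on new work and 20 on support would score 0.8; a rising ratio suggests falling failure and support costs.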
Operational Excellence: Internal Code Quality
What it is: Metrics that describe the design quality of the software in a product.
The four Cs: Coverage, Complexity, Cohesion, Coupling
See Appendix for details
Purpose: Compares a codebase to generally accepted guidelines for good design and identifies opportunities for making software more malleable. Increasing adherence to such guidelines, decreasing Defect Resolution Time, and shorter Feature Lead Time are all indicators of reducing technical debt.
Tools such as Sonar can collect and report on a broad range of metrics; it's better to focus on a small subset and adapt with caution as you learn how to use metrics to drive desired behavior.
Operational Excellence: Internal Code Quality - Dashboard Example
Operational Excellence: Build Hygiene
What it is: Number of builds and percentage of successful builds in a given timeframe. This can be easily monitored through a Sonar plugin.
Number of builds (in a given timeframe)
Build Success % (in a given timeframe)

Purpose: Drive a desired development behavior.
Number of builds = 15 builds in the last 30 days
Build Success = 86.7% in the last 30 days
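The example numbers work out as follows; a minimal Java sketch (class and method names are invented for illustration):

```java
public class BuildHygiene {
    // Build Success % = successful builds / total builds x 100.
    public static double successPercent(int successfulBuilds, int totalBuilds) {
        return 100.0 * successfulBuilds / totalBuilds;
    }
}
```

13 successful builds out of 15 gives the 86.7% shown above.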
Operational Excellence: Percent Accurate and Complete
What it is: Percent of stories that do not revert to an earlier state.
(Ideal State Transitions / Total State Transitions) x 100

Purpose: Indicator of rework (waste). Measuring at the individual state transition level identifies opportunities for continuous improvement.
Story Lifecycle: Not Started → Analysis → Development → Testing → Acceptance Testing → Deployed
Size of Backlog = 100 stories; hence, the ideal number of forward state transitions for the entire project is 500 (5 x 100).
Actual Experience:
20 instances of stories reverting from UAT to Testing
50 instances of stories reverting from Testing to Development (may include many of the above 20)
PAC (Dev-to-Test) = (100/150) x 100 = 67%
PAC (Test-to-UAT) = (100/120) x 100 = 83%
PAC (Project) = (500/570) x 100 = 88%
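The PAC arithmetic above can be sketched in Java; the only assumption is the counting shortcut that every reversion forces exactly one extra forward transition later on:

```java
public class PercentAccurateAndComplete {
    // PAC = ideal forward transitions / actual forward transitions x 100.
    // Each reversion adds one additional forward transition.
    public static double pac(int idealTransitions, int reversions) {
        return 100.0 * idealTransitions / (idealTransitions + reversions);
    }
}
```

pac(100, 50) reproduces the 67% Dev-to-Test figure, pac(100, 20) the 83% Test-to-UAT figure, and pac(500, 70) the 88% project figure.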
Operational Excellence: Defect Resolution Time
What it is: Average time between defect identification and resolution.
Defect Closed Timestamp - Defect Open Timestamp

Purpose: Demonstrates the malleability of the codebase and the extent to which the team has adopted a zero-defect culture.
Malleability of the Codebase: Disciplined agile teams strive to limit technical debt as much as possible. Technical debt is assessed at two levels: defects, and code metrics that indirectly describe how easy it is to maintain and enhance the software (malleability). Rapid resolution of defects is one measure of the business benefit of low technical debt.
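The closed-minus-open average might be computed as in this Java sketch; the parallel open/close arrays are an assumption about how a defect tracker would export the data:

```java
import java.time.Duration;
import java.time.Instant;

public class DefectResolutionTime {
    // Average of (closed timestamp - open timestamp) across resolved defects.
    public static Duration average(Instant[] opened, Instant[] closed) {
        long totalSeconds = 0;
        for (int i = 0; i < opened.length; i++) {
            totalSeconds += Duration.between(opened[i], closed[i]).getSeconds();
        }
        return Duration.ofSeconds(totalSeconds / opened.length);
    }
}
```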
Business Value: Feature Lead Time
What it is: The average time to deploy a feature, measured from the time the development team begins work on it.
Date Feature Deployed - Date Analysis Begins on First Relevant Story
Deployed = Feature is shippable
Shippable (Component Teams) = Binary is fully tested and available for consumption by Product Teams
Shippable (Product Teams) = Feature is part of a fully tested release; customer deployment is strictly a business decision.
Purpose: Measures the responsiveness of the team once a feature has been identified as the next highest priority.
Why not start measurement when the feature is first identified?
It is much more important to be responsive with respect to higher priority features. If the metric
starts at feature identification time teams will be tempted to work on the easiest features regardless
of priority.
If consumers are not receiving high-quality components fast enough to meet their business commitments, they don't need a metric to tell them that. If the root cause is determined to be high-priority features queuing up to get started, this indicates an obvious capacity issue rather than a process issue.
Business Value: Defects
What it is: Number of defects reported after a story has been flagged as Done by the testers that are embedded in the development team.
Consider the following story life cycle:
New → In Analysis → Ready for Dev → In Dev → Ready for Test → In Test → Done
Track defects reported by any downstream activities (e.g., component integration test, controlled deployment, professional services, external customers, etc.)
Raw number of defects reported per severity level and time (activity) of detection
Report in terms of density (per KLOC) and technology stack when comparing across projects
Purpose: One indicator of the quality of software produced.
Why not record defects identified earlier in the story lifecycle?
Testing within a sprint is a defect prevention cost (up-front acceptance criteria and TDD are other examples of defect prevention). We want to encourage defect prevention by focusing on the expected reduction in defect appraisal, internal failure, and external failure costs. Hence the focus on defects that are indicators of those other quality costs.

Business Value: Cost per Feature Point
What it is: Cost per deployed unit of business value, where units are "feature points" as defined by the Product Owner.
Development Cost / Feature Points Deployed
Development Cost = Development costs incurred within a specific time frame. If comprehensive development costs are difficult to obtain, hours of effort may serve as a reasonable proxy.
Feature Point = Relative units of business value for a feature, as determined by the Product Owner (Solution Manager)
Feature Points Deployed = Sum of feature points for those features deployed (shippable) during the target time frame

Purpose: The direction of change indicates whether teams and the organization are becoming more or less productive.
Recommendation: Trend is more important than raw value. If used to compare teams, limit comparison to teams that serve the same line of business. Since feature points are subjective values assigned by Product Owners, comparisons are only valid where there is consistency amongst the people who work together.
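The division itself is trivial, but writing it down makes the proxy explicit. A Java sketch; using hours of effort as the cost input is the fallback the definition above already allows:

```java
public class CostPerFeaturePoint {
    // Development cost (or hours of effort as a proxy) per feature point
    // deployed in the same time frame. Track the trend, not the raw value.
    public static double costPerPoint(double developmentCost, double featurePointsDeployed) {
        return developmentCost / featurePointsDeployed;
    }
}
```

For example, 120,000 units of cost against 60 deployed feature points gives 2,000 per point; whether that is good or bad only emerges from the trend.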
User Orientation: Customer Satisfaction
What it is: Net Promoter Score (NPS) for NCR, ISG Software Engineering, and individual project teams.
Use the Net Promoter Score methodology with the following customer groups:
External Customers (e.g., Kohl's, Toys R Us, Tesco, etc.)
Professional Services and ISG Solution Management (i.e., the customers of ISG Software Engineering)
Consumers of Component Team products (e.g., the Vision, Travel Air, and SSCO teams are consumers of components developed by the P24 team)
Purpose: Simple way to collect and monitor how well ISG Software Engineering is responding to the needs of its customers at multiple levels. Reinforces a customer-centric culture even for teams that are several levels detached from NCR's external customers.
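The standard NPS calculation (promoters score 9-10, detractors 0-6, passives 7-8) can be sketched in Java; the same formula applies to the Practitioner/Leader assessments described earlier:

```java
public class NetPromoterScore {
    // NPS = % promoters (scores 9-10) minus % detractors (scores 0-6);
    // passives (7-8) only count toward the total number of responses.
    public static long nps(int[] scores) {
        int promoters = 0, detractors = 0;
        for (int s : scores) {
            if (s >= 9) promoters++;
            else if (s <= 6) detractors++;
        }
        return Math.round(100.0 * (promoters - detractors) / scores.length);
    }
}
```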
Operational Excellence: Internal Code Quality - Coverage
What it is: A measurement of the extent to which lines and branches are executed as part of some test.

Line Coverage = Lines of code reached by a unit test / Executable lines of code

Branch Coverage = (Branches that evaluate to True at least once + Branches that evaluate to False at least once) / (2 x Total number of branches)

Purpose: Indicates how much code isn't executed by tests.
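Both formulas as Java arithmetic; reporting the results as percentages is an assumption about the unit, the definitions themselves follow the two ratios above:

```java
public class Coverage {
    // Line Coverage % = lines reached by a unit test / executable lines x 100.
    public static double linePercent(int linesReached, int executableLines) {
        return 100.0 * linesReached / executableLines;
    }

    // Branch Coverage % counts each branch twice: once for the True
    // outcome and once for the False outcome.
    public static double branchPercent(int trueAtLeastOnce, int falseAtLeastOnce, int totalBranches) {
        return 100.0 * (trueAtLeastOnce + falseAtLeastOnce) / (2 * totalBranches);
    }
}
```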

Operational Excellence: Internal Code Quality - Complexity
What it is: McCabe Metric or Cyclomatic Complexity Number (CCN); the number of independent flow-paths through a method/function.

CCN = (no. of branches in method: if, for, &&, ||, etc.) + 1

Purpose: A quantitative indicator of the complexity of code.
Example: a method with a CCN of 3
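A minimal Java sketch of such a method (names are invented for illustration): the if and the && are two decision points, so CCN = 2 + 1 = 3.

```java
public class CcnExample {
    // Decision points: the `if` and the `&&` -- CCN = 2 + 1 = 3.
    public static String admit(int age, boolean hasTicket) {
        if (age >= 18 && hasTicket) {
            return "admit";
        }
        return "deny";
    }
}
```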

Operational Excellence: Internal Code Quality - Cohesion
What it is: A measure of how well a class follows the single responsibility principle. High cohesion is a good thing.

Cohesion = Number of connected methods and fields

Purpose: Measures the number of connected components in a class.
// Component 1: encryption concerns (privKey, encode, decode)
private final String privKey = readSecurelyFromSomewhere();
private String encode(String plainText, String pubKey) { }
private String decode(String encrypted, String pubKey) { }

// Component 2: session concerns, unconnected to the members above --
// two connected components in one class indicate low cohesion
public void login(String uname, String pwd) { }
public void logout(String uname) { }
Operational Excellence: Internal Code Quality - Coupling
What it is: A measure of dependency, both of and on a class. Low coupling is a good thing.

Afferent = number of other classes that use this class
Efferent = number of other classes used by this class

Purpose: Measures the number of adjacent classes in the dependency tree.

T clone<T> (T me)

Afferent: Arrive at class
Efferent: Exit class
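The arrive/exit mnemonic in a two-class Java sketch (class names and the tax rate are invented for illustration): OrderService's efferent coupling is 1, and seen from PriceCalculator, OrderService is an afferent dependency.

```java
// PriceCalculator has afferent coupling 1 (OrderService arrives at it);
// OrderService has efferent coupling 1 (it exits to PriceCalculator).
class PriceCalculator {
    static double withTax(double net) {
        return net * 1.25; // assumed 25% tax rate, purely illustrative
    }
}

public class OrderService {
    public static double total(double net) {
        return PriceCalculator.withTax(net);
    }
}
```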
Remember the Future Exercise
Partnership Quality
Time to market
Higher value / lower cost
Reduce waste (marginal
value features)
Clear backlog short/long
Less rework; shorter cycle
Dollar value of software
Clarity of customer
Understanding of goals
Reusing software across
Sustainable rhythms
Being innovative
New product ideas
introduced into products
Metrics Template
Scorecard graphic: a "What it is" description for each metric, arranged in four quadrants (Operational Excellence, User Orientation, Business Value, Future Orientation).