MANAGEMENT
PRODUCTIVITY MEASUREMENT
TEAM MEMBERS:
Apoorva Jain
Tarun Daga
PRODUCTIVITY
There are many different productivity measures. The choice between them depends on the purpose of productivity measurement and, in many instances, on the availability of data. Broadly, productivity measures can be classified as single-factor productivity measures (relating a measure of output to a single measure of input) or multifactor productivity measures (relating a measure of output to a bundle of inputs). Another distinction, of particular relevance at the industry or firm level, is between productivity measures that relate some measure of gross output to one or several inputs and those which use a value-added concept to capture movements of output.
Table 1 uses these criteria to enumerate the main productivity measures. The list is incomplete insofar as single-factor productivity measures can also be defined over intermediate inputs, and labour-capital multifactor productivity can, in principle, be evaluated on the basis of gross output. However, in the interest of simplicity, Table 1 is restricted to the most frequently used productivity
measures. These are measures of labour and capital productivity, and
multifactor productivity measures (MFP), either in the form of capital-labour MFP,
based on a value-added concept of output, or in the form of capital-labour-
energy-materials MFP (KLEMS), based on a concept of gross output. Among
those measures, value-added based labour productivity is the single most
frequently computed productivity statistic, followed by capital-labour MFP and
KLEMS MFP.
These measures are not independent of each other. For example, it is possible to
identify various driving forces behind labour productivity growth, one of which is
the rate of MFP change. This and other links between productivity measures can
be established with the help of the economic theory of production.
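One such link, standard in growth accounting, is that value-added based labour productivity growth equals MFP growth plus the capital income share times the growth of capital per unit of labour ("capital deepening"). A minimal numeric sketch of this decomposition follows; the figures are illustrative, not from the text:

```python
def labour_productivity_growth(mfp_growth, capital_share, capital_deepening):
    """Growth-accounting link between productivity measures:
    labour productivity growth = MFP growth
                                 + capital share * capital deepening,
    where capital deepening is the growth of capital per unit of
    labour. All rates are expressed as fractions."""
    return mfp_growth + capital_share * capital_deepening

# Illustrative figures: 1% MFP growth, a capital share of 0.3,
# and capital per worker growing at 2% imply labour productivity
# growth of about 1.6%.
growth = labour_productivity_growth(0.01, 0.3, 0.02)
```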
The following sections review the five most widely used productivity concepts, pointing out the major advantages and drawbacks and briefly interpreting each measure.
Labour Productivity (based on gross output)
Labour Productivity (based on value added)
Capital-Labour MFP (based on value added)
Capital Productivity (based on value added)
KLEMS MFP (based on gross output)
Economic Growth and Productivity
Total productivity = Output quality and quantity / Input quality and quantity
Among the main processes of a company are the production process and the monetary process.
Real process generates the production output, and it can be described by means
of the production function. It refers to a series of events in production in which
production inputs of different quality and quantity are combined into products of
different quality and quantity. Products can be physical goods, immaterial services, or, most often, combinations of both. The characteristics the manufacturer builds into the product carry surplus value for the consumer, and on the basis of the price this value is shared between the consumer and the producer in the marketplace. This is the mechanism through which surplus value accrues to the consumer and the producer alike. Surplus value to the producer is a result of the real process, and measured proportionally it is productivity.
The production process consists of the real process and the income distribution
process. A result and a criterion of success of the production process is
profitability. The profitability of production is the share of the real process result
the producer has been able to keep to himself in the income distribution
process. Factors describing the production process are the components of
profitability, i.e., returns and costs. They differ from the factors of the real
process in that the components of profitability are given at nominal prices
whereas in the real process the factors are at fixed prices.
Productivity Model
The Challenge of Productivity Measurement
(in context to the software industry- case study)
Abstract
In an era of tight budgets and increased outsourcing, getting a good measure of
an organization’s productivity is a persistent management concern.
Unfortunately, experience shows that no single productivity measure applies in
all situations for all purposes. Instead, organizations must craft productivity
measures appropriate to their processes and information needs. This article
discusses the key considerations for defining an effective productivity measure.
It also explores the relationship between quality and productivity. It does not
advocate any specific productivity measure as a general solution.
Introduction
A productivity measure is commonly understood as a ratio of outputs produced
to resources consumed. However, the observer has many different choices with
respect to the scope and nature of both the outputs and resources considered.
For example, outputs might be measured in terms of delivered product or
functionality, while resources might be measured in terms of effort or monetary
cost. Productivity numbers may be used in many different ways, e.g., for project
estimation and process evaluation. An effective productivity measure enables
the establishment of a baseline against which performance improvement can be
measured. It helps an organization make better decisions about investments in
processes, methods, tools, and outsourcing. In addition to the wide range of
possible inputs and outputs to be measured, the interpretation of the resulting
productivity measures may be affected by other factors such as requirements
changes and quality at delivery. Much of the debate about productivity
measurement has focused narrowly on a simplistic choice between function
points and lines of code as size measures, ignoring other options as well as
many other equally important factors. Despite the complexity of the software
engineering environment, some people believe that a single productivity
measure can be defined that will work in all circumstances and satisfy all
measurement users’ needs. This article suggests that productivity must be
viewed and measured from multiple perspectives in order to gain a true
understanding of it.
International Standards
One might hope to look to the international standards community for guidance
on a common industry problem such as productivity measurement. While some
help is available from this direction, it is limited. The most relevant resources are
as follows:
• SEI technical reports discuss how to define effort [12] and size measures
[13], but give little guidance on how they can be combined to compute
things such as productivity. Thus, the SEI reports discuss considerations in
defining base measures (using the ISO/IEC Standard 15939 terminology),
while IEEE Standard 1045 suggests methods of combining base measures
to form derived measures of productivity. Note that none of these
standards systematically addresses the factors that should be considered
in choosing appropriate base measures and constructing indicators of
productivity for specific purposes.
Figure 1: Levels of a Measurement Construct
The Concept of Productivity
The simple model of Figure 2 illustrates the principal entities related to the
measurement and estimation of productivity. A process converts input into output, consuming resources to do so. We can focus on the overall software
process or a sub process (contiguous part of the process) in defining the scope
of our concern. The input may be the requirements statement for the overall
software process and for the requirements verification sub process, or the
detailed design for the coding process (as another example). Thus
(requirements) input may consist of initial product requirements or previous
work products provided as input to a sub process. In this model the
“requirements” are relative to the process or sub process under consideration.
Using this model, the numerator of productivity may be the amount of product, volume of requirements, or value of the product (that is, things that flow out of the process or sub process). The denominator of productivity may be the amount or
cost of the resources expended. The designer of a productivity measure must
define each of the elements of the model in a way that suits the intended use
and environment in which the measurement is made.
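The model reduces to a simple ratio whose numerator and denominator the measure designer must choose. A minimal sketch of that generic ratio; the function name and figures are illustrative, not from the article:

```python
def productivity(output_size, resources):
    """Generic productivity ratio: output produced per unit of
    resources consumed. 'output_size' may be an amount of product,
    functionality, or value; 'resources' may be effort or cost.
    The designer of the measure decides what each one means."""
    if resources <= 0:
        raise ValueError("resources must be positive")
    return output_size / resources

# e.g. 12,000 lines of code delivered for 60 staff-months of effort
p = productivity(12_000, 60)  # 200 lines per staff-month
```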
Size Measurement
This section describes the two most common methods for measuring size – the numerator of the productivity equation. These are Function Points and Lines of Code. Function Points is a functional (input) size measure, while Lines of Code is a physical (output) size measure.
Lines of Code
Perhaps the most widely used measure of software size is Lines of Code. One of the major weaknesses of Lines of Code is that it can be determined with confidence only at project completion; that does, however, make it a good choice for measuring productivity after the fact. The first decision that must be made in
measuring Lines of Code is determining what to count. Two major decisions are
1) whether to count commentary or not, and 2) whether to count lines or
statements. From the productivity perspective, comments require relatively little
effort and add no functionality to the product so they are commonly excluded
from consideration.
• The choice between lines and statements is not so clear. “Line” refers to a
line of print on a source listing. A statement is a logical command
interpretable by a compiler or interpreter. Some languages allow multiple
logical statements to be placed on one line. Some languages tend to
result in long statements that span multiple lines. These variations can be
amplified by coding practices. The most robust measure of Lines of Code
is generally agreed to be “non-comment source statements”.
When a product mixes code from different sources, each category of source lines is commonly weighted by the relative effort it requires:
• New – 100%
• Modified – 40 to 60%
• Reused – 20 to 40%
Ideally the weights are determined by the analysis of historical data from the
organization. The concept of Equivalent Source Lines of Code makes it possible
to determine the productivity of projects with varying mixes of software sources.
Alternatively, some general adjustment factor can be applied to the software
size as a whole to account for reuse and other development strategies.
However, that captures the effect less precisely than counting the lines from
different sources separately.
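The weighted-sum calculation described above can be sketched as follows. The default weights are mid-range placeholders taken from the ranges in the list, not organization-specific values:

```python
def equivalent_sloc(new, modified, reused, w_modified=0.5, w_reused=0.3):
    """Equivalent Source Lines of Code: weight each category of code
    by the relative effort it requires (new code counts at 100%).
    The default weights are mid-range placeholders; ideally they are
    derived from analysis of the organization's historical data."""
    return new + w_modified * modified + w_reused * reused

# A project delivering 10,000 new, 4,000 modified, and 5,000 reused
# lines has an equivalent size of 13,500 ESLOC.
esloc = equivalent_sloc(10_000, 4_000, 5_000)
```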
Resource Measurement
The denominator, resources, is widely recognized and relatively easily determined. Nevertheless, the obvious interpretation of resources (whether effort or monetary units) can be
misleading. The calculation of productivity often is performed using only the
development costs of software. However, the magnitude of development
resources is somewhat arbitrary. The two principal considerations that must be
addressed are 1) the categories of cost and effort to include and 2) the period of
the project life cycle over which they are counted.
Four categories of labour may be considered in calculating productivity:
engineering, testing, management, and support (e.g., controller, quality
assurance, and configuration management). Limiting the number of categories
of labour included increases the apparent productivity of a project. Calculations
of productivity in monetary units may include the costs of labour as well as
facilities, licenses, travel, etc. When comparing productivity across organizations
it is essential to ensure that resources are measured consistently, or that
appropriate adjustments are made.
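The effect of the first consideration can be made concrete: counting fewer labour categories shrinks the denominator and inflates apparent productivity. A sketch with hypothetical figures:

```python
# Hypothetical effort (staff-months) by labour category.
effort = {"engineering": 50, "testing": 20,
          "management": 10, "support": 8}
size = 13_500  # delivered size, e.g. equivalent SLOC

# Counting all four categories versus engineering effort only.
all_in = size / sum(effort.values())    # ~153 per staff-month
eng_only = size / effort["engineering"]  # 270 per staff-month
# The same project appears roughly 76% more productive when only
# engineering effort is counted, which is why cross-organization
# comparisons require consistently measured resources.
```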
Requirements Churn and Quality at Delivery
Figures 3a, 3b, and 3c illustrate the effect of the period of measurement on the
magnitude of the resource measure. These figures show the resource profile
(effort or cost) for a hypothetical project broken into three categories:
production, rework, and requirements breakage. Requirements breakage
represents work lost due to requirements changes. This may be 10 to 20 percent
of the project cost. Rework represents the resources expended by the project in
repairing mistakes made by the staff. Rework has been shown to account for 30
to 50 percent of the costs of a typical software project [5]. Usually, rework effort
expended prior to delivery of the product is included in the calculation of
productivity, while rework after delivery usually is considered “maintenance”.
However, this latter rework is necessary to satisfy the customer.
The project in Figure 3b is similar to that in Figure 3a in every other respect, except that it delivered later and had more time to fix the identified problems. This latter project would
be judged to have “lower” development productivity, although the total life cycle
costs of ownership would be very similar for the two projects. Thus,
development cost (and consequently apparent productivity) are affected by the
decision on when and under what conditions the software is to be delivered. The
true productivity of the two projects is essentially identical.
Comparing Figures 3b and 3c shows the impact of requirements churn. While the two projects deliver at the same time, and so exhibit the same apparent productivity, they may experience different amounts of requirements breakage. The project in Figure 3b, with the larger requirements breakage, actually has to produce with a higher "real" productivity to deliver the same output as the project in Figure 3c, where requirements breakage is lower.
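The gap between apparent and "real" productivity can be illustrated numerically: work lost to requirements breakage was produced but never delivered, so real productivity credits the total work done rather than only the surviving output. A sketch under hypothetical figures:

```python
def apparent_and_real_productivity(delivered_size, effort,
                                   breakage_fraction):
    """'Apparent' productivity counts only delivered output per unit
    of total effort. 'Real' productivity excludes the effort spent on
    work later discarded due to requirements changes, crediting the
    team for the work it actually produced."""
    apparent = delivered_size / effort
    productive_effort = effort * (1 - breakage_fraction)
    real = delivered_size / productive_effort
    return apparent, real

# Two projects delivering the same size for the same total effort,
# but experiencing 20% versus 5% requirements breakage.
a_hi, r_hi = apparent_and_real_productivity(13_500, 88, 0.20)
a_lo, r_lo = apparent_and_real_productivity(13_500, 88, 0.05)
# Apparent productivity is identical; real productivity is higher
# for the project that lost more work to breakage.
```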
Figure 3a – Apparent “High” Productivity Project
Physical Productivity
This is a ratio of the amount of product produced (for example, Lines of Code) to the resources consumed (usually effort).
Functional Productivity
This is a ratio of the amount of the functionality delivered to the resources
consumed (usually effort). Functionality may be measured in terms of use cases,
requirements, features, or function points (as appropriate to the nature of the
software and the development method). Typically, effort is measured in terms of
staff hours, days, or months. Traditional measures of Function Points work best
with information processing systems. The effort involved in embedded and
scientific software is likely to be underestimated with these measures, although
several variations of Function Points have been developed that attempt to deal
with this issue.
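As a minimal illustration of the ratio just described (the figures are hypothetical):

```python
def functional_productivity(function_points, staff_months):
    """Functionality delivered per unit of effort. Functionality may
    equally be counted in use cases, requirements, or features, as
    suits the software and development method."""
    return function_points / staff_months

# A project delivering 480 Function Points over 60 staff-months
# achieves 8 FP per staff-month.
fp_rate = functional_productivity(480, 60)
```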
Economic Productivity
This is a ratio of the value of the product produced to the cost of the resources
used to produce it. Economic productivity helps to evaluate the economic
efficiency of an organization. Economic productivity usually is not used to
predict project cost because the outcome can be affected by many factors
outside the control of the project, such as sales volume, inflation, interest rates,
and substitutions in resources or materials, as well as all the other factors that
affect physical and functional measures of productivity. However, understanding
economic productivity is essential to making good decisions about outsourcing
and subcontracting. The basic calculation of economic productivity is as follows:
Economic productivity = Value of product produced / Cost of resources consumed
Ideally, the revenue stream resulting from a software product represents its
value to the customer. That is, the amount that the customer is willing to pay
represents its value. Unfortunately, the amount of revenue can only be known
when the product has finished its useful life. Thus, the value must be estimated
in order to compute economic productivity, taking into consideration all the
factors affecting the customer’s decision to buy. Thus, estimated value is a function of functionality, price, timeliness, and quality.
Poor quality may result in warranty and liability costs that neutralize revenue.
Similarly, time must be considered when determining the economic value of a
product - a product which is delivered late to a market will miss sales
opportunities. Thus, the amount of revenue returned by it will be adversely
affected. Consequently, the calculation of value for economic productivity must
include timeliness and quality, as well as price and functionality.
Note that this definition of economic productivity does not take into
consideration the “cost to the
developer” of producing the product. Whether or not a product can be produced
for a cost less than its value (expected sales), is another important, but different
topic.
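The points above can be sketched as a value estimate discounted for late delivery and for warranty costs from poor quality, divided by resource cost. All parameter names and figures here are hypothetical illustrations, not a standard formula:

```python
def estimated_value(expected_revenue, months_late=0,
                    monthly_sales_loss=0.0, warranty_costs=0.0):
    """Estimate product value from expected revenue, reduced by sales
    lost to late market entry and by warranty/liability costs arising
    from poor quality. All inputs are hypothetical figures."""
    timeliness_factor = max(0.0, 1 - months_late * monthly_sales_loss)
    return expected_revenue * timeliness_factor - warranty_costs

def economic_productivity(value, resource_cost):
    """Economic productivity: value produced per unit of cost."""
    return value / resource_cost

# A product expected to earn 2.0 M, delivered 3 months late (losing
# 5% of sales per month) with 100 k in warranty costs, built for
# 1.2 M, yields an estimated value of about 1.6 M.
value = estimated_value(2_000_000, months_late=3,
                        monthly_sales_loss=0.05,
                        warranty_costs=100_000)
ep = economic_productivity(value, 1_200_000)
```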
The software development team cannot choose to develop less software than
necessary to do the job, build a different application than the customer ordered,
or ignore customer change requests in the pursuit of higher productivity.
Consequently, adjustments must be made for these inherent factors when comparing productivity results from different projects. While quality at delivery is a controllable factor, the eventual quality required by the customer is not, so adjustments should also be made for post-delivery repair.
Summary
Factor                 Impact (%)    References
Requirement Changes    10 to 40      [8, 9, 10]
Diseconomy of Scale    10 to 20      [8]
Software Reuse         40 to 60      [8, 9]
However, one has to be aware that not all technical change translates into MFP growth. An
important distinction concerns the difference between embodied and
disembodied technological change. The former represents advances in the
design and quality of new vintages of capital and intermediate inputs and its
effects are attributed to the respective factor as long as the factor is
remunerated accordingly. Disembodied technical change comes “costless”, for
example in the form of general knowledge, blueprints, network effects or
spillovers from other factors of production including better management and
organisational change. The distinction is important from a viewpoint of analysis
and policy relevance.