
Proceedings of the 40th Hawaii International Conference on System Sciences - 2007

Measuring Knowledge Management Projects: Fitting the Mosaic Pieces Together
Alton Y.K. Chua
Division of Information Studies
Nanyang Technological University

Dion Goh
Division of Information Studies
Nanyang Technological University

altonchua@yahoo.com, altonchua@ntu.edu.sg

ashlgoh@ntu.edu.sg

Abstract
This paper seeks to develop an integrated
perspective on knowledge management (KM) project
measurement. Based on a review of the existing
literature, a theoretical framework which identifies
four distinct measurement elements, namely, activities,
knowledge assets, organizational processes and
business benefits is proposed. Using this framework,
an empirical study on KM measurement was
conducted among six Singapore Civil Service
agencies. It was found that most KM projects were driven top-down, were technology-focused and had some form of milestones specified along their development stages. Additionally, all four measurement elements identified in the theoretical framework could be found across the measurement schemes used in the public agencies. Two implications can be drawn from the findings. One, the context under which a KM project is conceived should be carefully considered when specifying indicators for measurement. Two, to balance the needs of all stakeholders, the use of all four measurement elements to measure a KM project is advocated.

1. Introduction
Knowledge Management (KM) projects have proliferated across organizations around the world, as evidenced by the substantial increase in corporate spending on KM initiatives in recent years [15]. The rising interest in KM is largely fuelled by the realization that knowledge is the source of competitive advantage for any organization seeking to thrive in the knowledge-based economy.
The benefits yielded from KM projects have been widely reported. For example, Buckman Laboratories' KM efforts helped push new product-related revenues up 10 percentage points; Texas Instruments generated $1.5 billion in annual free wafer fabrication capacity by transferring best practices among its existing fabrication plants [21]. Dow Chemical saved $4 million during the first year of its new KM program and was expected to generate more than $100 million in licensing revenues [5]. However, the bases on which these monetary amounts were derived or approximated have largely been concealed. The question of how KM projects can be robustly and comprehensively evaluated remains to be addressed.
Attempts to measure KM projects have been fraught with difficulties. For one, the quantification of the benefits reaped from KM projects is a complex undertaking. In addition, while KM measurement models are plentiful, the literature on KM measurement has yet to reach a stage of maturity in which common elements of measurement and standardized methodologies have been developed and widely adopted. The confusion among terms such as KM project management and intellectual capital measurement has further made the process of measuring KM projects arduous.
For these reasons, this paper seeks to develop an
integrated perspective on KM project measurement
based on a review of the existing literature. The aim is
to elucidate the various elements associated with a KM
project that can be measured. An empirical study on
KM project measurement was conducted among public
sector agencies in Singapore. Practitioners who wish to consider measurement in conjunction with the implementation of a KM project may take their bearings from the study. A follow-up study to this paper is also briefly described to seed some ideas for scholars who are interested in charting new frontiers in KM measurement research.


2. What are KM projects

It is difficult to discuss KM project measurement without first defining what KM projects are. Hence, for the purpose of this paper, KM projects refer to any deliberate interventions intended to enhance the distinctive capability of the organization through a systematic approach to leveraging knowledge. Unlike engineering, IT or construction projects, which usually have a fixed life-span, KM projects may have a fuzzy beginning and could perpetuate indefinitely. They range from those largely driven by technology to those that rely minimally on technology. For example, Xerox's Eureka [21], Hughes Software Systems' knowledge repositories [8] and Ford Motor's MyFord.com are KM projects supported extensively by intranet systems. On the other end of the spectrum, the location-based communities of practice at Eli Lilly [23] and the After Action Reviews practiced in the U.S. Army [9] are KM projects sustained primarily through face-to-face interactions. Across organizations, KM projects attract varying degrees of legitimacy and leadership support. For example, the KM efforts at InfoSys Technologies [11] and Ford Motors [20] were implemented top-down. Electronic Data Systems adopted a combination of top-down and bottom-up approaches to implement its KM projects [24], while Caterpillar's Knowledge Network started as a grassroots-led activity [2].

KM projects comprise both mechanistic and organic dimensions [1]. On the mechanistic dimension, KM projects are chartered by articulated objectives to address organizational issues, managed along a planned schedule and confined within budgetary limits. On the organic dimension, KM projects entail a combination of the creation, transfer and reuse of knowledge, all of which are processes heavily social in nature because knowledge itself is socially constructed. Notwithstanding the extensive deployment of technology, KM projects educe either altruism or self-seeking behaviors in people. As a result, relationships could be forged or estranged. Also, when circumstances and priorities change, KM projects may meander into new directions. Hence, not only tangible but also intangible and unexpected outcomes are reaped beyond the original intentions. For example, at Ricoh, a manufacturer of office automation equipment, a simple online newsletter-based knowledge community evolved over time and spawned more than 300 business-related communities throughout the company [33].

3. KM project measurement elements

The impetus to implement KM projects is invariably driven by their alleged promises and business benefits, such as better management of customer relations, internal process improvement, products and services integration, cost reduction and profit generation. However, given that KM projects comprise both mechanistic and organic dimensions, measuring KM projects strictly on the attainment of their intended objectives is single-faceted and paints an incomplete picture. Such an effort denies the organic dimension of the projects. Conversely, measuring KM projects based solely on intangibles such as staff morale and organizational climate is equally misguided. It divorces the projects from business imperatives and diminishes their relevance to organizations. Hence, any reliable KM project measurement scheme ought to take into consideration both dimensions of the projects. A review of the literature reveals at least four overlapping elements of a KM project which can be considered for measurement. They are the activities generated from the project, the stock of knowledge assets created or enhanced, the impact on organizational processes and, finally, the business benefits realized.

3.1. Measuring activities


Most, if not all, KM projects centre on some form of activities, such as posting entries into a knowledge repository, retrieving useful materials from a portal and attending face-to-face sharing sessions. Measuring the activities generated from a KM project provides a quantitative indication of the project's impact at various levels, such as individuals, workgroups, processes and the organization.
The Department of the Navy in the US, for example, developed several types of metrics to measure activities in its KM projects [10]. System metrics seek to approximate the usefulness and responsiveness of the supporting technology tools. Examples of system metrics include the number of downloads, number of site accesses, dwell time per page or section and frequency of use. Output metrics measure initiative-level characteristics such as the effectiveness of lessons-learned information, user ratings of contributions posted, number of problems solved and usage anecdotes in which users describe, in quantitative terms, how the initiative has contributed to business objectives.


Along a similar vein, KM measurement efforts at Intel involve a combination of behavior and outcome measures [22]. Behavior measures include frequency of sharing knowledge, satisfaction with sharing processes and technology, and frequency of being rewarded for sharing. Outcome measures include time taken to access needed information, time taken to locate experts and time taken to solve problems.

3.2. Measuring knowledge assets


Knowledge assets, sometimes also known as intangible assets or intellectual capital, refer to the complete set of knowledge embodied in the organization's systems, processes and people. Intellectual capital is commonly conceived as a composite of human capital, structural capital and customer capital [29]. Intellectual capital measurement seeks to provide a snapshot view of the current stock of knowledge assets by valuing them in either monetary or non-monetary terms. Increasingly, the knowledge assets an organization possesses are being acknowledged as a significant contributing factor to its competitiveness and worth. When IBM acquired Lotus Development in 1995, it paid $3.5 billion, which was 14 times Lotus's book valuation of $250 million [6]. The $3.25 billion premium IBM paid represents its recognition of Lotus's intangible assets. Hence, in measuring a KM project, it is also useful to account for the knowledge assets created or enhanced.
Some intellectual capital measurement approaches, such as the Skandia Navigator, the Intellectual Capital-Index, the Technology Broker and the Intangible Asset Monitor, have been individually described with their strengths, weaknesses and operationalizations [3]. Attempts have also been made to provide an aggregated view of many of these models by clustering them into four categories, namely, Direct Intellectual Capital (DIC) methods, Market Capitalization (MC) methods, Return on Assets (ROA) methods and Scorecard (SC) methods [30]. DIC methods attempt to identify the various components of intangible assets and offer a dollar valuation for each. An example of a DIC method is the Technology Broker [4]. MC methods assume that the value of intellectual capital is the difference between an organization's market value and the book value of its net assets. An example of an MC method is Tobin's q, where 'q' is the ratio of the market value of the firm to the cost of replacing its assets. ROA methods seek to estimate the value of an organization's intangible assets on the basis of the organization's average profits, average tangible assets and the industry's average ROA over a fixed period of time. Knowledge Capital Earnings [19] is an example of an ROA method. SC methods are similar to DIC methods except that the intangible assets are not estimated in monetary terms. It must be noted that while all four types of methods are firm-level measures, SC methods have commonly been used as project-level measures. For example, RWE Thames Water, one of Europe's largest multi-utilities, uses the Balanced Scorecard approach, which is an SC method [16], to monitor and measure the progress of its KM project [13].
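To make the MC and ROA logic described above concrete, the firm-level quantities can be sketched as follows. This is a generic illustration rather than the exact formulation of any single cited method; the symbols (MV for market value, BV for book value of net assets, RC for asset replacement cost, E for average profits, TA for average tangible assets, r for a capitalization rate) are introduced here for convenience, and the averaging windows and capitalization rates differ across specific methods such as Knowledge Capital Earnings [19].

```latex
\begin{align*}
\mathit{IC}_{\mathrm{MC}} &= \mathit{MV} - \mathit{BV}
  && \text{(MC view: intellectual capital as market value less book value)}\\
q &= \frac{\mathit{MV}}{\mathit{RC}}
  && \text{(Tobin's $q$: market value over replacement cost of assets)}\\
\mathit{IC}_{\mathrm{ROA}} &\approx \frac{E - \mathit{ROA}_{\mathrm{industry}} \times \mathit{TA}}{r}
  && \text{(ROA view: excess earnings over the industry norm, capitalized)}
\end{align*}
```

Under the MC view, for instance, the IBM-Lotus transaction cited earlier corresponds to the $3.5 billion paid less the $250 million book valuation, giving the $3.25 billion premium attributed to intangible assets.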

3.3. Measuring impact on organizational processes

The third measurement element of a KM project is the impact that the KM project exerts on organizational processes. Organizational processes refer broadly to areas such as leadership effectiveness, technology utilization and culture. Tying a KM project to its impact on organizational processes reinforces the strategic importance of the project to the organization, particularly if significant achievements have been made. The magnitude of impact is perceived to be a reflection of the developmental status of the KM project. Thus, instruments that measure impact on organizational processes are often administered more than once across a defined time interval. They also form the foundation for many KM maturity models.
As a preliminary attempt to gauge the status of an existing or an impending KM project, an instrument has been developed to measure four broad organizational processes: (1) organizational environment, (2) technical and managerial support, (3) strategy and goals and (4) utilization of knowledge and technology [14]. Each process is further broken down into key areas for finer examination. For example, the organizational environment encompasses social factors, culture, incentives and trust, while technical and managerial support includes areas such as organizational structure, awareness and commitment. Each area could attract responses on a Likert scale. Thus, the instrument allows the effect of a KM project on various aspects of organizational progress to be examined.
The American Productivity and Quality Center has developed the Knowledge Management Assessment Tool (KMAT) that examines four areas, namely, leadership, technology, culture and measurement. Organizations are able to use KMAT to analyze how well they perform in these areas vis-à-vis other organizations which have been scored against KMAT [12]. With a time-series design, KMAT could shed light on the specific organizational progress achieved through a KM project.
Kochikar proposes a model which defines five progressive levels of how knowledge is being leveraged within the organization [18]. First is the Default Level, where knowledge is fragmented in isolated pockets in the organization. Second is the Reactive Level, where knowledge is shared purely on a need basis. Next is the Aware Level, which marks the beginning of an integrated approach to managing knowledge in the organization. Fourth is the Convinced Level, where organization-wide knowledge-sharing systems have been put in place. Finally, at the Sharing Level, the culture of sharing becomes institutionalized and the organization has the capacity to leverage knowledge for business advantage. KM project success is thus measured by an upward movement along these levels.

3.4. Measuring business outcomes

The final measurement element involves the actual business outcomes a KM project brings to the organization. Business outcomes, which comprise both monetary and non-monetary ones, include profit generated, cost savings, reduction in cycle time and improvement in customer satisfaction. They represent the ultimate goals which the KM project was originally intended to serve. In the case of the Samsung Advanced Institute of Technology, an R&D arm of the Samsung Group, for example, the business objectives are oriented not towards profit-and-loss but towards technological innovation [27]. Hence, business performance is measured in non-financial terms such as the quantity and quality of patents developed and product commercialization potential.
However, due to the usually fuzzy nature of KM projects and the presence of extraneous factors such as external business conditions and internal leadership transitions, establishing a convincing association between the KM project and the business outcomes remains a challenge. One approach used to determine the business benefits of a KM project is to collect systematic anecdotal evidence [32]. At Aventis Pharmaceuticals, anecdotes were elicited from individuals through a series of interview questions, including "What was the impact of the various KM activities?", "What would or would not have happened without these activities?", "How much time was saved?" and "How certain are you that these activities were responsible for the benefit added?" [26]. For example, once the amount of time saved in drug development had been ascertained, it was expressed in monetary terms by approximating the additional revenue (AR) generated due to the reduction in time-to-market. Thereafter, the probability (p%) that the time saved was due to the KM project was estimated. The actual business benefit reaped from the KM project, expressed in cost savings, is computed by multiplying AR by p%.
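The computation described for Aventis reduces to a simple product, sketched below. The numbers are purely hypothetical and are not reported in [26]; they only illustrate how the two estimates combine.

```latex
\[
\text{Benefit} = \mathit{AR} \times p
\qquad \text{e.g., } \mathit{AR} = \$10\,\text{million}, \; p = 40\% \;\Rightarrow\; \text{Benefit} = \$4\,\text{million}.
\]
```

In practice, the credibility of the resulting figure rests on how defensibly AR and p are estimated from the interview anecdotes.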

4. Lagging and leading measurement elements

Indicators used to measure a KM project could be either lagging or leading [7]. Lag indicators are tangible outcomes reaped from the KM project. The following list of suggested indicators to measure the success of a KM project represents lag indicators [5]: growth in the resources allocated to the project, such as people and finance; growth in the volume of knowledge content and usage; expansion of the scope of the project from an individual level to an organizational level; and some evidence of financial return generated from the KM project. Other examples of lag indicators which are tied more directly to business objectives include cost avoidance, time-to-market, additional profit generated and customer satisfaction.
Lead indicators, on the other hand, are underlying performance drivers that are precursors to other activities or outcomes. They are predictive and forward-looking in their time orientation. Examples of lead indicators include staff competency, knowledge contribution, level of trust and interaction among staff.
With reference to the four KM project measurement elements presented earlier, activities, organizational processes and business outcomes are lagging indicators while knowledge assets are leading indicators. Table 1 summarizes the four KM project measurement elements described.
Table 1: Summary of KM project measurement elements

| Measurement element | Description | Indicator type | Examples |
| --- | --- | --- | --- |
| Activities | Activities, expressed in quantitative terms, which are generated from the KM project. | Lagging | Number of downloads, number of site accesses, dwell time per page [10]. |
| Knowledge assets | The complete set of knowledge held in an organization as embodied in the organization's culture, systems, processes and people. | Leading | Human capital, structural capital, customer capital [29]; market assets, human-centered assets, intellectual property assets and infrastructure assets [4]. |
| Organizational processes | The impact on organizational processes brought about by the KM project. | Lagging | Organizational environment, technical and managerial support, strategy and goals, and utilization of knowledge and technology [14]. |
| Business outcomes | The ultimate goal which the KM project is intended to serve. | Lagging | Technological innovations [27]; time savings [26]. |
5. KM at the Civil Service


The intense interest in KM among civil service organizations in Singapore can be traced back to 1995, when the government mooted and led a movement named Public Service for the 21st Century, or PS21 in short. The objectives of the movement were to nurture an attitude of service excellence in meeting the needs of the public with high standards of quality, courtesy and responsiveness, and to foster an environment which induces and welcomes continuous change for greater efficiency and effectiveness by employing modern management tools and techniques while paying attention to the morale and welfare of public officers [25]. One of the main thrusts of PS21 prescribed broad directions for using technology and KM to build capabilities and capacities.
To encourage experimentation with KM projects among public agencies, with a view to subsequent larger-scale KM deployment, the Knowledge Management Experimentation Program (KMEP) was conceptualized in July 2001. Through the KMEP, funding is available for participating public agencies that intend to implement KM projects. Additionally, a KM consulting unit was set up to provide a range of KM-related training and consultancy services to the public sector [17]. Such directed efforts from the government have resulted in a number of sustained KM projects across the public agencies.

6. Methodology
Response to the KMEP was encouraging and various KM projects were initiated soon after. However, information about the evaluation of these projects or the benefits yielded by them has not been reported in the KM literature. Our study is therefore timely and seeks to uncover the measurement elements used in these projects.
Fifteen public agencies were invited to participate in this study, which spanned June 2005 to December 2005. These agencies represented a broad spectrum of services offered to the public and included those in housing, law, finance, social security, education, defense and homeland security. These agencies had either been awarded the prestigious Singapore Quality Award [28] in recognition of their investment in KM infrastructure and resources or had been registered as participating agencies in the KMEP. Of the 15, only six agreed to disclose details about their KM projects.
Data were collected through a multi-case study approach. The case-study method was used because of the exploratory nature of this study, the small sample size involved and the need to capture contextual details. In each agency, a senior staff member who held the designation of Director or above was asked to identify the most significant KM project undertaken, as well as to nominate staff who were involved with that project. On average, two staff members from each agency were contacted and interviewed. The semi-structured interviews sought to solicit the following details: (1) a description of the KM project, including its rationale, key champions, focus and objectives, (2) the current status of the project and (3) the ways in which the project has been measured. The responses were triangulated against archival records such as websites, press releases and other in-house publications provided by the agencies. Only consistent data were admitted and analyzed.

7. Findings
Three main findings emerged from the study of the
six public service agencies that agreed to participate.
First, only in one agency was the KM project conceived and driven from the bottom up. The rest were championed either from the top or by a steering committee appointed by the senior management. A KM champion from Agency A revealed:
"Our KM project is strategically aligned to the long-term aim of transforming the organization into an adaptive knowledge enterprise, where information and knowledge is well utilized to increase business efficiency and effectiveness."
An officer from Agency B elaborated:
"The KM project was launched with a big fanfare. Since then, people from the top are watching this KM project very closely. So everyone is kept on their toes, particularly in meeting the targets specified by the KPIs (key performance indicators)."
Some key performance indicators mentioned included the ratio of customers' complaints to compliments and the percentage of service quality fulfilled.
Moreover, in all but one agency, the KM projects had a heavy technology focus. These projects took the form of intranet, workflow management and content management systems. The only project that relied minimally on technology involved After Action Reviews conducted via face-to-face meetings. An officer from this agency added:
"Basically, through the KM project, we would like to nurture a knowledge-sharing mindset among the staff. Hopefully, we could learn from each other's mistakes and reduce reinvention of the wheel."
Next, most KM projects had some form of milestones and deliverables specified along their development stages. For example, in one KM project, one of the early goals was to achieve a certain percentage of re-visits by registered users. As the project progressed, the goals were broadened to encompass the building of a virtual learning community among different stakeholders and the inculcation of the right values in a targeted user group. Furthermore, all KM projects were reportedly progressing as planned. A Strategic Planning Manager from Agency C explained:
"So far, the KM project has proved to be very useful in providing inputs for strategic planning and budgeting cycles. It has also helped streamline several policy-making processes."
Additionally, the KM projects in two agencies were expected to be expanded in scope and resources. This was because they had been shown to deliver the intended results and had generated significant optimism among the management. For example, in one of these projects, the KM system proved to be efficient in helping staff make routine statutory submissions. An officer from Agency D shared the benefits of the KM project:
"In the past, we used to spend days trying to pull data from so many sources... but it is only a matter of a few clicks to put together the report to be submitted to the Ministry."
Third, in the measurement scheme adopted across the agencies, all four elements of KM measurement, namely, activities, knowledge assets, organizational processes and business outcomes, could be found, as shown in Table 2. Furthermore, almost all the indicators cited by the agencies were quantitative in nature.

Table 2: KM project measurement elements in the Singapore Civil Service

| Measurement element | Sample indicators |
| --- | --- |
| Activities | Number of KM events organized, number of online discussion threads, hit rates of websites, number of hours spent on KM activities, number of online contributors, number and growth of documents published on the intranet or to the knowledge bases/portals, number of employees championing KM |
| Knowledge assets | Balanced Scorecard, Singapore Quality Award framework |
| Organizational processes | Change readiness of staff, managerial support, strategies and goals, propensity for knowledge sharing, availability of technology, use of technology |
| Business outcomes | Savings from IT-based KM tools, Economic Value Added savings, ratio of customers' complaints to compliments, percentage of service quality fulfilled, percentage of public suggestions implemented |

In pursuing formal recognition of organizational excellence, some public agencies aspired to obtain the Singapore Quality Award (SQA). The award is built on a framework whose assessment categories include visionary leadership, customer-driven quality, innovation focus, organizational and personal learning, valuing people and partners, agility, knowledge-driven system, societal responsibility, results orientation and systems perspective [28]. Incidentally, four agencies found the SQA framework relevant to their KM projects and had used it to measure their knowledge assets. This could be attributed to the fact that the SQA not only highlights the various aspects of running an agency succinctly but also uses a scoring system that allows an agency to be internally and externally benchmarked.

8. Discussion and conclusion


On the basis of the findings, it can be observed that KM measurement is not an isolated notion but is intricately tied to the nature of the KM projects, which in turn are situated in the larger contexts of why and how the projects were conceived in the first place. In addition, the choice of indicators as well as the benchmarks prescribed in a KM measurement scheme have the potential to engender certain behaviors from staff, whether intended or otherwise. In the Singapore Civil Service, KM was seen as part of an IT strategy through which the objectives laid down by PS21 could be achieved. Given PS21's high visibility, it was little wonder that participation from the senior management in leading KM efforts was strong in most of the agencies. Moreover, an IT-centric approach to implementing KM projects meant a distinctive emphasis on the deployment of technology solutions.
Thus, KM projects which were driven top-down and IT-centric were invariably found to possess three characteristics. One, they were closely monitored by the senior management. Two, they were tightly aligned to the strategic goals of the agencies. Three, they were almost always measured in quantitative terms. While these characteristics are not inherently bad, they may potentially foster a narrowly focused mindset of achieving the given targets but discourage further activities or initiatives that go beyond what is currently measured [31]. In contrast, the only KM project driven from the bottom up did not attract constant attention from the management. Its measurement scheme tended to be evolutionary and the metrics chosen were more qualitative. In particular, anecdotes in the form of success stories and experiences were used as evidence of the impact of the KM project.
Another issue that emerged from the study concerns the characteristics of measurement indicators. The indicators found under activities, knowledge assets and organizational processes appeared to be rather generic and can be readily adopted by any KM project in any industry. Indicators under business outcomes, however, were context-specific. In this study, these indicators, such as savings, the ratio of customers' complaints to compliments and the percentage of public suggestions implemented, reflect the non-profit nature of government agencies.
In summary, several implications can be drawn from the findings. First is the importance of context. The context under which a KM project is conceived should be carefully considered when specifying indicators under each KM measurement element. Ultimately, the success of a KM project is measured by the indicators used, and the selection of inappropriate indicators may jeopardize a project even before it has begun. Based on the empirical data, it appears that management-driven initiatives favor quantitative indicators. Perhaps the use of such indicators facilitates resource allocation, since hard numbers are more easily accepted as objective quantifications of a KM project's benefits. On the other hand, organically driven projects were found to use more qualitative indicators, possibly reflecting the fact that success in this context is measured more by the intrinsic benefits to the individuals involved.
Next, KM practitioners ought to be aware of the possible range of indicators available, as KM projects are multi-faceted and involve many stakeholders. Even though they could be collectively bound by a common, over-arching organizational priority, different stakeholders may harbor their own interests and agendas. Thus, to adequately serve the needs of all the stakeholders and to paint as complete a picture of the status of a KM project as possible, each of the four aspects of the KM project presented earlier in Table 1 has to be collectively considered for measurement. Activities associated with a KM project ought to be measured to reflect the actual work done on the ground. Knowledge assets ought to be measured to ascertain the KM project's impact on the intellectual capital of the organization. Organizational processes ought to be measured to assess the influence of the KM project on the climate and other deep-seated organizational routines. Business outcomes ought to be measured to justify the objectives for which the KM project was originally intended. In this way, the measurement scheme developed would encompass both leading and lagging indicators. Furthermore, where possible, indicators of both a quantitative and a qualitative nature could be represented.
Put succinctly, the goal of KM project measurement is to ascertain the benefits of a given project. This paper has advocated the use of four measurement elements as a basis for providing a comprehensive evaluation of a KM project. Nevertheless, there is work to be done in conflating these measures into an integrated measurement model. As part of ongoing work, we are developing an empirical framework to evaluate the performance of KM projects using the four KM measurement elements. A crucial aspect of this framework is the classification of KM indicators into generic ones, which can be used in any KM project, and specific indicators, which have to be crafted to individual organizations' business outcomes. It is hoped that through our present initial work, more KM scholars can be encouraged to contribute their ideas in unraveling the complexity of measuring KM projects.

10. References
[1] Ahmed, P.K., Lim, K.K. and Loh, A.Y.E. (2002). Learning through Knowledge Management, Butterworth-Heinemann, Boston.
[2] Ardichvili, A., Page, V. and Wentling, T. (2003). Motivation and barriers to participation in virtual knowledge-sharing communities of practice, Journal of Knowledge Management 7(1): 64-77.
[3] Bontis, N. (2001). Assessing knowledge assets: a review of the models used to measure intellectual capital, International Journal of Management Reviews 3(1): 41-60.
[4] Brooking, A. (1996). Intellectual Capital: Core Assets for the Third Millennium Enterprise, Thomson Business Press, London, United Kingdom.
[5] Davenport, T.H., De Long, D.W. and Beers, M.C. (1998). Successful knowledge management projects, Sloan Management Review 39(2): 43-57.
[6] Davenport, T.H. and Prusak, L. (1998). Working Knowledge: How Organizations Manage What They Know, Harvard Business School Press, Boston.
[7] del-Rey-Chamorro, F.M., Roy, R., van Wegen, B. and Steele, A. (2003). A framework to create key performance indicators for knowledge management solutions, Journal of Knowledge Management 7(2): 46-62.
[8] De, A. and Sathyavgeeswaran, R. (2003). KM at Hughes Software Systems: certification, collaboration, metrics, in Leading with Knowledge, M. Rao (ed), Tata McGraw-Hill, New Delhi.
[9] Dixon, N.M. (2000). Common Knowledge, Harvard Business School Press, Boston, MA.
[10] DON (2001). Metrics Guide for Knowledge Management Initiatives, http://www.km.gov/documents/DoN_KM_Metrics_Guide_Aug_01.pdf, retrieved 1 Oct 2002.
[11] Garud, R. and Kumaraswamy, A. (2005). Vicious and virtuous circles in the management of knowledge: the case of Infosys Technologies, MIS Quarterly 29(1): 9-33.
[12] Hiebeler, R.J. (1996). Benchmarking: knowledge management, Strategy & Leadership 24(2): 22-29.
[13] Hemmings, P. and Potter, R. (2004). Proving business value with a Balanced Scorecard, KM Review 7(3): 11.
[14] Iftikhar, Z., Eriksson, I. and Dickson, G. (2003). Developing an instrument for knowledge management project evaluation, Electronic Journal of Knowledge Management 1(1), paper 7, retrieved 3 Oct 2004, http://www.ejkm.com/volume1/volume1-issue1/issue1-art7.htm.
[15] Ithia, A. (2003). UK lawyers spend more on KM, KM Review 5(6): 11.
[16] Kaplan, R.S. and Norton, D.P. (1992). The balanced scorecard: measures that drive performance, Harvard Business Review 70(1): 71-78.
[17] Knowledge Management Consulting Unit (2003). Civil Service College Publication.
[18] Kochikar, V.P. (2000). The knowledge management maturity model: a staged framework for leveraging knowledge, KM World 2000, Santa Clara, CA, 12-15 September 2000.
[19] Lev, B. (1999). Seeing is believing: a better approach to estimating knowledge capital, CFO Magazine, April 2000, retrieved 1 Oct 2002, http://207.87.9.12/html/charts/99FEseei-2.html.
[20] McDermott, R. and O'Dell, C. (2001). Overcoming cultural barriers to sharing knowledge, Journal of Knowledge Management 5(1): 76-85.
[21] O'Dell, C. and Grayson, J. (1998). If only we knew what we know: identification and transfer of internal best practices, California Management Review 40(3): 154-174.
[22] Parise, S., Wolfe, J., Wilson, S. and Abrams, L. (2004). Building the business case for a knowledge initiative, IBM Institute for Business Value study, published 5 Jan, retrieved 7 July 2005, http://www-1.ibm.com/services/de/bcs/pdf/strat/entw-businesscase.pdf.
[23] Plaskoff, J. (2003). Creating a community culture at Eli Lilly, KM Review 5(6): 16-19.
[24] Prasad, B. and Nadessin, J. (2003). Continuous knowledge-based innovation at EDS, in Leading with Knowledge, M. Rao (ed), Tata McGraw-Hill, New Delhi.
[25] PS21 (2005). PS21: PS21 Framework, http://www.ps21.gov.sg.
[26] Rush, D. (2002). Measuring connectivity at Aventis Pharmaceuticals, KM Review 5(2): 10-13.
[27] Sohn, J.H.D. (2004). Measuring KM contribution to R&D at Samsung, KM Review 7(2): 6-7.
[28] SPRING (2006). Singapore Quality Award for Business Excellence, retrieved 3 Jun 2006, http://www.spring.gov.sg/portal/products/awards/sqa/sqa.html.
[29] Stewart, T.A. (1997). Intellectual Capital: The New Wealth of Organizations, Nicholas Brealey, London.
[30] Sveiby, K.E. (2004). Methods for Measuring Intangible Assets, retrieved 6 Jan 2005, http://www.sveiby.com/articles/IntangibleMethods.htm.
[31] Voelpel, S.C., Leibold, M., Eckhoff, R.A. and Davenport, T.H. (2004). The tyranny of the Balanced Scorecard in the innovation economy, Journal of Intellectual Capital 7(1): 43-60.
[32] Wenger, E.C., McDermott, R. and Snyder, W.M. (2002). Cultivating Communities of Practice, Harvard Business School Press, Boston.
[33] Yamazaki, H. (2004). East meets west in Japanese communities, KM Review 7(2): 24-27.
