1. Introduction
The benefits of performance measurement have been widely studied (e.g. Franco-Santos et al., 2012), but there is a lack of understanding of why the full potential of measurement is rarely exploited in practice (Bourne et al., 2005). While there are clearly also problems in technical measurement aspects (e.g. the validity of measures) (Tung et al., 2011), it has been noted that problems often occur in the use of performance measurement (Bourne et al., 2005; Stivers et al., 1998). Concurrently, the maturity of utilizing performance measurement has been linked to an organization's capabilities and performance-driven behavior (Bititci et al., 2011; Bourne et al., 2005; Ittner et al., 2003), laying the ground for the need to gain a more in-depth understanding of the status of using measurement for different purposes.
There are many studies on the status of performance measurement in different contexts. These studies (e.g. Lts et al., 2011; Tangen, 2005) often focus on the objects of measurement (e.g. the balance of the measures used) or on technical aspects (i.e. the types of measures or measurement frameworks used) rather than on utilizing performance information (PI) in management. There are also studies assessing the maturity of performance measurement and management in organizations (e.g. Aho, 2012; Bititci et al., 2012; Cocca and Alberti, 2010; Van Aken et al., 2005; Wettstein and Kueng, 2002). Various models and tools have been proposed for both managerial and academic purposes. They deepen the understanding of current measurement practices and can be utilized in pinpointing common development areas. However, the work on maturity assessment is still in progress. For example, the practical application of the models is often limited (Marx et al., 2012).
This paper designs and tests a model for profiling the maturity of performance management (PM) in organizations. The study follows the design science approach. First, existing maturity models are reviewed, their shortcomings are identified and a new classification of critical variables extracted from the literature is presented. Second, measurement scales are defined for the variables identified. Third, the maturity analysis tool is administered as a survey, receiving 271 responses. The tool is evaluated by means of qualitative open-ended responses and statistical analysis. Analysis of variance is used to review maturity profiles based on the empirical material gathered.
2. Methodology
This paper applies the design science approach, which aims to develop knowledge that can be used in solving problems in the field (Andriessen, 2004; Van Aken, 2007). In design science, the effectiveness and validity of the solutions are evaluated not only by the researchers but also by the users in the field of application. As the ultimate aim of this research is to construct a model to profile organizations according to their PM and to give guidelines for future development, design science was a natural way to pursue this aim.
There are detailed instructions for constructing a maturity model with a design science approach (e.g. de Bruin et al., 2005; Maier et al., 2012). The six main phases by de Bruin et al. (2005) are presented in Figure 1. This study follows the first five of these steps. The maintain phase deals with the ongoing usage of a model, which is not within the scope of this paper.
In the first phase, the scope and target population of the model are defined. In this study, the distinctive scope of the
new model was linked to the usage of performance measurement. A further aim was to develop a model which can
be deployed in many different organizations without specified limitations.
The second phase includes the definition of evaluation variables and execution plans. Evaluation variables can be identified either analytically (top-down) or by combining the existing literature (bottom-up). De Bruin et al. (2005) and Maier et al. (2012) agree that the bottom-up approach works well when the variables thought to represent maturity are known. As a substantial literature on the topic already exists, the bottom-up approach was chosen in this study and the evaluation variables were first identified from the appropriate literature. The evaluation variables identified were classified into two main perspectives: performance measurement systems (PMS); and the usage of PI. This work is described in more detail in Section 4.1. It was decided to carry out the actual execution of the maturity analysis as a self-evaluation survey addressed to managers and administrative experts assumed to have sufficient knowledge of existing PM practices.
In the third phase, the main content of the model is defined. In this study, this meant developing an appropriate survey instrument with maturity levels describing various ways of operating for each of the evaluated variables. This phase is described in more detail in Section 4.2. In the fourth phase the model undergoes its first trial testing. In our study, a web-based survey solution was designed. It was further developed through three different tests involving researchers and practitioners. This phase is reported in more detail in Section 4.3.
The final phase of this study was the deploy phase, in which the model was tested with the target population. In this phase, this study differed from the de Bruin et al. (2005) process, since the gathered empirical material was utilized in the further definition of the maturity profiles and hence the content of the model. This task required the use of a survey instead of interviews in order to improve the external validity of the profiles. The model was evaluated for its validity and reliability to represent measurement rigor (Emory, 1985, p. 94) and for its practicality and relevance to represent managerial relevance (Matta, 1989, p. 67). This phase is described in Sections 5.1-5.4.
3. Literature review
Considering the implementation of the models, on the one hand there are academic models with more aspects, often leading to a large set of survey questions (Aho, 2012; Tung et al., 2011). In contrast, there are models with a consultative background (Balanced Scorecard Institute, 2010; Brudan, 2009; Dresner, 2010) that typically present a concise list of assessment factors (up to ten), which are then evaluated by using grades with written descriptions fitting on one A4 page.
The scope of the models likewise varies a great deal. There are models intended to assess the maturity of
performance measurement (Cocca and Alberti, 2010; Van Aken et al., 2005; Wettstein and Kueng, 2002). They
discuss relevant aspects in performance measurement (such as the content of measurement and the measurement
process). Then there are models that are more focussed on assessing PM as a part of management (Aho,
2012; Bititci et al., 2012). These models investigate aspects such as performance-oriented culture, strategic
orientation and performance reporting.
Despite the growing number of studies, the work on the maturity assessment of PM is ongoing. The number of actual
maturity models (containing maturity model framework, maturity model assessment instrument and maturity profiles)
is limited and none of these models are established or overarching. Three main research gaps can be identified.
First, many models concentrate on the design of PMS, paying less attention to measurement usage (Bititci et al.,
2012). Issues related to measurement usage are often limited to specific aspects such as business intelligence,
analytical capability, frequency of reporting, communication of measurement results, and reviewing measures rather
than actual managerial processes and tasks (Aho, 2012; Wettstein and Kueng, 2002).
Second, many models do not provide overall maturity profiles based on an empirical data set (Marx et al.,
2012; Wettstein and Kueng, 2002). This appears to be caused by the limited empirical testing of the models. In their
review, Marx et al. (2012) noted that only two models had explicitly been tested. Many models have been applied as
interactive audits in single cases (Bititci et al., 2012) while more extensive empirical data is often lacking (Tung et al.,
2011).
Third, many audit instruments are complex and time consuming to use (Cocca and Alberti, 2010). Extensive surveys (e.g. Tung et al., 2011) may be theoretically valid, but they can be difficult to apply in management practice. However, Marx et al. (2012) noted the growing number of consultancy-originated models in the area. They argue that such models lack a theoretical foundation and are derived from an arbitrary design method. Hence it appears that the right balance between academic rigor and practical relevance needs to be found.
Addressing these identified research gaps is the underlying motivation of this study. The first gap is tackled in Section 4, which sums up the existing maturity models, while the other two are investigated in Section 5, the empirical part of this study.
4. Defining the design and content of the model
Whereas there is rather established knowledge around PMS, the literature on PI usage is more fragmented. There is no established definition or model for PM, which makes it difficult to grasp (Tangen, 2005). Existing studies investigate issues such as the tasks and processes of PM (Ferreira and Otley, 2009). Another perspective on using performance measurement relates to the established research on management control (Otley and Berry, 1980), where the interest is often in the different controls used to direct the behavior of employees (Malmi and Brown, 2008).
Two main viewpoints on PI usage were chosen. First, factors facilitating the successful usage of performance measurement (see e.g. Bourne et al., 2002) were gathered from the perspective of communication and commitment. The second viewpoint was the managerial tasks in which managers use performance measurement information. For the sake of practicality it was decided to leave out the reasons why managers use performance measurement information, which also helped to prioritize the variables in the prior literature. Since there were many possible alternatives for assessing the managerial tasks and functions where performance measurement information is used, the focus was on two perspectives: the use of measurement information in planning (ex-ante perspective) and in leadership and management (ex-post perspective). Table I illustrates the main perspectives chosen and the critical variables, which are discussed next in more detail.
It is notable that information systems (IS) have rarely been incorporated in studies of performance measurement despite their acknowledged significance (Marchand and Raymond, 2008). Three critical variables were identified to measure the ability of IS to support the automation of measurement systems (cf. Marr and Neely, 2001). Existing maturity models capture well the recommendations of PMS practices (e.g. Neely et al., 1997). Six variables were chosen to capture the essentials of these. Hence, a total of nine variables were identified representing the PMS aspect.
Two essential factors have often been found to facilitate performance measurement implementation: communication and commitment. These are also frequently mentioned in the change management literature. The commitment of personnel and management has typically been emphasized when discussing the successful implementation of PMS (Kennerley and Neely, 2002). The communication of performance results reflects the aspects of information flows (Ferreira and Otley, 2009; Jääskeläinen and Sillanpää, 2013) known to facilitate the usage of performance measurement. The perspective of communication and commitment was captured with four critical variables.
In the performance measurement literature, the importance of the connection between measures and strategy has often been highlighted (e.g. Cocca and Alberti, 2010; Tangen, 2005; Neely et al., 2000). Much of the existing research relates to the integration of performance measurement and strategy. However, the role of performance measurement in strategic planning has received less consideration, even though it has been found to characterize modern strategic planning (Tapinos et al., 2005) and to enable double-loop learning (Argyris and Schön, 1978). It was decided to examine planning in two aspects: strategic long-range planning and action planning (cf. Malmi and Brown, 2008). Three variables were identified to represent the strategy and planning perspective.
The final perspective was linked to leadership and management. Given the broad nature of this viewpoint, a balanced PM approach was applied. As the balanced scorecard is one of the few established models in the performance measurement literature, its perspectives were utilized as a starting point for examining the issue. These four viewpoints were further analyzed and only one factor roughly representing each of the four balanced perspectives was chosen: resource allocation (financial management), benchmarking (process management), scanning the external environment (customer) and competence management (learning and growth). Finally, one variable related to rewarding was also incorporated, as rewarding is often regarded as an integral part of PM models (Ferreira and Otley, 2009).
4.2. Constructing the maturity levels for the survey tool
In the construction of the survey tool, a balance between extensive survey instruments and pragmatic maturity analysis was sought. Four-step maturity levels were designed for each critical variable, thought to represent the level of sophistication in that variable. Maturity levels are written descriptions for a chosen number of evaluation levels (Table II). Literature on the best practices of performance measurement and management (e.g. Bititci et al., 1997; Neely et al., 2000; Najmi et al., 2005) as well as consultancy-originated maturity models (Balanced Scorecard Institute, 2010; Brudan, 2009; Dresner, 2010) were utilized in defining the maturity levels. As suggested by Maier et al. (2012), the best and worst practices were determined first. The authors' own intuition was also needed in defining Levels 2 and 3.
Written assessment criteria are a way to stand out from the many existing maturity surveys using Likert scales. Written criteria have their own downsides, relating for example to the definition of assessment levels that are clearly different from each other. However, at least three factors justify the written maturity levels:
1. while Likert scales are highly subjective and often difficult to answer, written maturity levels provide clearer and more objective alternatives for the respondent (Cocca and Alberti, 2010);
2. written maturity levels provide best practices for the respondent and facilitate the identification of development ideas already when completing the survey, as well as generating discussion and raising awareness (Maier et al., 2006); and
3. written maturity levels enable responding to the survey without external consultants or knowledge of practices outside one's own organization (Garengo et al., 2005).
was compared to the theoretical maximum. Satisfaction was elicited separately for both PMS and PI usage and was rated from 1 to 4 in both questions (1=extremely dissatisfied, 4=extremely satisfied). Variances between the defined maturity profiles were investigated with analysis of variance (ANOVA), which is widely used in the behavioral sciences and can be used to identify the statistical significance of differences between the averages of different groups (Bryman and Bell, 2010).
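As a concrete illustration of this step, the sketch below computes a one-way ANOVA F statistic for one critical variable across two profile groups. The group scores are hypothetical and invented for illustration; the study's actual groups come from the 271 survey responses.

```python
# Sketch of a one-way ANOVA comparing the average maturity level of one
# critical variable between two maturity profiles (hypothetical data).

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic across the given groups."""
    all_values = [x for g in groups for x in g]
    n, k = len(all_values), len(groups)
    grand_mean = sum(all_values) / n

    # Between-group variation: how far each group mean is from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group variation: spread of responses inside each group
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical maturity-level answers (1-4) for one variable in two profiles
profile_a = [1, 2, 2, 1, 2, 3]  # e.g. lower maturity levels reported
profile_b = [3, 3, 4, 2, 3, 4]  # e.g. higher maturity levels reported

print(round(one_way_anova_f([profile_a, profile_b]), 2))  # F statistic
```

A large F value relative to the critical value of the F distribution (with 1 and 10 degrees of freedom in this toy example) would indicate a statistically significant difference between the profile averages.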
5.2. Creation of maturity profiles
The sophistication and scope evaluated with the PM score and the personnel's satisfaction with prevailing practices were combined. The satisfaction rate complements the PM score by taking into account the different kinds of needs organizations have. Without a satisfaction rate, the misleading assumption that more mature practices and technical solutions are always better might easily have been made. Considering the cost-effectiveness of PM, this might not always be the case (Ittner et al., 2003). Therefore the maturity profiles were created using two perspectives: PM score and satisfaction rate. This also means that, unlike in traditional maturity models, the profiles created for this model cannot be ranked in order. They are built to profile, not to rank, organizations according to their PM maturity.
In order to create different maturity profiles, the PM scores of the organizations were divided into four different groups. The average PM score, calculated as the average of all the respondents' scores, was 50.2 percent of the maximum points. As the PM score distribution closely follows a normal distribution, the standard deviation was calculated to allow more precise positioning of organizations. The second perspective in profile creation was personnel satisfaction. Four different satisfaction groups, ranging from extremely dissatisfied to extremely satisfied, were formed by combining the two satisfaction questions (both rated from 1 to 4). A summary of the score thresholds for the different tiers in the PM score and satisfaction rate is presented in Table IV.
With the help of the different satisfaction and PM score groups, a 4×4 matrix was constructed to create maturity profiles. Although segmentation into different tiers would have allowed the creation of 16 different maturity profiles, a decision to combine groups was made in order to maintain relevance and comprehensibility for practitioners. The combination was made by dividing respondents into satisfied and dissatisfied respondents and separating these groups into high and low PM score groups (below and above average). Thus four different maturity profiles were formed. These profiles and the numbers of respondents are shown in Figure 2.
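The grouping logic described above can be sketched as follows. The 50.2 percent average PM score comes from the study; the profile numbering and the combined-satisfaction split point are assumptions made here for illustration, since the text only specifies that Profiles 1 and 3 share a below-average PM score and Profiles 2 and 4 an above-average one.

```python
# Sketch of assigning a respondent to one of the four maturity profiles
# (assumed profile numbering; satisfaction threshold is illustrative only).

AVERAGE_PM_SCORE = 50.2  # percent of the theoretical maximum, from the survey

def maturity_profile(pm_score, sat_pms, sat_pi_use):
    """pm_score: percent of maximum; sat_pms, sat_pi_use: 1-4 ratings."""
    high_score = pm_score > AVERAGE_PM_SCORE
    # The two satisfaction questions combine to a score from 2 to 8;
    # 5 or more is treated here as "satisfied" (assumed split point).
    satisfied = (sat_pms + sat_pi_use) >= 5

    if not high_score:
        return 3 if satisfied else 1   # below-average PM score
    return 4 if satisfied else 2       # above-average PM score

print(maturity_profile(42.0, 2, 2))  # below average, dissatisfied -> 1
print(maturity_profile(63.5, 3, 3))  # above average, satisfied    -> 4
```

Collapsing the 4×4 matrix along both axes in this way yields the four profiles while keeping the tool comprehensible for practitioners, as the paper argues.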
5.3. Testing of the model
Testing of the model was done with respect to two different aspects, as recommended in design science: rigor and relevance (Andriessen, 2004). Academic rigor was tested with reliability and validity, and managerial relevance was tested with practicality and relevance. Reliability was tested by calculating Cronbach's α for each of the perspectives chosen for the model to ensure internal consistency. All the α values rise above the minimum requirement of 0.5 and more than half exceed the limit of 0.7, which is generally considered good for model testing. All the results of this test are shown in Table V.
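The internal-consistency check can be illustrated with a minimal Cronbach's α computation. The item scores below are hypothetical; in the study, each item is a critical variable rated on the four written maturity levels.

```python
# Sketch of Cronbach's alpha for one perspective of the model
# (hypothetical item scores).
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per critical variable, same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_variance_sum = sum(pvariance(item) for item in items)
    return (k / (k - 1)) * (1 - item_variance_sum / pvariance(totals))

# Three hypothetical critical variables (maturity levels 1-4, five respondents)
items = [
    [1, 2, 3, 3, 4],
    [2, 2, 3, 4, 4],
    [1, 3, 3, 4, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.95
```

Values above the 0.5 minimum (and ideally above 0.7) indicate that the critical variables within a perspective measure a consistent underlying construct, which is the criterion applied in Table V.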
The validity of the model was estimated with qualitative data. In all, 63 respondents (23 percent) commented on the survey instrument itself in an open-ended question. In these comments the scope of the model was appreciated. Examples of these comments were "Workable survey! All the right things were asked." and "Gives a good overview and is rather extensive and many-sided." In the researchers' subjective interpretation, most of the comments (over 50 percent) were clearly positive. Of the rest of the comments, about 25 percent were slightly negatively oriented and about 25 percent were neutral in nature. However, some of the comments addressed only relevance/practicality aspects, and for this reason the measure of validity is not without its flaws.
Relevance was measured by the response rate to personal e-mails, which was 22 percent. This response rate is comparable with earlier studies (e.g. Marx et al., 2012) and can be considered good in current times. Practicality was tested with the median answering time, which was 11 minutes. This is close to the initial target of 15 minutes, which was deemed suitable for a quick yet comprehensive analysis. One organization even implemented this model to measure their PM development on a regular basis. A summary of all of these tests is presented in Table VI.
A large share of the respondents, 50 percent, also answered the open-ended questions about the reasons for satisfaction/dissatisfaction with performance measurement and the use of measurement information. This high percentage reveals the reflection this model can generate. Many comments mentioned that just completing the survey gave many new ideas for PM development. Based on all of the assessments in this section, the presented model can be seen to meet its target of being both academically rigorous and managerially relevant.
5.4. Empirical results: variables creating personnel satisfaction in different profiles
After dividing the organizations into the four different profiles, analysis of variance was carried out between suitable profiles to find out the differences in the critical variables creating personnel satisfaction. Differences between Profiles 1 and 3 and between Profiles 2 and 4 were analyzed, because between those profiles the personnel satisfaction rate changes while the overall PM score stays at the same level. Between the other maturity profiles the PM score changes, and for this reason there can be significant differences in every critical variable. It should be noted that these maturity profiles are not sequential (i.e. an organization does not have to progress from Profile 1 through 4 in order). The results of the analysis of variance are shown in Table VII. The numbers of the variables correspond to the question numbers of the survey presented in the Appendix.
In view of the survey results and the analysis of variance, it can be seen that the variables management support to PM, availability of measurement information and analysis of the current situation in strategic planning are the only ones that have statistically different averages both between Profiles 1 and 3 and between Profiles 2 and 4. Thus these variables seem to create satisfaction (or their absence creates dissatisfaction) at both PM score levels (below and above average). Management support is a commonly known factor crucial in ensuring sufficient resources and the importance of constant PM development in the organization (Bourne et al., 2002). Measurement information availability can be linked especially to the discussion on IS and PM (Marchand and Raymond, 2008). It clearly seems that much more should be done to improve PI reporting and to offer easier and more centralized access to PI. The lack of analysis of the current situation in strategic planning can characterize the ex-post nature of PI usage. However, it can also be a manifestation of a poorly functioning PMS. If the PMS does not create usable information, its usage may be restricted. However, this cannot be verified without further research.
A comparison of Profiles 2 and 4 shows which variables create satisfaction when the PM score is above the average level. The most noticeable observation is the importance of IS supporting the performance measurement perspective. Deficient IS seem to be a key reason for dissatisfaction at a high PM score level. This may mean that when the basic requirements of PM are satisfied, employees start to require more from IS: reports should be automated and data should be stored in one place. Other variables showing a significant difference between these profiles are the reliability of measurement information and competence management and learning promotion. It can be interpreted that measurement information reliability becomes an issue when the scope and detail of measurement systems increase. For example, different units of large organizations may have inconsistent registration practices (Jääskeläinen and Sillanpää, 2013). Intellectual capital still appears to be a relatively new aspect of PM, which seems to be best captured by the organizations in Profile 4. The importance of intellectual capital (e.g. Kaplan and Norton, 2004) and of IS in automating measurement systems (e.g. Marr and Neely, 2001) reflects well the changes currently going on in PM development.
A comparison of Profiles 1 and 3 shows which variables create satisfaction when the PM score is below the average level. The definition of measurement specifications is a basic task in PM development which has been known for a long time (e.g. Neely et al., 1995). Measurement specifications ensure that every measure has clear definitions, purposes and responsibilities. In light of this observation, it can be argued that the definition of measurement specifications is the first aspect to be improved when creating more satisfaction toward performance measurement. The other observation characterizing the differences between Profiles 1 and 3 is more difficult to interpret. It seems that at the basic level of PM practices, satisfaction is increased when measurement is used in setting strategic targets. However, this is not a distinguishing factor between Profiles 2 and 4 as, according to the results, these profiles have already advanced in this aspect.
6. Conclusions
This paper presented a new model and evaluation tool for assessing the maturity of PM. The model presented in this
study positions itself between the highly complex and extensive survey instruments and the pragmatic models
presented by consultants. The novelty value of the model relates to the combination of variables extracted from
existing literature, written criteria in maturity levels and maturity profiles created from empirical data which are linked
to variables creating personnel satisfaction. Furthermore, the initial experiences and respondent comments indicate
that the model can be applied in many different organizational contexts.
A key starting point for this research was the lack of research regarding PI usage. This was also presumed to be a practical challenge. However, it seems that performance measurement practices lag behind the academic discussion. The results from the testing of the model indicate that there are still challenges in the basic technical measurement aspects. For example, measurement specifications are not properly defined. IS also cause many difficulties for performance measurement, e.g. the limited availability of information. The implementation of performance measurement is complicated by limited managerial support. The results also reveal that PI does not support strategic planning.
The academic contribution of this paper can be assessed from the perspective of the research gaps identified. First, the model proposed is one of the first to utilize extensive survey data and quantitative methods in the construction of distinctive maturity profiles. It provides an overview of the prevailing PM practices, which can help to fill the gap between research and practice. Second, the resulting model is argued to stand out from the existing literature by providing the balance between rigor and relevance desired in design science research. While the empirical tests suggest a reasonable benefit-burden ratio, the transparent research process should also demonstrate the criteria of academic rigor.
For practitioners, this model offers an easy-to-implement tool that can be used in self-assessment. The model has been proven to have good problem-solving power and to work well as a basis for development. In addition, the empirical data collected for the model described in this paper offer practitioners an easy way to position their own organization in relation to other organizations. Testing of the model also yielded specific variables to concentrate on when aspiring to better personnel satisfaction with PM.
Like all empirical research, this paper has its limitations. The model itself aims to be widely applicable, but its testing was limited. First of all, the respondent group used in this paper was limited to Finland and is therefore country specific. However, different industries were represented, which improves the external validity of the results. The selection of the organizations to test the model might also be biased, as it is based on a non-random sample, and the number of respondents per organization could have been larger to further reduce subjectivity. The presented model and survey tool can also be further improved. For example, satisfaction could have been surveyed at a more exact level (e.g. on a six-step scale in both questions) to increase the sensitivity of this measure.
More specific guidance for organizations on how to effectively apply the results of the model in development work should have been given. This would have required more research and case-specific studies. Second, to further improve the relevance of the model, industry-specific parts should be developed. This way the core of the model would remain the same, but the particularities of different fields of business would be captured in full.
References
1. Aho, M. (2012), "What is your PMI? A model for assessing the maturity of performance management in organizations", PMA 2012 Conference, Cambridge, UK, July 11-13.
2. Andriessen, D. (2004), "Reconciling the rigor-relevance dilemma in intellectual capital research", The Learning Organization, Vol. 11 Nos 4/5, pp. 393-401.
3. Argyris, C. and Schön, D. (1978), Organizational Learning: A Theory of Action Perspective, Addison-Wesley, Reading, MA.
4. Balanced Scorecard Institute (2010), "The strategic management maturity model™", available at: www.balancedscorecard.org/Portals/0/PDF/BSCIStrategicManagementMaturityModel.pdf (accessed June 25, 2012).
5. Bititci, U., Garengo, P. and Ates, A. (2012), "Towards a maturity model for performance and management", PMA 2012 Conference, Cambridge, UK, July 11-13.
6. Bititci, U.S., Ackermann, F., Ates, A., Davies, J., Garengo, P., Gibb, S., MacBryde, J., Mackay, D., Maguire, C. and Van Der Meer, R. (2011), "Managerial processes: business process that sustain performance", International Journal of Operations & Production Management, Vol. 31 No. 8, pp. 851-891.
7. Bititci, U.S., Carrie, A.S. and McDevitt, L. (1997), "Integrated performance measurement systems: an audit and development guide", The TQM Magazine, Vol. 9 No. 1, pp. 46-53.
8. Bourne, M., Kennerley, M. and Franco-Santos, M. (2005), "Managing through measures: a study of impact on performance", Journal of Manufacturing Technology Management, Vol. 16 No. 4, pp. 373-395.
9. Bourne, M., Neely, A., Platts, K. and Mills, J. (2002), "The success and failure of performance measurement initiatives: perceptions of participating managers", International Journal of Operations & Production Management, Vol. 22 No. 11, pp. 1288-1310.
10. Brudan, A. (2009), "Assessing organizational performance management capability - the performance management maturity model", available at: www.smartkpis.com/blog/tag/performance-managementmaturity-model/ (accessed June 25, 2012).
11. Bryman, A. and Bell, E. (2007), Business Research Methods, Oxford University Press, New York, NY.
12. Cocca, P. and Alberti, M. (2010), "A framework to assess performance measurement systems in SMEs", International Journal of Productivity and Performance Management, Vol. 59 No. 2, pp. 186-200.
13. de Bruin, T., Rosemann, M., Freeze, R. and Kulkarni, U. (2005), "Understanding the main phases of developing a maturity assessment model", Proceedings of the 16th Australasian Conference on Information Systems (ACIS), Sydney, November 30-December 2.
14. de Waal, A., Kourtit, K. and Nijkamp, P. (2009), "The relationship between the level of completeness of a strategic performance management system and perceived advantages and disadvantages", International Journal of Operations & Production Management, Vol. 29 No. 12, pp. 1242-1265.
15. Dresner, H. (2010), "About the performance culture maturity model™", available at: https://sites.google.com/site/performanceculturesite/home/more-on-the-pcmm (accessed June 25, 2012).
16. Emory, C. (1985), Business Research Methods, 3rd ed., The Irwin Series in Information and Decision Sciences, Irwin, Homewood, IL.
17. Evans, J.R. (2004), "An exploratory study of performance measurement systems and relationships with performance results", Journal of Operations Management, Vol. 22 No. 3, pp. 219-232.
18. Ferreira, A. and Otley, D. (2009), "The design and use of performance management systems: an extended framework for analysis", Management Accounting Research, Vol. 20 No. 4, pp. 263-282.
19. Franco-Santos, M., Lucianetti, L. and Bourne, M. (2012), "Contemporary performance measurement systems: a review of their consequences and a framework for research", Management Accounting Research, Vol. 23 No. 2, pp. 79-119.
20. Garengo, P., Biazzo, S. and Bititci, U.S. (2005), "Performance measurement systems in SMEs: a review for a research agenda", International Journal of Management Reviews, Vol. 7 No. 1, pp. 25-47.
21. Gelderman, M. (1998), "The relation between user satisfaction, usage of information systems and performance", Information & Management, Vol. 34 No. 1, pp. 11-18.
22. Gibson, C.F. and Nolan, R.L. (1974), "Managing the four stages of EDP growth", Harvard Business Review, Vol. 27 No. 1, pp. 76-88.
23. Hatry, H.P. (2006), Performance Measurement: Getting Results, 2nd ed., Urban Institute Press, Washington, DC.
24. Homburg, C., Artz, M. and Wieseke, J. (2012), "Marketing performance measurement systems: does comprehensiveness really improve performance?", Journal of Marketing, Vol. 76 No. 3, pp. 56-77.
25. Ittner, C.D., Larcker, D.F. and Randall, T. (2003), "Performance implications of strategic performance measurement in financial services firms", Accounting, Organizations and Society, Vol. 28 No. 7, pp. 715-741.
26. Jääskeläinen, A. and Sillanpää, V. (2013), "Overcoming challenges in the implementation of performance measurement: case studies in public welfare services", International Journal of Public Sector Management, Vol. 26 No. 6, pp. 440-454.
27. Kaplan, R.S. and Norton, D.P. (2004), "Measuring the strategic readiness of intangible assets", Harvard Business Review, Vol. 82 No. 2, pp. 52-63.
28. Kennerley, M. and Neely, A. (2002), "A framework of the factors affecting the evolution of performance
measurement systems, International Journal Of Operations & Production Management , Vol. 22 No. 11,
pp. 1222-1245. [Abstract], [ISI] [Infotrieve]
29.
Lts, K. , Haldma, T. and Mller, K. (2011), Performance measurement patterns in service companies: an
empirical study on Estonian service companies, Baltic Journal of Management , Vol. 6 No. 3, pp.357377. [Abstract] [Infotrieve]
30.
Maier, A.M. , Eckert, C.M. and Clarkson, J.P. (2006), Identifying requirements for communication support: a
maturity grid-inspired approach, Expert Systems with Applications , Vol. 31 No. 4, pp. 663672. [CrossRef], [ISI] [Infotrieve]
31.
Maier, A.M. , Moultrie, J. and Clarkson, P. (2012), Assessing organizational capabilities: reviewing and
guiding the development of maturity grids, Engineering Management, IEEE Transactions On , Vol. 59 No. 1,
pp. 138-159. [CrossRef], [ISI] [Infotrieve]
32.
Malmi, T. and Brown, D.A. (2008), Management control systems as a package opportunities, challenges
and research directions, Management Accounting Research , Vol. 19 No. 4, pp. 287300. [CrossRef] [Infotrieve]
33.
Marchand, M. and Raymond, L. (2008), Researching performance measurement systems: an information
systems perspective, International Journal of Operations & Production Management , Vol. 28 No. 7,
pp. 663-686. [Abstract], [ISI] [Infotrieve]
34.
Marr, B. and Neely, A. (2001), Organisational performance measurement in the emerging digital
age,International Journal of Business Performance Management , Vol. 3 No. 2, pp. 191215. [CrossRef] [Infotrieve]
35.
Marx, F. , Wortmann, F. and Mayer, J.H. (2012), A maturity model for management control
systems,Business & Information Systems Engineering , Vol. 4 No. 4, pp. 193-207. [CrossRef] [Infotrieve]
36.
Matta, K.F. (1989), A goal-oriented productivity index for manufacturing systems, International Journal of
Operations & Production Management , Vol. 9 No. 4, pp. 66-76. [Abstract] [Infotrieve]
37.
Najmi, M. , Rigas, J. and Fan, I. (2005), A framework to review performance measurement
systems,Business Process Management Journal , Vol. 11 No. 2, pp. 109-122. [Abstract] [Infotrieve]
38.
Neely, A. , Gregory, M. and Platts, K. (1995), Performance measurement system design: a literature review
and research agenda, International Journal of Operations & Production Management , Vol. 15 No. 4,
pp. 80-116. [Abstract], [ISI] [Infotrieve]
39.
Neely, A. , Mills, J. , Platts, K. , Richards, H. , Gregory, M. , Bourne, M. and Kennerley, M. (2000),
Performance measurement system design: developing and testing a process-based
approach,International Journal of Operations & Production Management , Vol. 20 No. 10, pp. 1119-1145.
[Abstract], [ISI] [Infotrieve]
40.
Neely, A. , Richards, H. , Mills, J. , Platts, K. and Bourne, M. (1997), Designing performance measures: a
structured approach, International journal of Operations & Production Management , Vol. 17 No. 11,
pp.1131-1152. [Abstract], [ISI] [Infotrieve]
41.
Nudurupati, S.S. , Bititci, U.S. , Kumar, V. and Chan, F.T. (2011), State of the art literature review on
performance measurement, Computers & Industrial Engineering , Vol. 60 No. 2, pp. 279290. [CrossRef], [ISI] [Infotrieve]
42.
Otley, D.T. and Berry, A.J. (1980), Control, organisation and accounting, Accounting, Organizations and
Society , Vol. 5 No. 2, pp. 231-244. [CrossRef] [Infotrieve]
43.
Podsakoff, P.M. and Organ, D.W. (1986), Self-reports in organizational research: problems and
prospects, Journal of Management , Vol. 12 No. 4, pp. 531-544. [CrossRef], [ISI] [Infotrieve]
44.
Royce, W. (1999), Software Project Management , Pearson Education, Addison Wesley, Reading.
45.
Salleh, N.A.M. , Jusoh, R. and Isa, C.R. (2010), Relationship between information systems sophistication
and performance measurement, Industrial Management & Data Systems , Vol. 110 No. 7, pp. 9931017. [Abstract], [ISI] [Infotrieve]
46.
Schlfke, M. , Silvi, R. and Mller, K. (2013), A framework for business analytics in performance
management, International Journal of Productivity and Performance Management , Vol. 62 No. 1, pp.110122. [Abstract] [Infotrieve]
47.
Speckbacher, G. , Bischof, J. and Pfeiffer, T. (2003), A descriptive analysis on the implementation of
balanced scorecards in German-speaking countries, Management Accounting Research , Vol. 14 No. 4,
pp. 361-388. [CrossRef] [Infotrieve]
48.
Stivers, B.P. , Covin, T.J. , Hall, N.G. and Smalt, S. (1998), How nonfinancial performance measures are
used, Management Accounting , Vol. 79 No. 4, pp. 44-48. [Infotrieve]
49.
Tangen, S. (2005), Demystifying productivity and performance, International Journal of Productivity and
Performance Management , Vol. 54 No. 1, pp. 34-46. [Abstract] [Infotrieve]
50.
Tapinos, E. , Dyson, R. and Meadows, M. (2005), The impact of performance measurement in strategic
planning, International Journal of Productivity and Performance Management , Vol. 54, Nos 5/6, pp.370384. [Abstract] [Infotrieve]
51.
Tung, A. , Baird, K. and Schoch, H.P. (2011), Factors influencing the effectiveness of performance
measurement systems, International Journal of Operations & Production Management , Vol. 31 No. 12,
pp. 1287-1310. [Abstract], [ISI] [Infotrieve]
52.
Van Aken, E.M. , Letens, G. , Coleman, G.D. , Farris, J. and Van Goubergen, D. (2005), Assessing maturity
and effectiveness of enterprise performance measurement systems, International Journal of Productivity
and Performance Management , Vol. 54 Nos 5/6, pp. 400-418. [Abstract] [Infotrieve]
53.
Van Aken, J.E. (2007), Design science and organization development interventions aligning business and
humanistic values, The Journal Of Applied Behavioral Science , Vol. 43 No. 1, pp. 6788. [CrossRef] [Infotrieve]
54.
Wettstein, T. and Kueng, P.A. (2002), A maturity model for performance measure systems, in: Brebbia,
C. and Pascola, P. (Eds), Management Information Systems: GIS and Remote Sensing , WIT
Press,Southampton, pp. 113-122.
Each question has four choices. The first describes an undeveloped level and the fourth a sophisticated level of measurement practices. It is important to note that the top level is not always the most appropriate level for every organization. Choose the description that best illustrates the status in your organization. When moving up the evaluation scale, all the aspects described at the lower levels must be fulfilled. When a description contains more than one criterion, all the criteria must be fulfilled in order to reach the level in question. Be as realistic as possible and use your overall impression of your workplace.
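The cumulative rule above (an organization reaches a level only when that level and every level below it are fully satisfied) can be sketched as follows. This is a minimal illustration; the function name and data layout are hypothetical and not part of the survey instrument:

```python
# Minimal sketch of the cumulative scoring rule, assuming each question's
# answer choices are represented as per-level lists of fulfilled criteria.

def maturity_level(criteria_met):
    """Return the highest level reached for one question.

    criteria_met: list of up to four lists of booleans, one list per level
    (index 0 = undeveloped ... index 3 = sophisticated). A level counts
    only if all of its criteria AND all lower levels' criteria hold.
    """
    level = 0
    for checks in criteria_met:
        if all(checks):
            level += 1
        else:
            break  # a gap at this level blocks all higher levels
    return level

# Levels 1 and 2 fully met, one level-3 criterion unmet: the unmet
# criterion caps the result at level 2, even if level 4 were satisfied.
print(maturity_level([[True], [True, True], [True, False], [True]]))  # -> 2
```

The early `break` is what encodes "all the aspects described at the lower levels must be fulfilled": fulfilling a higher level never compensates for a gap below it.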
Background Information
In this survey, measurement covers all quantitative information (e.g. employee satisfaction survey results, lead-times, cost information) gathered from the organization's operations. More specifically, measurement information refers to information supporting managerial needs.
A. Performance measurement practices
(1) Scope of measurement
Measurement is linked to the operative level and includes some non-financial measures (e.g. an employee satisfaction survey).
Measurement reaches operative processes (e.g. customer satisfaction with the delivery times of individual products) and has an optimal balance of financial and non-financial measures. The measures used are linked to the needs of different stakeholders.
Linkages between measurement objects are analyzed and modeled (e.g. strategy map). There is a common
understanding in the organization regarding the factors that should be improved in order to affect the main
measurement results.
There are several interpretations of the measurement information. Personnel do not trust the measurement
information.
There are differing interpretations of some parts of the measurement information. Decision-makers trust the measurement information.
Measures provide mainly unambiguous information. Personnel trust the measurement information.
Measures are defined to provide proactive information supporting the achievement of strategic objectives.
Measurement specifications have been discussed but they are not documented.
New measures are taken into use when needed but the usefulness of the old measures is not evaluated.
There is a regular evaluation and development of measures. Old measures are discarded when necessary.
Measurement information is gathered manually to a large extent. Only financial measurement information is
gathered automatically.
Most of the measurement information is gathered with IS systems which enable the provision of real-time
measurement information.
Measurement information is gathered automatically and stored centrally. The most important IS systems communicate with each other.
The analysis and reporting of measurement information is carried out with office software (word processing,
spreadsheets) when needed.
Measurement information is analyzed and reported with simple, purpose-built tools such as spreadsheet models and macros. Visualization is used in refining measurement information.
Measurement information is analyzed and reported with purpose-built programs. Planning and decision-making are supported with the visualization of measurement information.
Measurement information may be available, but only a few know where to find it.
(10a) How satisfied are you with the performance measurement practices and systems in your organization?
Very dissatisfied
Dissatisfied
Satisfied
Very satisfied
Open-ended question
Measurement is regarded as useful. The views of personnel are taken into account when developing
measurement.
The work community feels that measurement improves fairness. Personnel initiate measurement improvement efforts.
Managers regard measurement as important and employees are encouraged toward measurement.
Personnel obtain relevant measurement information in a random manner. Personnel do not know the targets
of measures related to them.
Personnel frequently obtain the measurement information related to them. Supervisors know the targets of the measures under their administration.
Measurement results relevant to the personnel are communicated interactively. All personnel know the
measurement targets linked to them.
Measurement results are regularly communicated to the key stakeholders but in a non-systematic way.
Measurement results are regularly communicated to the key stakeholders with a pre-determined reporting
template.
Measurement results from the previous years are acknowledged in setting strategic targets.
Measurement information is used both in setting strategic targets and in questioning earlier strategic
decisions.
Measures are used in the identification of development objects (e.g. identification of bottlenecks in the production process).
Measures are used to support the preparation of action plans (e.g. prioritizing development objects).
Definition and implementation of action plans are done systematically and mainly based on measurement information (e.g. action plans are prioritized and controlled with the support of measurement information).
Resource usage is supported with measurement information (e.g. personnel engaged in a certain project).
Resource sharing is supported with measurement information (e.g. decisions regarding personnel training).
Decisions on resource allocation (e.g. budgeting) are made based on measurement information.
Personnel competencies are constantly monitored (e.g. self-evaluations) and decisions supporting learning
are carried out based on measurement information.
Development targets are identified based on measurement information and personnel are provided with
individual development plans.
(20) Benchmarking
Measurement information is used in the analysis of customers (e.g. identifying sales potential by analyzing
the turnover of customers).
Measurement information is used in analyzing other external stakeholders (e.g. identification of market
potential of new products).
Measurement information is used as a basis of communication with key external stakeholders (e.g. optimizing the supply chain performance and customer value with measurement information).
There is a clear linkage between rewarding principles and unit level measurement targets.
There is a clear linkage between rewarding principles and personal level measurement targets.
(23a) How satisfied are you with the usage of measurement information in your organization?
Very dissatisfied
Dissatisfied
Satisfied
Very satisfied
Open-ended question
Dr Aki Jääskeläinen works as a Research Fellow on the Performance Management Team at Tampere University of Technology, Finland. His research interests focus on performance measurement and management, especially in service operations. He has also participated in many development projects related to performance management in Finnish organizations. Dr Aki Jääskeläinen is the corresponding author and can be contacted at: aki.jaaskelainen@tut.fi
Juho-Matias Roitto (MSc) works as a Project Researcher on the Performance Management Team at Tampere University of Technology, Finland. His special interest lies in the better utilization of existing performance information in organizations.
Acknowledgments:
This study was conducted in a research project funded by the Finnish Work Environment Fund. The authors are thankful to Dr Paula Kujansivu for her efforts at the beginning of the project in supporting the first ideas of this paper.