
Designing a model for profiling organizational performance management

Aki Jääskeläinen (Department of Industrial Engineering, Tampere University of Technology, Tampere,
Finland)
Juho-Matias Roitto (Department of Industrial Management, Tampere University of Technology, Tampere,
Finland)
Citation:
Aki Jääskeläinen, Juho-Matias Roitto, (2015) "Designing a model for profiling organizational performance
management", International Journal of Productivity and Performance Management, Vol. 64 Iss: 1, pp. 5-27
DOI: http://dx.doi.org/10.1108/IJPPM-01-2014-0001
Abstract:
Purpose
The purpose of this paper is to design and test a model for analyzing organizational performance
management (PM) practices.
Design/methodology/approach
This study follows the design science approach. Variables affecting the status of PM are reviewed and
classified based on existing literature. These variables are analyzed and a compact set of critical variables
is chosen to represent PM maturity. The designed model is implemented in practice as a survey receiving 271
responses, and tested using both quantitative and qualitative approaches.
Findings
The survey data are utilized in the development of four distinct PM maturity profiles. The empirical results
provide an understanding of the current PM maturity level and common development targets in Finnish
organizations.
Research limitations/implications
External validity of the research is compromised by the context and respondent group. More in-depth
qualitative studies could provide further understanding of the causes of the presented findings.
Practical implications
The proposed model offers best practices for developing PM and identifies variables crucial for creating
satisfaction with PM. The presented profiles also help in evaluating the status of PM in the organization
examined.
Originality/value
The originality of the new model relates to its balance between rigor and relevance. In addition, the study
is one of the first attempts to widely apply PM maturity models in practice. A distinctive feature of this study
is the maturity profiles which are built upon empirical data.
Keywords:
Performance measurement, Performance management, Profiling, Maturity model, Personnel satisfaction
Publisher:
Emerald Group Publishing Limited

1. Introduction

The benefits of performance measurement have been studied extensively (e.g. Franco-Santos et al., 2012) but there is a
lack of understanding of why the full potential of measurement is rarely exploited in practice (Bourne et al., 2005).
While there are also clear problems in technical measurement aspects (e.g. validity of measures) (Tung et al.,
2011), it has been noted that problems often occur in the use of performance measurement (Bourne et al.,
2005; Stivers et al., 1998). Concurrently, the maturity of utilizing performance measurement has been linked to an
organization's capabilities and performance-driven behavior (Bititci et al., 2011; Bourne et al., 2005; Ittner et al., 2003),
laying the ground for the need to gain a more in-depth understanding of the status of using measurement for different
purposes.
There are many studies on the status of performance measurement in different contexts. These studies (e.g. Lääts et
al., 2011; Tangen, 2005) often focus on the objects of measurement (e.g. the balance of measures used) or on the
technical aspects (i.e. the types of measures or measurement frameworks used) rather than on the utilization of
performance information (PI) in management. There are also studies assessing the maturity of performance
measurement and management in organizations (e.g. Aho, 2012; Bititci et al., 2012; Cocca and Alberti, 2010; Van Aken et al.,
2005; Wettstein and Kueng, 2002). Various models and tools have been proposed both for managerial and academic
purposes. They deepen the understanding of current measurement practices and can be utilized in pinpointing
common development areas. However, the work with maturity assessment is still in progress. For example, the
practical application of the models is often limited (Marx et al., 2012).
This paper designs and tests a model for profiling the maturity of performance management (PM) in organizations.
The study follows the design science approach. First, existing maturity models are reviewed, their shortcomings are
identified and a new classification of critical variables extracted from the literature is presented. Second,
measurement scales are defined for variables identified. Third, a maturity analysis tool is applied as a survey
receiving 271 responses. The tool is evaluated by means of qualitative open-ended responses and statistical
analysis. Analysis of variance is used to review maturity profiles based on the empirical material gathered.
2. Methodology

This paper applies the design science approach, which aims to develop knowledge that can be used in solving
problems in the field (Andriessen, 2004; Van Aken, 2007). In design science, the effectiveness and validity of the
solutions are evaluated not only by the researchers but also by the users in the field of application. As the ultimate
aim for this research is to construct a model to profile organizations according to their PM and to give guidelines for
future development, design science was a natural way to pursue this aim.
There are detailed instructions for constructing a maturity model with a design science approach (e.g. de Bruin et al.,
2005; Maier et al., 2012). The six main phases by de Bruin et al. (2005) are presented in Figure 1. This study follows
the first five of these steps. The maintain phase deals with the usage of a model, which is beyond the scope of this
paper.
In the first phase, the scope and target population of the model are defined. In this study, the distinctive scope of the
new model was linked to the usage of performance measurement. A further aim was to develop a model which can
be deployed in many different organizations without specified limitations.
The second phase includes the definition of evaluation variables and execution plans. Evaluation variables can be
identified either analytically (top-down) or by combining the existing literature (bottom-up). De Bruin et
al. (2005) and Maier et al. (2012) agree that a bottom-up approach works well when the variables thought to represent
maturity are known. As there is already a substantial literature on the topic, the bottom-up approach was chosen in this study
and the evaluation variables were first identified from the appropriate literature. The evaluation variables identified
were classified into two main perspectives: performance measurement systems (PMS); and the usage of PI. This
work is described in more detail in Section 4.1. It was decided to carry out the actual execution of the maturity
analysis as a self-evaluation survey addressed to managers and administrative experts assumed to have sufficient
knowledge about existing PM practices.
In the third phase, the main content of the model is defined. In this study, this meant the development of an
appropriate survey instrument with maturity levels describing various ways of operating in each of the evaluated
variables. This phase is described in more detail in Section 4.2. In the fourth phase the model undergoes its first trial
testing. In our study, a web-based survey solution was designed. It was further developed through three different
tests including researchers and practitioners. This phase is reported in more detail in Section 4.3.
The final phase of this study was the deploy phase, in which the model was tested with the target population. In this
phase, this study differed from the de Bruin et al. (2005) process, since the gathered empirical material was utilized in
the further definition of maturity profiles and hence the content of the model. This task required the use of a survey
instead of interviews in order to improve the external validity of the profiles. The model was evaluated for its validity and
reliability to represent measurement rigor (Emory, 1985, p. 94) and for its practicality and relevance to represent managerial
relevance (Matta, 1989, p. 67). This phase is described in Sections 5.1-5.4.
3. Literature review

3.1. Key concepts


3.1.1. Maturity model and related concepts
The concept of the maturity model was originally presented by Gibson and Nolan (1974). They introduced a four-stage
model that assesses the maturity of an information systems (IS) function across four areas (budget, applications,
personnel and management techniques). Since then, the maturity model concept has been widely used in various
management research fields (e.g. process management) (Bititci et al., 2012). The key elements of maturity models
are dimensions (specific capability areas, etc.), maturity levels and an assessment instrument (de Bruin et al., 2005).
In this study, the term maturity model is seen as an umbrella concept which is divided into three different parts. The
maturity model framework defines the evaluation variables to be measured and the reasons why these variables are
chosen. The maturity model instrument defines how these variables are measured. This is done with maturity levels,
which describe the alternative ways of operating in each of the variables. The maturity profile is the overall
status of an organization, which combines all the necessary evaluation variables.
3.1.2. Maturity in PM
The concept of maturity in PM is often considered ambiguous. Some related terms are the level of sophistication
(e.g. Evans, 2004), the level of completeness and scope (e.g. Evans, 2004; de Waal et al., 2009) and the evolution
phase of the system (e.g. Speckbacher et al., 2003). It has also been stated that the ultimate test for PI is whether it is
used by managers or not (Hatry, 2006). However, it is also important to take into account the context in which PM is
functioning. This is why personnel satisfaction with PM essentially captures how well a PMS is fulfilling its task
(cf. Gelderman, 1998).
In the light of these definitions, this research considers the maturity of PM as a three-way phenomenon consisting
of the level of scope, sophistication, and satisfaction with the system. These aspects combined give an
extensive overview of PM in the organization. However, these three aspects cannot be ranked in order of importance.
The scope of PM is defined in this study as all the areas where PI is used as well as the scope of measures used. Some
researchers have also labeled this the comprehensiveness of PMS (e.g. Homburg et al., 2012). Sophistication of PM
has often been researched only through IS sophistication, which has been proven to have an effect on the use of PI
(Salleh et al., 2010). Schläfke et al. (2013) define PM sophistication as the possibility to provide and use information at
a more detailed level. This same definition is utilized in this study, and the term "more detailed level" is used to mean
both in-depth measurement information and a more specific organizational area. For example, this can be seen as the
ability to drill down into measurement results.
3.2. Overview of the existing PM maturity research
The first maturity models of performance measurement and management were built upon the existing
recommendations focussing on the search for the optimal design and content of a PMS (e.g. Neely et al.,
1997; Bititci et al., 1997; Najmi et al., 2005). The amount of research concerning maturity and PM has increased
greatly in the last few years. Hence, it is justified to use these models as a starting point for a new, overarching model. The
presented models vary, having different backgrounds and implementations. Therefore it is appropriate to review the
models by illustrating their varying characteristics (cf. Marx et al., 2012).
Most of the models appear to be targeted at any organization, with a few exceptions such as the model by Cocca
and Alberti (2010) intended for SMEs. The execution of the models is designed either for interactive assessment in a
small set of firms or for large self-assessment surveys of employees (Bititci et al., 2012; Van Aken et al.,
2005; Wettstein and Kueng, 2002).

Considering the implementation of the models, on the one hand there are academic models with more aspects, often
leading to a large set of survey questions (Aho, 2012; Tung et al., 2011). In contrast, there are models with a
consultative background (Balanced Scorecard Institute, 2010; Brudan, 2009; Dresner, 2010) that typically present a
concise list of assessment factors (up to ten) which are then evaluated by using grades with written descriptions
fitting onto one A4 page.
The scope of the models likewise varies a great deal. There are models intended to assess the maturity of
performance measurement (Cocca and Alberti, 2010; Van Aken et al., 2005; Wettstein and Kueng, 2002). They
discuss relevant aspects in performance measurement (such as the content of measurement and the measurement
process). Then there are models that are more focussed on assessing PM as a part of management (Aho,
2012; Bititci et al., 2012). These models investigate aspects such as performance-oriented culture, strategic
orientation and performance reporting.
Despite the growing number of studies, the work on the maturity assessment of PM is ongoing. The number of actual
maturity models (containing maturity model framework, maturity model assessment instrument and maturity profiles)
is limited and none of these models are established or overarching. Three main research gaps can be identified.
First, many models concentrate on the design of PMS, paying less attention to measurement usage (Bititci et al.,
2012). Issues related to measurement usage are often limited to specific aspects such as business intelligence,
analytical capability, frequency of reporting, communication of measurement results, and reviewing measures rather
than actual managerial processes and tasks (Aho, 2012; Wettstein and Kueng, 2002).
Second, many models do not provide overall maturity profiles based on an empirical data set (Marx et al.,
2012; Wettstein and Kueng, 2002). This appears to be caused by the limited empirical testing of the models. In their
review, Marx et al. (2012) noted that only two models had explicitly been tested. Many models have been applied as
interactive audits in single cases (Bititci et al., 2012) while more extensive empirical data is often lacking (Tung et al.,
2011).
Third, many audit instruments are complex and time consuming to use (Cocca and Alberti, 2010). Extensive surveys
(e.g. Tung et al., 2011) may be theoretically valid but they can be difficult to apply in management practice.
However, Marx et al. (2012) noted the growing number of consultancy-originated models in the area. They argue that
such models lack a theoretical foundation and are derived from an arbitrary design method. Hence it appears that the
right balance between academic rigor and practical relevance needs to be found.
Addressing the research gaps identified is the underlying motivation of this study. The first gap is tackled in Section
4, which sums up the existing maturity models, while the other two are investigated in Section 5, the
empirical part of this study.
4. Defining the design and content of the model

4.1. Selecting critical variables from existing literature


First, the different variables concerning the maturity of PM were identified from the literature. The literature used consisted
of all the academic maturity models identified and of surveys assessing the effect of specific factors on PM practices.
Different combinations of the terms performance measurement, PM, management control, maturity, effectiveness, and
assessment/review were utilized in academic search engines.
Although a systematic search process was not used, a substantially wide range of factors related to PM was
extracted. From six academic maturity models (Cocca and Alberti, 2010; Wettstein and Kueng, 2002; van Aken et al.,
2005; Bititci et al., 2012; Aho, 2012; Marx et al., 2012; Najmi et al., 2005) and two surveys focussing on different PM
aspects (Bititci et al., 2011; Tung et al., 2011), 174 different variables were identified at the first stage. Content
analysis was utilized in identifying variables with similar meanings. This phase led to wider evaluation classes and a
total of 48 variables. These were further evaluated, and only those variables that could be linked to at least
two of the existing studies reviewed were retained. Finally, 39 variables were chosen for further analysis.
These variables were classified into two distinct but interconnected themes: PMS and PI usage. The PMS perspective
was further divided into two categories: performance measurement practices and IS supporting performance
measurement. IS represents the hard technical aspect of performance measurement, which is important to
recognize alongside performance measurement practices and the content of PMS (Nudurupati et al., 2011).

Whereas there is rather established knowledge around PMS, the literature on PI usage is more fragmented. There is
no established definition or model for PM, which makes it difficult to grasp (Tangen, 2005). Existing studies investigate
issues such as the tasks and processes of PM (Ferreira and Otley, 2009). Another perspective on using performance
measurement relates to the established research on management control (Otley and Berry, 1980), where the interest
is often in the different controls that are used to direct the behavior of employees (Malmi and Brown, 2008).
Two main viewpoints on PI usage were chosen. First, factors facilitating the successful usage of performance
measurement (see e.g. Bourne et al., 2002) were gathered from the perspective of communication and commitment.
The second viewpoint was the managerial tasks in which managers use performance measurement information. For the sake
of practicality it was decided to leave out the reasons why managers use performance measurement information,
which also helped to prioritize the variables in the prior literature. Since there were many possible alternatives for
assessing the managerial tasks and functions where performance measurement information is used, the focus was on two
perspectives: the use of measurement information in planning (ex-ante perspective) and in leadership and management
(ex-post perspective). Table I illustrates the main perspectives chosen and the critical variables, which are discussed next
in more detail.
It is notable that IS have rarely been incorporated in studies of performance measurement despite their
acknowledged significance (Marchand and Raymond, 2008). Three critical variables were identified to measure the
ability of IS to support the automation of measurement systems (cf. Marr and Neely, 2001). Existing maturity models
capture well the recommendations of PMS practices (e.g. Neely et al., 1997). Six variables were chosen to capture
the essentials of these. Hence, a total of nine variables were identified representing the PMS aspect.
Two essential factors have often been found to facilitate performance measurement implementation: communication
and commitment. These are also frequently mentioned in the change management literature. Commitment of
personnel and management has typically been emphasized when discussing successful implementation of PMS
(Kennerley and Neely, 2002). Communication of performance results reflects the aspects of information flows
(Ferreira and Otley, 2009; Jääskeläinen and Sillanpää, 2013) known to facilitate the usage of performance
measurement. The perspective of communication and commitment was captured with four critical variables.
In performance measurement literature the importance of connection between measures and strategy has often been
highlighted (e.g. Cocca and Alberti, 2010; Tangen, 2005; Neely et al., 2000). Much of the existing research relates to
the integration of performance measurement and strategy. However, the role of performance measurement in
strategic planning has been considered less, even though it has been found to characterize modern strategic
planning (Tapinos et al., 2005) and enable double-loop learning (Argyris and Schön, 1978). It was decided to
examine planning in two aspects: strategic long-range planning and action planning (cf. Malmi and Brown, 2008). Three
variables were identified to represent the strategy and planning perspective.
The final perspective was linked to leadership and management. Given the broad nature of this viewpoint, a balanced
PM approach was applied. As the balanced scorecard is one of the few established models in the performance
measurement literature, its perspectives were utilized as a starting point for examining the issue. These four
viewpoints were further analyzed and only one factor roughly representing each of the four balanced perspectives
was chosen: resource allocation (financial management), benchmarking (process management), scanning external
environment (customer) and competence management (learning and growth). Finally, one variable related to
rewarding was also incorporated. Rewarding is often regarded as an integral part of PM models (Ferreira and Otley,
2009).
4.2. Constructing the maturity levels for survey tool
In the construction of the survey tool, a balance between extensive survey instruments and pragmatic maturity
analysis was sought. Four-step maturity levels, thought to represent the level of sophistication in each critical
variable, were designed. Maturity levels mean written descriptions for a chosen number of
evaluation levels (Table II). Literature on the best practices of performance measurement and management (e.g. Bititci et
al., 1997; Neely et al., 2000; Najmi et al., 2005) as well as consultancy-originated maturity models (Balanced
Scorecard Institute, 2010; Brudan, 2009; Dresner, 2010) were utilized in defining the maturity levels. As suggested
by Maier et al. (2012), the best and worst practices were determined first. The authors' own intuition was also needed in
defining Levels 2 and 3.
Written assessment criteria are a way to stand out from many existing maturity surveys using Likert scales. Written
criteria have their own downsides relating, for example, to the definition of assessment levels that are clearly different
from each other. However, at least three factors justify the written maturity levels:

1. while Likert scales are highly subjective and often difficult to answer, written maturity levels provide clearer
and more objective alternatives for the respondent (Cocca and Alberti, 2010);

2. written maturity levels provide best practices for the respondent and facilitate the identification of
development ideas already when completing the survey as well as generating discussion and raising
awareness (Maier et al., 2006); and

3. written maturity levels enable responding to the survey without external consultants or knowledge of
practices outside one's own organization (Garengo et al., 2005).

4.3. Pre-testing the survey tool


The assessment survey was pre-tested in three phases. First, it was commented on by five research colleagues,
resembling the idea of alpha testing (Royce, 1999). This led to several minor improvements to specific assessment
criteria. The clarity and unambiguity were improved. A key addition was the inclusion of statements regarding the
overall personnel satisfaction with PMS and PI usage. These were incorporated as one of the three components of
maturity in this research. In order to keep the survey structure compact, the satisfaction-related questions were not
systematically asked for each variable evaluated. Open-ended questions about the causes of satisfaction were also
added at this stage. This test phase also revealed a need to test the survey tool with practitioners.
The second phase, reflecting the idea of beta testing (Royce, 1999), was carried out in focus groups, since interaction
was deemed necessary in explaining the rationale and idea underlying the evaluation criteria. The first focus group was
conducted in a software company with 100 employees. The company had recently developed its measurement
practices. The CEO, a controller and a project manager responded to the survey and told the researcher how they
understood each question and what their responses meant in practice. This phase resulted in a few improvements to
the clarity of questions and it also led to the rejection of one evaluation criterion deemed irrelevant.
Beta testing was continued through a focus group attended by the representatives of three public organizations (the steering
committee of a research project). This phase was carried out mainly in order to find out whether the survey fits the
public sector context. Five controllers, analysts and managers attended the event, which was led by two researchers. Some
changes to the order of questions were made after the event. It was deemed important that the easier questions should
be asked first. The phase also led to the addition of examples to some of the statements. This was a compromise
between context-specific relevance (detail of description) and external validity. Finally, an open-ended question
inviting comments and development ideas was added to the end of the survey. The resulting evaluation tool is
presented in the Appendix.
5. Deploying and testing the model

5.1. Data collection and analytical methods


The model was tested with a self-evaluation survey. The survey was sent personally by e-mail to 1,231 upper management,
middle management and expert respondents in Finnish organizations. Both managers and experts were included in
the target population. It was deemed important that the views of both information providers and information users
were represented, since there were critical variables regarding both measurement and management practices.
Managerial level was not pre-defined. Different organizational contexts (public and private) were represented in order
to improve understanding of how widely the model can be applied. Organizational sizes varied from 50 employees to more
than 1,000 employees. Of these recipients, 271 replied to the survey, giving a response rate of 22 percent. The
respondent profile is shown in Table III.
Multiple respondents from one organization were not combined when analyzing the results. There was no missing
data in the respondent profiling questions as they were mandatory. Missing data in a critical variable was
interpreted as a Level 1 answer. No non-response bias was detected between early respondents (those answering after the
first mail) and late respondents (those answering after one or two reminders). Harman's one-factor test was carried
out for all the critical variables to assess the extent of common method bias. The first factor explains 33 percent of the
total variance. Overall, the results support the absence of significant single-source bias (Podsakoff and Organ, 1986).
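As an illustration, such a single-factor check could be run as sketched below. The sketch uses an unrotated principal component analysis as an approximation of Harman's test; the paper does not state which software was used, and the data layout (`responses`, one row per respondent, one column per critical variable) is an assumption.

```python
# Illustrative approximation of Harman's one-factor test with an unrotated
# principal component analysis; layout of `responses` is assumed.
import pandas as pd
from sklearn.decomposition import PCA

def first_factor_share(responses: pd.DataFrame) -> float:
    """Share of total variance explained by the first unrotated component."""
    # Standardize items so each variable contributes equally to the variance.
    standardized = (responses - responses.mean()) / responses.std(ddof=0)
    pca = PCA(n_components=min(standardized.shape))
    pca.fit(standardized)
    return pca.explained_variance_ratio_[0]

# A first-factor share well below 0.5 (here: 0.33) argues against a
# dominant single-source factor.
```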
Two main phases are identifiable in the testing: maturity profile creation and testing of the whole model. Basic
statistical numbers such as the standard deviation, the average PM score and personnel satisfaction rates were used as a
basis for maturity profiling to retain the relevance and easy understandability of the model for practitioners. The PM score
was calculated as a combined score from all the variables. The response options to each variable were scored from 0 to 3
points (see assessment criteria in Table II; Level 1=0 points and Level 4=3 points). The total score of each respondent
was compared to the theoretical maximum. Satisfaction was elicited separately for both PMS and PI usage and was
rated from 1 to 4 in both questions (1=extremely dissatisfied, 4=extremely satisfied). Variances between the defined
maturity profiles were investigated with analysis of variance (ANOVA), which is widely used in the behavioral sciences
and can be used to identify the statistical significance of differences between the averages of different groups (Bryman and Bell,
2010).
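A minimal sketch of this scoring scheme follows. The column names, the number of scored variables and the use of a simple average for combining the two satisfaction questions are assumptions for illustration; the paper reports only that 0-3 points per variable were summed and compared to the theoretical maximum.

```python
import pandas as pd

N_VARIABLES = 22  # assumption: stands in for the actual number of scored variables

def pm_score(row: pd.Series) -> float:
    """PM score as a percentage of the theoretical maximum.

    Each response is scored 0-3 (Level 1 = 0 points, Level 4 = 3 points);
    missing answers were interpreted as Level 1, i.e. 0 points.
    """
    points = row.filter(like="var_").fillna(0).sum()  # columns var_1 ... var_N (assumed)
    return 100 * points / (3 * N_VARIABLES)

def combined_satisfaction(row: pd.Series) -> float:
    """Combine the two 1-4 satisfaction questions (PMS and PI usage)."""
    return (row["sat_pms"] + row["sat_pi_usage"]) / 2  # simple average (assumed)
```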
5.2. Creation of maturity profiles
The sophistication and scope evaluated with the PM score were combined with the personnel's satisfaction with the prevailing
practices. The satisfaction rate complements the PM score by taking into account the different kinds of needs organizations
have. Without a satisfaction rate, the misleading assumption that more mature practices and technical solutions are
always better might easily have been made. Considering the cost-effectiveness of PM, this might not always be the
case (Ittner et al., 2003). Therefore the maturity profiles were created using two perspectives: PM score and
satisfaction rates. This also means that, unlike in traditional maturity models, the profiles created for this model cannot be
ranked in order. They are built to profile, not to rank, organizations according to their PM maturity.
In order to create different maturity profiles, the PM scores of organizations were divided into four different groups.
The average PM score was calculated as an average of all the different respondents' scores and was 50.2 percent of the
maximum points. As the PM score distribution closely follows a normal distribution, the standard deviation was calculated to
allow more precise positioning of organizations. The second perspective in profile creation was personnel satisfaction.
Four different satisfaction groups, ranging from extremely dissatisfied to extremely satisfied, were formed by combining
the two satisfaction questions (both rated from 1 to 4). A summary of the score thresholds for the different tiers of the PM
score and satisfaction rate is presented in Table IV.
With the help of the different satisfaction and PM score groups, a 4×4 matrix was constructed to create maturity profiles.
Although segmentation into different tiers would have allowed the creation of 16 different maturity profiles, a decision to
combine groups was made in order to maintain relevance and comprehensibility for practitioners. The combination was
made by dividing respondents into satisfied and dissatisfied respondents and separating these groups into high and
low PM score groups (above and below average). Thus four different maturity profiles were formed. These profiles
and the number of respondents in each are shown in Figure 2.
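The resulting assignment logic can be sketched as follows. The satisfaction cut-off and the numbering of the profiles within each PM score level are assumptions for illustration (the exact tier limits are in Table IV and the profile layout in Figure 2); the pairing of Profiles 1 and 3 at the below-average level and Profiles 2 and 4 at the above-average level follows Section 5.4.

```python
AVG_PM_SCORE = 50.2  # sample average, as a percentage of maximum points

def assign_profile(score_pct: float, satisfaction: float) -> int:
    """Map a respondent to one of the four maturity profiles.

    Profiles 1 and 3 share a below-average PM score and Profiles 2 and 4 an
    above-average one (see Section 5.4); which profile of each pair denotes
    the satisfied group is assumed here.
    """
    satisfied = satisfaction >= 2.5           # assumed midpoint of the 1-4 scale
    above_average = score_pct >= AVG_PM_SCORE
    if above_average:
        return 2 if satisfied else 4
    return 1 if satisfied else 3
```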
5.3. Testing of the model
Testing of the model was done from two different aspects, as recommended in design science: rigor and relevance
(Andriessen, 2004). Academic rigor was tested with reliability and validity, and managerial relevance was tested with
practicality and relevance. Reliability was tested by calculating Cronbach's α for each of the perspectives chosen for
the model to ensure internal consistency. All the αs rise above the minimum requirement of 0.5 and more than half
exceed the limit of 0.7, which is generally considered good for model testing. All results from this test are shown
in Table V.
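For reference, Cronbach's α can be computed per perspective as sketched below, assuming `items` is a respondents-by-items table of the scored variables belonging to one perspective; the layout is an assumption, the formula is the standard one.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the items of one perspective.

    alpha = k/(k - 1) * (1 - sum of item variances / variance of total score),
    where k is the number of items.
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```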
The validity of the model was estimated with qualitative data. In all, 63 respondents (23 percent) commented on the survey
instrument itself in an open-ended question. In these comments the scope of the model was appreciated. Examples
of these comments were "Workable survey! All the right things were asked." and "Gives a good overview and is rather
extensive and many-sided." In the researchers' subjective interpretation, most of the comments (over 50 percent)
were clearly positive. Of the rest of the comments, 25 percent were slightly negatively oriented and about 25 percent
were neutral in nature. However, some of the comments noted only relevance/practicality aspects and for this reason
the measure of validity is not without its flaws.
Relevance was measured with the answering percentage for personal e-mails, which was 22 percent. This answering
percentage is comparable with earlier studies (e.g. Marx et al., 2012) and can be considered good in current times.
Practicality was tested with the median answering time, which was 11 minutes. This is close to the initial target of 15
minutes, which was deemed suitable for a quick yet comprehensive analysis. One organization even
implemented this model to measure its PM development on a regular basis. A summary of all these tests is
presented in Table VI.
A large share of the respondents, 50 percent, also answered the open-ended questions about the reasons for
satisfaction/dissatisfaction with performance measurement and measurement information use. This high percentage
reveals the reflection this model can generate. In many comments it was mentioned that just completing the survey
gave many new ideas for PM development. Based on all of the assessments in this section, the presented model can
be seen to meet its target of being both academically rigorous and managerially relevant.
5.4. Empirical results: variables creating personnel satisfaction in different profiles

After dividing the organizations into the four different profiles, analysis of variance was carried out between suitable
profiles to find out the differences in the critical variables creating personnel satisfaction. Differences between Profiles 1
and 3 and between Profiles 2 and 4 were analyzed, because between those profiles the personnel satisfaction rate changes
while the overall PM score stays at the same level. Between the other maturity profiles the PM score changes, and for this
reason there can be significant differences in every critical variable. It should be noted that these maturity profiles are not
sequential (i.e. an organization does not have to progress from Profile 1 through 4 in order). The results of the analysis of
variance are shown in Table VII. The numbers of the variables correspond to the question numbers of the survey presented
in the Appendix.
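A sketch of this comparison is given below: a one-way ANOVA per critical variable for each profile pair whose PM score level is the same but whose satisfaction differs. The DataFrame layout and column names are assumptions for illustration.

```python
import pandas as pd
from scipy import stats

def compare_profiles(data: pd.DataFrame, profile_a: int, profile_b: int,
                     variables: list[str]) -> pd.DataFrame:
    """One-way ANOVA per critical variable between two maturity profiles."""
    rows = []
    for var in variables:
        group_a = data.loc[data["profile"] == profile_a, var].dropna()
        group_b = data.loc[data["profile"] == profile_b, var].dropna()
        f_stat, p_value = stats.f_oneway(group_a, group_b)
        rows.append({"variable": var, "F": f_stat, "p": p_value})
    return pd.DataFrame(rows)

# Variables significant (e.g. p < 0.05) in both the 1 vs 3 and the 2 vs 4
# comparison are those linked to satisfaction at both PM score levels.
```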
In view of the survey results and the analysis of variance, it can be seen that the variables "management support to PM",
"availability of measurement information" and "analysis of the current situation in strategic planning" are the only ones
that have statistically different averages both between maturity Profiles 1 and 3 and between Profiles 2 and 4. Thus these
variables seem to create satisfaction (or their absence creates dissatisfaction) at both PM score levels (below and above
average). Management support is a commonly known factor crucial in ensuring sufficient resources for, and the importance
of, constant PM development in the organization (Bourne et al., 2002). Measurement information availability can be
linked especially to the discussion on IS and PM (Marchand and Raymond, 2008). It clearly seems that much more
should be done to improve PI reporting and to offer easier and more centralized access to PI. The lack of analysis of the
current situation in strategic planning may characterize the ex-post nature of PI usage. However, it can also be a
manifestation of an inaccurately working PMS. If the PMS does not create usable information, the usage may be
restricted. However, this cannot be verified without further research.
A comparison of Profiles 2 and 4 shows which variables create satisfaction when the PM score is above the average
level. The most noticeable observation is the importance of the IS supporting performance measurement perspective.
Deficient IS seem to be a key reason for dissatisfaction at a high PM score level. This may mean that when the
basic requirements of PM are satisfied, employees start to require more from IS: reports should be automated and
data should be stored in one place. Other variables having a significant difference between these profiles are
"reliability of measurement information" and "competence management and learning promotion". It can be interpreted
that measurement information reliability becomes an issue when the scope and detail of measurement systems
increase. For example, different units of large organizations may have inconsistent registration practices
(Jääskeläinen and Sillanpää, 2013). Intellectual capital appears still to be a relatively new aspect of PM, which seems
to be best captured by the organizations in Profile 4. The importance of intellectual capital (e.g. Kaplan and Norton,
2004) and of IS in automating measurement systems (e.g. Marr and Neely, 2001) reflects well the changes currently
going on in PM development.
A comparison of Profiles 1 and 3 shows which variables create satisfaction when the PM score is below the average
level. Definition of measurement specifications is a basic task in PM development which has been known for a long
time (e.g. Neely et al., 1995). Measurement specifications ensure that every measure has clear definitions,
purposes and responsibilities. In light of this observation, it can be argued that the definition of measurement
specifications is the first aspect to be improved when creating more satisfaction with performance measurement.
The other observation characterizing the differences between Profiles 1 and 3 is more difficult to interpret. It seems
that at the basic level of PM practices, satisfaction is increased when measurement is used in setting strategic
targets. However, this is not a distinguishing factor between Profiles 2 and 4, as, according to the results, these
profiles have already advanced in this aspect.
6. Conclusions

This paper presented a new model and evaluation tool for assessing the maturity of PM. The model presented in this
study positions itself between the highly complex and extensive survey instruments and the pragmatic models
presented by consultants. The novelty value of the model relates to the combination of variables extracted from
the existing literature, written criteria in the maturity levels, and maturity profiles created from empirical data and linked
to the variables creating personnel satisfaction. Furthermore, the initial experiences and respondent comments indicate
that the model can be applied in many different organizational contexts.
A key starting point for this research was the lack of research regarding PI usage. This was also presumed to be a
practical challenge. However, it seems that performance measurement practices lag behind the academic
discussion. The results from the testing of the model indicate that there are still challenges in the basic technical
measurement aspects. For example, measurement specifications are not properly defined. IS also cause many
difficulties for performance measurement, for example the limited availability of information. The implementation of
performance measurement is complicated by limited managerial support. The results also reveal that PI does not
support strategic planning.
The academic contribution of this paper can be assessed from the perspective of the research gaps identified. First,
the model proposed is one of the first to utilize extensive survey data and quantitative methods in the
construction of distinctive maturity profiles. It provides an overview of the prevailing PM practices, which can help to
fill the gap between research and practice. Second, the resulting model is argued to stand out from the existing
literature by providing the balance between rigor and relevance desired in design science research. While the
empirical tests suggest a reasonable benefit-burden ratio, the transparent research process should also demonstrate
the criteria of academic rigor.
For practitioners, this model offers an easy-to-implement tool that can be used in self-assessment. The model has
been shown to have good problem-solving power and to work well as a basis for development. In addition, the
empirical data collected for the model described in this paper offer practitioners an easy way to position their own
organization in relation to other organizations. Testing of the model also yielded specific variables to concentrate on
when better personnel satisfaction with PM is sought.
Like all empirical research, this paper also has its limitations. The model itself aims to be widely applicable, but its
testing was limited. First of all, the respondent group used in this paper was limited to Finland, and is therefore
country specific. However, different industries were represented, which improves the external validity of the results.
Also, the selection of the organizations to test the model might be biased, as it is based on a non-random sample, and
the number of respondents per organization could have been larger to further reduce subjectivity. The presented
model and survey tool can also be further improved. For example, satisfaction could have been surveyed at a more exact
level (e.g. on a six-step scale in both questions) to increase the sensitivity of this measure.
More specific guidance for organizations on how to effectively apply the results of the model in development work
could also have been given; this would have required more research and case-specific studies. Second, to further
improve the relevance of the model, industry-specific parts should be developed. This way the core of the model
would remain the same but the particularities of different fields of business would be captured in full.

References
1.
Aho, M. (2012), "What is your PMI? A model for assessing the maturity of performance management in
organizations", PMA 2012 Conference, Cambridge, UK, July 11-13.

2.
Andriessen, D. (2004), "Reconciling the rigor-relevance dilemma in intellectual capital research", The
Learning Organization, Vol. 11 Nos 4/5, pp. 393-401.

3.
Argyris, C. and Schön, D. (1978), Organizational Learning: A Theory of Action Perspective, Addison
Wesley, Reading, MA.

4.
Balanced Scorecard Institute (2010), "The strategic management maturity model™", available
at: www.balancedscorecard.org/Portals/0/PDF/BSCIStrategicManagementMaturityModel.pdf (accessed
June 25, 2012).

5.
Bititci, U., Garengo, P. and Ates, A. (2012), "Towards a maturity model for performance and management",
PMA 2012 Conference, Cambridge, UK, July 11-13.

6.
Bititci, U.S., Ackermann, F., Ates, A., Davies, J., Garengo, P., Gibb, S., MacBryde, J., Mackay,
D., Maguire, C. and Van Der Meer, R. (2011), "Managerial processes: business processes that sustain
performance", International Journal of Operations & Production Management, Vol. 31 No. 8, pp. 851-891.

7.
Bititci, U.S., Carrie, A.S. and McDevitt, L. (1997), "Integrated performance measurement systems: an audit
and development guide", The TQM Magazine, Vol. 9 No. 1, pp. 46-53.

8.
Bourne, M., Kennerley, M. and Franco-Santos, M. (2005), "Managing through measures: a study of impact
on performance", Journal of Manufacturing Technology Management, Vol. 16 No. 4, pp. 373-395.

9.
Bourne, M., Neely, A., Platts, K. and Mills, J. (2002), "The success and failure of performance
measurement initiatives: perceptions of participating managers", International Journal of Operations &
Production Management, Vol. 22 No. 11, pp. 1288-1310.

10.
Brudan, A. (2009), "Assessing organizational performance management capability: the performance
management maturity model", available at: www.smartkpis.com/blog/tag/performance-management-maturity-model/ (accessed June 25, 2012).

11.
Bryman, A. and Bell, E. (2007), Business Research Methods, Oxford University Press, New York, NY.

12.
Cocca, P. and Alberti, M. (2010), "A framework to assess performance measurement systems in
SMEs", International Journal of Productivity and Performance Management, Vol. 59 No. 2, pp. 186-200.

13.
de Bruin, T., Rosemann, M., Freeze, R. and Kulkarni, U. (2005), "Understanding the main phases of
developing a maturity assessment model", Proceedings of the 16th Australasian Conference on Information
Systems (ACIS), Sydney, November 30-December 2.

14.
de Waal, A., Kourtit, K. and Nijkamp, P. (2009), "The relationship between the level of completeness of a
strategic performance management system and perceived advantages and disadvantages", International
Journal of Operations & Production Management, Vol. 29 No. 12, pp. 1242-1265.

15.
Dresner, H. (2010), "About the performance culture maturity model™", available
at: https://sites.google.com/site/performanceculturesite/home/more-on-the-pcmm (accessed June 25, 2012).

16.
Emory, C. (1985), Business Research Methods, 3rd ed., The Irwin Series in Information and Decision
Sciences, Irwin, Homewood, IL.

17.
Evans, J.R. (2004), "An exploratory study of performance measurement systems and relationships with
performance results", Journal of Operations Management, Vol. 22 No. 3, pp. 219-232.

18.
Ferreira, A. and Otley, D. (2009), "The design and use of performance management systems: an extended
framework for analysis", Management Accounting Research, Vol. 20 No. 4, pp. 263-282.

19.
Franco-Santos, M., Lucianetti, L. and Bourne, M. (2012), "Contemporary performance measurement
systems: a review of their consequences and a framework for research", Management Accounting
Research, Vol. 23 No. 2, pp. 79-119.

20.
Garengo, P., Biazzo, S. and Bititci, U.S. (2005), "Performance measurement systems in SMEs: a review for
a research agenda", International Journal of Management Reviews, Vol. 7 No. 1, pp. 25-47.

21.
Gelderman, M. (1998), "The relation between user satisfaction, usage of information systems and
performance", Information & Management, Vol. 34 No. 1, pp. 11-18.

22.
Gibson, C.F. and Nolan, R.L. (1974), "Managing the four stages of EDP growth", Harvard Business Review,
Vol. 52 No. 1, pp. 76-88.

23.
Hatry, H.P. (2006), Performance Measurement: Getting Results, 2nd ed., Urban Institute Press, Washington,
DC.

24.
Homburg, C., Artz, M. and Wieseke, J. (2012), "Marketing performance measurement systems: does
comprehensiveness really improve performance?", Journal of Marketing, Vol. 76 No. 3, pp. 56-77.

25.
Ittner, C.D., Larcker, D.F. and Randall, T. (2003), "Performance implications of strategic performance
measurement in financial services firms", Accounting, Organizations and Society, Vol. 28 No. 7, pp. 715-741.

26.
Jääskeläinen, A. and Sillanpää, V. (2013), "Overcoming challenges in the implementation of performance
measurement: case studies in public welfare services", International Journal of Public Sector Management,
Vol. 26 No. 6, pp. 440-454.

27.
Kaplan, R.S. and Norton, D.P. (2004), "Measuring the strategic readiness of intangible assets", Harvard
Business Review, Vol. 82 No. 2, pp. 52-63.

28.
Kennerley, M. and Neely, A. (2002), "A framework of the factors affecting the evolution of performance
measurement systems", International Journal of Operations & Production Management, Vol. 22 No. 11,
pp. 1222-1245.

29.
Lääts, K., Haldma, T. and Möller, K. (2011), "Performance measurement patterns in service companies: an
empirical study on Estonian service companies", Baltic Journal of Management, Vol. 6 No. 3, pp. 357-377.

30.
Maier, A.M., Eckert, C.M. and Clarkson, J.P. (2006), "Identifying requirements for communication support: a
maturity grid-inspired approach", Expert Systems with Applications, Vol. 31 No. 4, pp. 663-672.

31.
Maier, A.M., Moultrie, J. and Clarkson, P. (2012), "Assessing organizational capabilities: reviewing and
guiding the development of maturity grids", IEEE Transactions on Engineering Management, Vol. 59 No. 1,
pp. 138-159.

32.
Malmi, T. and Brown, D.A. (2008), "Management control systems as a package: opportunities, challenges
and research directions", Management Accounting Research, Vol. 19 No. 4, pp. 287-300.

33.
Marchand, M. and Raymond, L. (2008), "Researching performance measurement systems: an information
systems perspective", International Journal of Operations & Production Management, Vol. 28 No. 7,
pp. 663-686.

34.
Marr, B. and Neely, A. (2001), "Organisational performance measurement in the emerging digital
age", International Journal of Business Performance Management, Vol. 3 No. 2, pp. 191-215.

35.
Marx, F., Wortmann, F. and Mayer, J.H. (2012), "A maturity model for management control
systems", Business & Information Systems Engineering, Vol. 4 No. 4, pp. 193-207.

36.
Matta, K.F. (1989), "A goal-oriented productivity index for manufacturing systems", International Journal of
Operations & Production Management, Vol. 9 No. 4, pp. 66-76.

37.
Najmi, M., Rigas, J. and Fan, I. (2005), "A framework to review performance measurement
systems", Business Process Management Journal, Vol. 11 No. 2, pp. 109-122.

38.
Neely, A., Gregory, M. and Platts, K. (1995), "Performance measurement system design: a literature review
and research agenda", International Journal of Operations & Production Management, Vol. 15 No. 4,
pp. 80-116.

39.
Neely, A., Mills, J., Platts, K., Richards, H., Gregory, M., Bourne, M. and Kennerley, M. (2000),
"Performance measurement system design: developing and testing a process-based
approach", International Journal of Operations & Production Management, Vol. 20 No. 10, pp. 1119-1145.

40.
Neely, A., Richards, H., Mills, J., Platts, K. and Bourne, M. (1997), "Designing performance measures: a
structured approach", International Journal of Operations & Production Management, Vol. 17 No. 11,
pp. 1131-1152.

41.
Nudurupati, S.S., Bititci, U.S., Kumar, V. and Chan, F.T. (2011), "State of the art literature review on
performance measurement", Computers & Industrial Engineering, Vol. 60 No. 2, pp. 279-290.

42.
Otley, D.T. and Berry, A.J. (1980), "Control, organisation and accounting", Accounting, Organizations and
Society, Vol. 5 No. 2, pp. 231-244.

43.
Podsakoff, P.M. and Organ, D.W. (1986), "Self-reports in organizational research: problems and
prospects", Journal of Management, Vol. 12 No. 4, pp. 531-544.

44.
Royce, W. (1999), Software Project Management, Pearson Education, Addison Wesley, Reading.

45.
Salleh, N.A.M., Jusoh, R. and Isa, C.R. (2010), "Relationship between information systems sophistication
and performance measurement", Industrial Management & Data Systems, Vol. 110 No. 7, pp. 993-1017.

46.
Schläfke, M., Silvi, R. and Möller, K. (2013), "A framework for business analytics in performance
management", International Journal of Productivity and Performance Management, Vol. 62 No. 1, pp. 110-122.

47.
Speckbacher, G., Bischof, J. and Pfeiffer, T. (2003), "A descriptive analysis on the implementation of
balanced scorecards in German-speaking countries", Management Accounting Research, Vol. 14 No. 4,
pp. 361-388.

48.
Stivers, B.P., Covin, T.J., Hall, N.G. and Smalt, S. (1998), "How nonfinancial performance measures are
used", Management Accounting, Vol. 79 No. 4, pp. 44-48.

49.
Tangen, S. (2005), "Demystifying productivity and performance", International Journal of Productivity and
Performance Management, Vol. 54 No. 1, pp. 34-46.

50.
Tapinos, E., Dyson, R. and Meadows, M. (2005), "The impact of performance measurement in strategic
planning", International Journal of Productivity and Performance Management, Vol. 54 Nos 5/6, pp. 370-384.

51.
Tung, A., Baird, K. and Schoch, H.P. (2011), "Factors influencing the effectiveness of performance
measurement systems", International Journal of Operations & Production Management, Vol. 31 No. 12,
pp. 1287-1310.

52.
Van Aken, E.M., Letens, G., Coleman, G.D., Farris, J. and Van Goubergen, D. (2005), "Assessing maturity
and effectiveness of enterprise performance measurement systems", International Journal of Productivity
and Performance Management, Vol. 54 Nos 5/6, pp. 400-418.

53.
Van Aken, J.E. (2007), "Design science and organization development interventions: aligning business and
humanistic values", The Journal of Applied Behavioral Science, Vol. 43 No. 1, pp. 67-88.

54.
Wettstein, T. and Kueng, P. (2002), "A maturity model for performance measurement systems", in Brebbia,
C. and Pascola, P. (Eds), Management Information Systems: GIS and Remote Sensing, WIT
Press, Southampton, pp. 113-122.

Appendix. Evaluation tool applying the model

There are four choices in each question. The first describes an undeveloped and the fourth a sophisticated
level of measurement practices. It is important to note that the top level is not always the most appropriate
level in each organization. You should choose the description which best illustrates the status in your organization.
When going up the evaluation scale, all the aspects described at the lower levels must be fulfilled. When there is
more than one criterion in the description, all the criteria must be fulfilled in order to reach the level in question. Be as
realistic as possible and use your overall impression of your workplace.
Background Information

Industry sector (public, private)

Organizational size (employee number)

Working experience in current organization (years)

Work description (director, manager, expert)

Relationship to performance information (information user, report creator)

In this survey, measurement is linked to all quantitative information (e.g. employee satisfaction survey results, lead
times, cost information) gathered from the organizational operations. More specifically, measurement information
refers to information supporting managerial needs.
A. Performance measurement practices
(1) Scope of measurement

Measurement is based solely on annual financial statements.

Measurement is limited to financial measures and top organizational level.

Measurement is linked to operative level and it includes some non-financial measures (e.g. employee
satisfaction survey).

Measurement reaches operative processes (e.g. customer satisfaction with the delivery times of individual
products) and has an optimal balance of financial and non-financial measures. The measures used are
linked to the needs of different stakeholders.

(2) Causal relationships between measurement objects

Linkages between measurement objects have not been analyzed.

Linkages between measurement objects are discussed.

Factors explaining the main measurement results are partially identified.

Linkages between measurement objects are analyzed and modeled (e.g. strategy map). There is a common
understanding in the organization regarding the factors that should be improved in order to affect the main
measurement results.

(3) Reliability of measurement information

Decision makers do not trust the measurement information.

There are several interpretations of the measurement information. Personnel do not trust the measurement
information.

There are different interpretations of some parts of the measurement information. Decision makers trust the
measurement information.

Measures provide mainly unambiguous information. Personnel trust the measurement information.

(4) Measures aligned with strategy

Strategic objectives are not taken into account in defining measures.

Strategic objectives are discussed in defining measures.

Measures are defined based on strategic objectives.

Measures are defined to provide proactive information supporting the reaching of strategic objectives.

(5) Definition of measurement specifications


Measurement specification means that each measure has a systematically and unambiguously defined purpose,
person responsible, formula, data source and measurement frequency.

Measurement specifications are not defined.

Measurement specifications have been discussed but they are not documented.

Measurement specifications are partially defined.

All the measures have specifications which are controlled.

(6) Process for reviewing and updating measures

New measures are not taken into use.

New measures are taken into use in a random manner.

New measures are taken into use when needed but the usefulness of the old measures is not evaluated.

There is a regular evaluation and development of measures. Old measures are discarded when necessary.

B. IS supporting performance measurement


(7) IS in measurement information gathering

Measurement information is gathered manually when needed.

Measurement information is gathered manually to a large extent. Only financial measurement information is
gathered automatically.

Most of the measurement information is gathered with IS systems which enable the provision of real-time
measurement information.

Measurement information is gathered automatically and stored centrally. The most important IS
communicate with each other.

(8) IS in reporting measurement information

Measurement information is not analyzed with IS.

The analysis and reporting of measurement information is carried out with office software (word processing,
spreadsheets) when needed.

Measurement information is analyzed and reported with simple, purpose-built tools such as spreadsheet
models and macros. Visualization is used in refining measurement information.

Measurement information is analyzed and reported with purpose-built programs. Planning and decision-making
are supported with the visualization of measurement information.

(9) Availability of measurement information

Measurement information may be available, but only a few know where to find it.

Measurement information is available in separate sources.

Measurement information is centrally available but it is difficult to obtain.

Measurement information is easily and centrally available.

(10a) How satisfied are you with the performance measurement practices and systems in your organization?

Very dissatisfied

Dissatisfied

Satisfied

Very satisfied

(10b) Why are you satisfied or dissatisfied?

Open-ended question

C. Communications and commitment

(11) Personnel commitment

Personnel regard measurement as an extra burden.

There is no widespread criticism of measurement among personnel.

Measurement is regarded as useful. The views of personnel are taken into account when developing
measurement.

The work community feels that measurement improves fairness. Personnel initiate measurement improvement
efforts.

(12) Management support

Measurement has no management support.

Top management supports measurement.

Managers regard measurement as important and employees are encouraged to use measurement.

Sufficient resources and education are provided to implement measurement.

(13) Communicating measurement information to the personnel


Communication refers to all activity aiming at the dissemination of measurement information to the personnel. This
includes both active (describing the results) and passive (provision of access to information) communication.

Measurement results are not passed on to the personnel.

Personnel obtain relevant measurement information in a random manner. Personnel do not know the targets
of measures related to them.

Personnel frequently obtain measurement information related to them. Supervisors know the
targets of the measures relevant to their area of responsibility.

Measurement results relevant to the personnel are communicated interactively. All personnel know the
measurement targets linked to them.

(14) Communicating measurement information to the most important stakeholders


In this question, measurement information refers to reports other than legally mandatory ones, such as financial statements.
The most important stakeholders refer, e.g. to owners, customers, political decision-makers and investors.

Measurement results are not communicated outside organizational boundaries.

Measurement results are communicated to key stakeholders at random.

Measurement results are regularly communicated to the key stakeholders but in a non-systematic way.

Measurement results are regularly communicated to the key stakeholders with a pre-determined reporting
template.

D. Planning and strategy


(15) Analysis of the current situation in strategic planning

Measurement information is not utilized in the analysis of the current situation.

Measurement information is acknowledged in the analysis of the current situation.

Measurement information adds value to the analysis of the current situation.

Current situation is analyzed systematically based on measurement information.

(16) Setting strategic targets

Strategic targets are set without measurement information.

Measurement results from the previous years are acknowledged in setting strategic targets.

Strategic targets are based on measurement information.

Measurement information is used both in setting strategic targets and in questioning earlier strategic
decisions.

(17) Defining action plans

Measures are not used in defining objects for development.

Measures are used in the identification of development objects (e.g. identification of bottlenecks in the
production process).

Measures are used to support the preparation of action plans (e.g. prioritizing development objects).

Definition and implementation of action plans are done systematically and mainly based on measurement
information (e.g. action plans are prioritized and controlled with the support of measurement information).

E. Leadership and management


(18) Resource allocation
Resources refer, e.g. to employees, working hours and monetary resources.

Resource allocation is not monitored with the support of measurement information.

Resource usage is supported with measurement information (e.g. personnel engaged in a certain project).

Resource sharing is supported with measurement information (e.g. decisions regarding personnel training).

Decisions on resource allocation (e.g. budgeting) are made based on measurement information.

(19) Competence management and learning promotion

Measures are not linked to personnel competence.

Measurement information is used to identify competencies (e.g. results of appraisal interviews).

Personnel competencies are constantly monitored (e.g. self-evaluations) and decisions supporting learning
are carried out based on measurement information.

Development targets are identified based on measurement information and personnel are provided with
individual development plans.

(20) Benchmarking

Measurement information cannot be used in benchmarking.

Measurement information is used in benchmarking internal units.

Measurement information is used in external benchmarking.

Measurement information is systematically used as a support for benchmarking.

(21) Scanning external environment

There is no measurement information from outside the home organization.

Measurement information is used in the analysis of customers (e.g. identifying sales potential by analyzing
the turnover of customers).

Measurement information is used in analyzing other external stakeholders (e.g. identification of market
potential of new products).

Measurement information is used as a basis of communication with key external stakeholders (e.g.
optimizing supply chain performance and customer value with measurement information).

(22) Rewarding and performance information

Rewarding is not linked to measurement information.

Rewarding is linked to organizational-level measurement information.

There is a clear linkage between rewarding principles and unit level measurement targets.

There is a clear linkage between rewarding principles and personal level measurement targets.

(23a) How satisfied are you with the usage of measurement information in your organization?

Very dissatisfied

Dissatisfied

Satisfied

Very satisfied

(23b) Why are you satisfied or dissatisfied?

Open-ended question
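
For readers who wish to experiment with the instrument, the sketch below shows one plausible way of condensing a completed questionnaire into dimension-level scores. It is a minimal illustration under stated assumptions, not the profiling method used in the study: it assumes the four statements of each item are coded 1-4 (lowest to highest maturity) and that a dimension score is the plain mean of its items; items 10 and 23 are satisfaction questions and are excluded.

from statistics import mean

# Dimension groupings as they appear in the questionnaire above.
# Items 10 and 23 (satisfaction questions) are excluded from scoring.
DIMENSIONS = {
    "A. Performance measurement practices": range(1, 7),
    "B. IS supporting performance measurement": range(7, 10),
    "C. Communications and commitment": range(11, 15),
    "D. Planning and strategy": range(15, 18),
    "E. Leadership and management": range(18, 23),
}

def dimension_scores(responses):
    # responses maps item number -> selected statement, coded 1 (first,
    # lowest maturity) to 4 (last, highest). The coding and the use of a
    # simple mean are illustrative assumptions, not the study's method.
    return {name: mean(responses[item] for item in items)
            for name, items in DIMENSIONS.items()}

# Hypothetical respondent.
example = {1: 3, 2: 2, 3: 3, 4: 3, 5: 2, 6: 2, 7: 2, 8: 3, 9: 2,
           11: 3, 12: 2, 13: 3, 14: 2, 15: 3, 16: 3, 17: 2,
           18: 2, 19: 2, 20: 2, 21: 2, 22: 3}

for name, score in dimension_scores(example).items():
    print(f"{name}: {score:.2f}")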

About the authors



Dr Aki Jääskeläinen works as a Research Fellow on the Performance Management Team at Tampere University of
Technology, Finland. His research interests focus on performance measurement and management, especially in
service operations. He has also participated in many development projects related to performance management in
Finnish organizations. Dr Aki Jääskeläinen is the corresponding author and can be contacted at:
aki.jaaskelainen@tut.fi
Juho-Matias Roitto (MSc) works as a Project Researcher on the Performance Management Team at Tampere University
of Technology, Finland. His special interest lies in the better utilization of existing performance information in
organizations.
Acknowledgments:
This study was conducted in a research project funded by the Finnish Work Environment Fund. The authors are
grateful to Dr Paula Kujansivu for her efforts at the beginning of the project in supporting the first ideas of this
paper.
