

International Journal of Training and Development 19:3


ISSN 1360-3736
doi: 10.1111/ijtd.12055

Delivering training strategies:


the balanced scorecard at work

Stefano Baraldi and Antonella Cifalinò

Aligning the value of training to organizational goals is an


emerging need in human resource management. This study,
aiming at expanding the research on training evaluation from a
strategic management perspective, examines whether the use of
the Balanced Scorecard approach can enable an effective deliv-
ery of training strategies, thus strengthening the link between
training and organizational goals. The research was based on
action research methodology. Researchers worked for about 12
months with three healthcare organizations. The research find-
ings indicate that the balanced scorecard: (1) allows visualiza-
tion of a clearly focused and internally consistent map of
cause-and-effect relationships, turning the functional training
efforts into strategic results; (2) effectively supports the train-
ing function both in managing training processes and in deliv-
ering targeted organizational outcomes; (3) offers a specific set
of critical measures for evaluating the training function's performance; and (4) permits the fostering of a sound alignment
between training programme objectives and functional goals.
Various theoretical and practical implications are discussed.

Introduction
Over the last decades, the relevance of human resources (HRs) as an essential source of
business value has climbed up the management research agenda (e.g. Barney & Wright,
1998; Delery & Doty, 1996; Huselid & Becker, 2011; Mollick, 2012; Nyberg et al., 2014;
Quinn et al., 1996; Ulrich, 1989, 1997, 2005; Fulmer & Ployhart, 2014). Most of this
research has been guided by the resource-based view of the firm, arguing that sus-
tained competitive advantage derives from the resources and capabilities a firm con-
trols that are valuable, rare, imperfectly imitable and not substitutable (Barney, 1991;
Barney et al., 2001; Wright et al., 1994).

Stefano Baraldi, Full professor in Management Accounting, Università Cattolica del Sacro Cuore,
Milan, Italy. Email: stefano.baraldi@unicatt.it. Antonella Cifalinò, Adjunct professor in Management
Accounting, Università Cattolica del Sacro Cuore, Milan, Italy. Email: antonella.cifalino@unicatt.it

© 2015 John Wiley & Sons Ltd.

The diffusion of the resource-based perspective has made important contributions in
the field of human resource management (HRM) (Wright & McMahan, 1992; Wright
et al., 2001). The emphasis on people as strategic resources has contributed to the
interaction and convergence of strategy and HRM issues within the theoretical and
empirical research literature (Barney et al., 2001; Nyberg et al., 2014). On the one hand,
scholars have explained why and how organizations achieve their goals through the
use of HRM systems (e.g. Huselid et al., 2005; Jiang et al., 2012; Paul & Anantharaman,
2003; Storey, 1995; Wall & Wood, 2005; Wright et al., 2005). On the other hand, literature
reviews highlight a burgeoning body of research testing various organizational out-
comes associated with the use of HRM systems (e.g. Fulmer & Ployhart, 2014; Jiang
et al., 2012; Nyberg et al., 2014; Patel et al., 2013; Paul & Anantharaman, 2003; Wright
et al., 2001).
Despite these theoretical and empirical developments, important issues remain
regarding the mechanisms through which HRM systems contribute to organizational
effectiveness (Jiang et al., 2012; Prowse & Prowse, 2010). First, one current conceptual
trend concerns developing a more complete and comprehensive causal model linking
HRM systems with firm performance than is usually explained within the literature
(Huselid & Becker, 2011; Jiang et al., 2012; Nyberg et al., 2014; Wright et al., 2001).
Second, it remains unclear how HRM systems relate to different organizational out-
comes that range from very proximal (i.e. HR outcomes such as employee engagement
and commitment) to more distal (i.e. operational and financial outcomes) (Huselid &
Becker, 2011; Jiang et al., 2012; Paul & Anantharaman, 2003; Prowse & Prowse, 2010).
In an effort to bridge these gaps, some scholars have suggested the development of
multilevel conceptual constructs considering the impact of HRM systems at both indi-
vidual and collective levels of analysis (e.g. Fulmer & Ployhart, 2014; Nyberg et al.,
2014; Ployhart & Moliterno, 2011). According to Huselid and Becker (2011), this chal-
lenge would benefit from a fuller integration of the micro (focused on designing HRM
systems as well as individual employee responses to those systems) and macro
(focused on strategy formulation and implementation processes) domains of the HRM
literature. Put differently, a clearer articulation of the 'black box' between HRM and
firm performance requires a new emphasis on integrating strategy implementation as
the central mediating variable in the HR–firm performance relationship (Becker &
Huselid, 2006). Related to this point is Boswell's field research (2006), arguing that
strategy implementation requires development of a 'line of sight' among employees,
conceptualized as an employee's understanding of an organization's objectives and
how to contribute to those objectives.
More specific to training, a burgeoning body of research has analysed training
effectiveness (for reviews, see Aguinis & Kraiger, 2009; Salas et al., 2012; Salas &
Cannon-Bowers, 2001; Tannenbaum & Yukl, 1992). As suggested by Saks and
Burke-Smalley (2014), two streams of training research have addressed this topic,
respectively labelled as macro- and micro-training research. The macro-training
research can be found in the strategic human resource literature focusing on training
outcomes at the organizational level of analysis. Scholars rooted in this discipline have
developed theoretical and empirical research on the training–firm performance
relationship (e.g. Aguinis & Kraiger, 2009; Aragón-Sánchez et al., 2003; Becker et al., 2001;
Tharenou et al., 2007). The micro-training research can be found mostly in the
industrial/organizational psychology literature focusing on training outcomes at the
individual level of analysis. Scholars rooted in this literature have investigated transfer
of training issues in work organizations (for more recent reviews, see: Baldwin et al.,
2009; Blume et al., 2010; Burke & Hutchins, 2008; Grossman & Salas, 2011).
Despite these theoretical and empirical developments, significant research gaps
remain. First, the macro- and micro-training research streams are fragmented into two
independent, even though related, perspectives. As argued by Saks and Burke-Smalley
(2014, p. 105), 'because these two streams of research exist independently and involve
different levels of analysis, there has been little attempt to integrate them'. Second,
research regarding organizational-level benefits has not been as abundant as the lit-
erature on individual- and team-level benefits (Aguinis & Kraiger, 2009; Kozlowski

et al., 2000; Tharenou et al., 2007). Third, in spite of general support for a positive
relationship between various indicators of training provision and firm performance at
the organizational level of analysis, a research gap remains about the mechanisms that
might explain or mediate the training–firm performance relationship (Aguinis &
Kraiger, 2009; Aragón-Sánchez et al., 2003; Saks & Burke-Smalley, 2014; Tharenou et al.,
2007).
In response to these issues, scholars have advocated multilevel research to gain a
thorough insight into the integration of training evaluation at different levels of analysis
(e.g. Fulmer & Ployhart, 2014; Kozlowski et al., 2000; Ployhart & Moliterno, 2011; Saks
& Burke-Smalley, 2014; Tharenou et al., 2007). As such, a critical concern is about
investigating the processes that facilitate a smooth vertical transfer of training, that is to
say the processes by which learning outcomes at lower levels (e.g. individual-level
outcomes) aggregate to influence post-training outcomes at higher levels of analysis
(e.g. team- and organizational-level outcomes) (Aguinis & Kraiger, 2009; Chen &
Klimoski, 2007; Kozlowski et al., 2000; Saks & Burke-Smalley, 2014).
In the effort of bridging macro- and micro-training research, two strategic issues
stand out. First, before training, scholars suggest an expansion of the traditional com-
ponents of training needs analysis (e.g. job-task analysis and person analysis) and a
focus on strategic alignment at the organizational level of analysis (Salas et al., 2012).
According to Tannenbaum (2002), aligning training with the organization's strategic
direction involves examining key business objectives and challenges, identifying
the functions and jobs that most influence organizational success, clarifying the
most critical organizational competencies and establishing overall strategic learning
imperatives.
Second, after training, scholars advocate the need to move beyond traditional evalu-
ation methodologies in order to embed training interventions within organizational
systems (Glaveli & Karassavidou, 2011; Kirkpatrick & Kirkpatrick, 2006; Kraiger, 2002;
Kraiger et al., 2004; Pangarkar & Kirkwood, 2008). As suggested by Brinkerhoff (2006),
the focus needs to shift from the evaluation of single training programmes to the evaluation of
how effectively the organization uses training systems and resources and leverages
them into improved performance that in turn drives business results. As such, this shift
points to the need to align training efforts with an organization's performance
management systems (Aguinis, 2013; Brinkerhoff, 2006).
Despite the need to link training to the organization's strategic direction, research on
training evaluation has mostly focused on an operational approach, aimed at improving
the quality of the processes by which a training programme is decided, designed and
delivered (namely, 'to do things right'). In brief, conventional training evaluation 'can
justify the financial input made, serve for quality management purposes, provide
feedback to human resource departments and trainers for improving training courses
and help to make more accurate decisions about the continuation of training courses'
(Grohmann & Kauffeld, 2013, p. 137). Over the last 20 years, however, the rise of a new
generation of performance measurement systems, such as the Balanced Scorecard
(BSC; Kaplan & Norton, 1996), has paved the way for a strategic approach, aimed at
achieving the greatest impact on organizational performance from training initiatives
(namely, 'to do the right things'). According to this approach, it is crucial to clearly identify
and develop training initiatives leading to the availability of employee skills, talent
and know-how required to support the organizational strategy (Kaplan & Norton,
2004, p. 49).
Given this premise, the action research (AR) project presented in this paper aims at
finding out if the use of a strategic performance measurement (SPM) system, namely
the BSC, can actually enable an effective delivery of training strategies, thus linking
training with organizational goals. In our previous work on this subject (Baraldi &
Cifalinò, 2009), we referred to a specific set of training programmes (TPs) and
addressed the issues related to the feasibility of the operational and strategic
approaches to training evaluation as well as their mutual relationships. Our findings
confirmed that both of them are actionable for measuring the performance of TPs and
that their contextual use is mutually beneficial. Besides, we found that the BSC proved

to be very effective in that its structure adapted well to the specific features of TPs.
Given the exploratory nature of the study, however, we did not obtain any evidence
about how the BSC could improve the evaluation of the function that is usually in
charge of training.
This paper, therefore, aims at filling this gap by focusing on a different unit of
analysis: not single TPs, but the whole set of initiatives or actions taken to deliver a
training strategy. By assuming that training strategy is an essential part of the
organizational strategy and that the training function is accountable for the training
strategy, we address the following questions:
1. Is a strategic approach – namely, the use of the BSC – really feasible and helpful for
an organization in (a) defining and delivering its training strategy and (b) evaluating
the performance of the training function?
2. Is it possible to effectively link the BSC of the training function with the BSC of the
TPs?
The paper develops the theoretical framework, addresses research methodology
issues and presents the empirical findings. The last section discusses the findings and
provides some final remarks about implications for researchers and practitioners,
limitations of the study and directions for further research.

Theoretical framework
Training evaluation
Starting from the seminal work by Kirkpatrick (1959a, 1959b, 1960a, 1960b), the training
literature includes many contributions on training evaluation based on hierarchical
models (for overviews, see Aragón-Sánchez et al., 2003; Arthur et al., 2003; Salas et al.,
2012; Salas & Cannon-Bowers, 2001; Tannenbaum & Yukl, 1992). Basically, the hierar-
chical approach outlines four categories (levels) to measure training effectiveness: (1)
the reactions of trainees towards TPs, such as enjoyment, perceived usefulness and
perceived difficulty (Warr & Bunce, 1995); (2) their learning, in terms of knowledge,
skill and/or attitude; (3) the degree of learning transfer into on-the-job behavioural
practices; and (4) the impact of this transfer on organizational results. Despite criticisms
regarding the limitations of the hierarchical approach to training evaluation as a com-
prehensive model (Alliger & Janak, 1989; Alliger et al., 1997; Bates, 2004), it has enjoyed
a widespread and enduring popularity in both academic and professional contexts (e.g.
Alvarez et al., 2004; Kennedy et al., 2014; Salas & Cannon-Bowers, 2001).
The most recent criticisms of the hierarchical approach are twofold. A first criticism
concerns the depth of its application in both research and practice. On the one hand,
even though the model suggests an evaluation framework at both individual and
organizational levels of analysis, most empirical research has focused primarily, if not
exclusively, on the former (Aguinis & Kraiger, 2009; Kozlowski et al., 2000; Tharenou
et al., 2007). On the other hand, various studies have demonstrated that training pro-
fessionals have acknowledged the relevance of conducting behaviour-based and
results-based evaluations, yet organizations do not frequently conduct them (for an
overview, see Kennedy et al., 2014). This is alarming because it has been demonstrated
that even though organizations are most likely to evaluate trainee reactions and learn-
ing, only behaviour and results criteria are significantly related to higher rates of
training transfer (Saks & Burke, 2012).
A second criticism is the lack of a strategic perspective when the hierarchical
approach is applied for managerial purposes. Indeed, hierarchical models are primarily
focused on the measurement of training effectiveness, whereas the key issue is not only
measuring the results achieved by TPs, but also managing the continuous alignment
of individual and team behaviour with organizational goals (Brinkerhoff, 2006;
Kirkpatrick & Kirkpatrick, 2005; Kraiger, 2002; Kraiger et al., 2004; Robinson &
Robinson, 1998; Salas et al., 2012; Tannenbaum, 2002).

The BSC and training performance
Over the past two decades, performance measurement studies have evolved to encom-
pass a more strategic approach (e.g. Berry et al., 2009; Chua, 2007; Ittner et al., 2003;
Skærbæk & Tryggestad, 2010). Several authors, indeed, have recognized the need to pay
more attention to the neglected elements of strategy and operations and paved the way
for a new stream of literature about the so-called SPM systems (e.g. Bisbe &
Malagueño, 2012; Burney et al., 2009; Franco-Santos et al., 2012; Kolehmainen, 2010;
Micheli & Manzoni, 2010; Otley, 1999; Ferreira & Otley, 2009). SPM systems are aimed
at enabling the translation of strategy into a set of financial and non-financial measures
which are usually cascaded throughout the organization in order to enhance strategic
alignment (e.g. Kolehmainen, 2010). Therefore, their distinctive features are (e.g.
Chenhall, 2005; Franco-Santos et al., 2012; Gimbert et al., 2010) (1) the integration of
long-term strategy and operational goals; (2) the presence of performance indicators
covering different perspectives; (3) the provision of a sequence of goal-target-action
plans; and (4) the presence of explicit cause-and-effect linkages between goals and/or
performance indicators. Bento and White (2010) pointed out that the stream of litera-
ture related to SPM systems is likely to be in a mature phase, having progressed
through: (1) an early analysis of the technical issues arising in the selection of customized
performance measures (the 'how-to' phase); (2) an investigation of the variables
affecting the successful implementation of SPM systems (the 'what-else' phase); and (3)
an assessment of their impact on business performance (the 'so-what' phase).
Among SPM systems, Kaplan and Norton's BSC construct (Kaplan & Norton, 1996,
2001) is 'an essentially multidimensional approach to performance measurement and
management that is linked specifically to organizational strategy' (Otley, 1999, p. 374).
When correctly understood and properly implemented, the BSC is supposed to
improve organizational performance by translating strategy into specific objectives and
measures along its four perspectives (financial, customer, internal business process,
and learning and growth). Since its first appearance in 1992 (Kaplan & Norton, 1992),
the BSC has gained a widespread acceptance by practitioners, has reached a well-
entrenched position within the management accounting teaching literature and has
attracted growing interest among academic researchers (for overviews, see: Hoque,
2014; Soderberg et al., 2010). Even though its benefits and limits are still under scrutiny
(e.g. Agostino & Arnaboldi, 2011; Chenhall, 2005; De Geuser et al., 2009; Northcott &
Taulapapa, 2012; Wiersma, 2009), the BSC is regarded as one of the most notable
examples of SPM systems.
Aimed at creating organizational alignment and focus (Kaplan & Norton, 2001), the
BSC emphasizes the role of HRs as a strategic lever. On the one hand, many strategic
themes concerning human and organizational capital – skills, talents, knowledge,
culture, leadership, alignment, teamwork, etc. – are mapped and measured at an
organizational level within the learning and growth perspective of the BSC (Kaplan &
Norton, 2004).
On the other hand, much attention is paid to cascading the BSC throughout the
organization, pointing out the contribution of the HR function to strategy implemen-
tation. By arguing that the field of HR assessment contains more promises than deliv-
ery, some authors call for the adoption of HR scorecards in order to track how suitably
HR initiatives are adapted to the organizational strategy and how much they actually
contribute to its execution (Becker et al., 2001; Huselid & Becker, 2011; Huselid et al.,
2005; Ulrich, 1997; Walker & MacDonald, 2001). Within these frameworks, however,
training performance still remains scattered across the financial perspective (i.e. cost
per trainee hour, percentage of payroll spent in training), the customer perspective
(percentage and number of employees involved in training, employees' satisfaction
with TPs) and the internal processes perspective (number of courses, number of train-
ing days and programmes, percentage of new materials in TPs, percentage of delivered
training, percentage of training delivered on time). Therefore, the literature provides no
conclusive evidence about the further cascading of the BSC within the HR function,
whereas the feasibility of the BSC approach in evaluating TPs has been proved (Baraldi

& Cifalinò, 2009). As a consequence, we argue that this gap in the use of the BSC could
hinder the organization's capability to get its training strategy properly defined
and executed.

Research methodology
The study presented in this paper is based on AR methodology. Deeply rooted in
pragmatism, AR originated from the work of Kurt Lewin in the mid-1940s (Lewin,
1946, 1947), was later advocated by many prominent organizational scientists and has
proved its potential in improving the practical pertinence of research (Argyris, 1993).
AR questions the capability of positivist science to cope with the complexity and the
uniqueness of the empirical context in management studies and, thus, to deliver a
general theory. Positivist researchers usually show little inclination to carry out inter-
disciplinary studies and tend to leave the practitioners (and their problems) with no
role other than that of willing subjects of their inquiries (Fendt & Kaminska-Labbé,
2011).
Conversely, AR emphasizes the role of a researcher actively involved in some form
of 'engaged scholarship' with the situation being studied (Van de Ven & Johnson,
2006). Practitioners and researchers work together to solve practical problems and
generate scientific knowledge. Thus, the validation of the findings does not come from
the consistency of prediction and control, but from the knowledge that can be applied
and validated in action (Gummesson, 1991). Typically, AR studies lead scholars to get
involved in practical organizational work and take action to stimulate change (French
& Bell, 1984). The collaborative research for an applicable theory, its implementation
and the assessment of its impact on the organizational environment allows action
researchers to get back into the realm of academia with an enriched research agenda
and data set (French, 2009). On the one hand, this active role has been criticized
because of its potential lack of rigour. Closely involved with problem solving or change
processes like a consultant, the researcher may lose his/her independence because of
the need to satisfy clients' expectations. On the other hand, AR provides the opportunity
to obtain very rich insights otherwise not accessible because practitioners are
involved in things which actually matter to them (Eden & Huxham, 1996), and turn out
to be more willing to share their ideas and information.
Building on this pragmatist foundation, AR has taken an increasingly wide range of forms
inspired by different philosophical stances (Cassel & Johnson, 2006). In our study, we
have followed an approach labelled as 'inductive AR'. As argued by Cassel and Johnson
(2006, p. 793), theory in this particular approach 'is generated from the data and
concerns the development of thick descriptions of the patterns of subjective meanings
that organizational actors use to make sense of their worlds', rather than entailing the
testing of hypotheses deduced from a priori theory that causally explains what has been
observed by the action researcher. This usually occurs through interpretive
understanding by seeking to inductively access research participants' cultures in their
natural contexts. As organizational change remains a key issue in this form of AR, the
aim is to reflexively engender single- and double-loop learning through the involvement
of organizational participants (Argyris & Schön, 1989). The action researcher retains a
pivotal expert role, in providing advice about, and encouraging through processual
interventions, the changes that necessarily need to occur as an outcome of this
interpretive, yet diagnostic, process (Cassel & Johnson, 2006).
As such, inductive AR is not used to test a well-defined theory, but rather to elaborate
theory from practice (Coghlan, 2004; Jonsson & Lukka, 2006; Kaplan, 1998; Westbrook,
1995) when research aims at: (1) experimenting with how different theoretical frameworks
can actually be used in the same organization; (2) evaluating their feasibility; (3)
understanding the relationships between them; and (4) identifying and building a so-called
emergent theory.
Our study of training performance seems to fit this setting quite well. We have
already experimented with both operational and SPM frameworks and evaluated their
feasibility at a programme level (Baraldi & Cifalinò, 2009), but we have not found any

evidence about their use at a functional level. Moreover, we are interested in exploring
the relationships between the performance measurement processes which are focused
both on the TPs and on the training function.
Following the comprehensive definition of AR suggested by Susman and Evered
(1978), we structured our research process into five phases: (1) diagnosing (identifying
or defining a problem); (2) action planning (considering alternative courses of action for
solving a problem); (3) action taking (selecting a course of action); (4) evaluating
(studying the consequences of an action); and (5) specifying learning (identifying
general findings). Researchers and participants collaborated intensively in all the
phases.

Participants
The study was carried out in the healthcare sector for several reasons. First, healthcare
organizations (HCOs) are widely recognized as knowledge-intensive organizations
where training is an essential driver of competency development. In developed coun-
tries, including Italy, health professionals are required to attend programmes of con-
tinuing medical education. Tian et al. (2007) offer a systematic review of evaluation in
formal continuing education, pointing out that the effectiveness of TPs in the
healthcare sector is not systematically evaluated, particularly in terms of behavioural
changes in trainees and patient outcomes, and therefore underlining the importance of
developing further research on this subject. Likewise, McAlearney (2010) argues that,
in spite of the huge amount of money spent on training, HCOs are generally struggling
in their attempts to evaluate the return on their investments. Hence, the measurement
of training performance is becoming increasingly critical in healthcare.
Second, HCOs are experiencing unprecedented pressure on their performance due
to the challenging environment they must cope with, where growing competition,
constraining regulations and resource scarcity require them to state proper goals and to
assign clear responsibilities for results (Lapsley, 2004). Thus, a multi-dimensional
system like the BSC seems to fit the need to properly balance the relationships between
costs, quality, access and consumer choices (Inamdar et al., 2002). Indeed, the
literature has widely reported both the introduction of the BSC fundamentals in HCOs and
its application in different healthcare settings such as hospitals, hospital systems,
long-term care facilities, psychiatric centres and university departments (e.g. Chan &
Seaman, 2009; Chiang, 2009; Zelman et al., 2003).
Finally, AR is certainly well known to HCOs, and it has received an additional boost from
the widespread acceptance of evidence-based medicine principles (Bate, 2000).
As the theoretical framework and the research questions deal with the feasibility of
a strategic approach to training function evaluation, we selected HCOs which explicitly
recognized training as a strategic driver of performance. In particular, we focused on
the scientific institutes for research and care (SIRCs) whose mission institutionally
includes training in addition to research and clinical activity. We excluded public SIRCs
because the literature cites many problems when performance measurement systems
are implemented in public organizations (e.g. Bouckaert & Peters, 2002).
Relying on a systematic method of case selection, we contacted all the 13 private
SIRCs located in Lombardy, an administrative region with almost 10 million inhabit-
ants (more than 15 per cent of the overall Italian population). In the preparatory phase
(Labro & Tuomela, 2003), we sent each HCO a written research proposal detailing the
research objectives and methodology. Three HCOs accepted our proposal. We then
carried out 2-h structured interviews with each training manager in order to verify
whether: (1) the training function really played a strategic role for improving
organizational performance; and (2) the HCO could actually share the purposes of our
research and commit enough time to its delivery.

Research project
We worked for about 12 months with the three HCOs. The first phase of the project
(diagnosing) took place through a plenary start-up meeting in order to: (1) share the

project's aims and structure; (2) gain a good understanding of the HCOs' approach to
managing training (Jonsson & Lukka, 2006); and (3) define team composition – each
HCO joined the project with an active team made up of its training manager and three
assistants.
Four 1-day plenary meetings with participants were held to start up the action
planning phase, aimed at constructing a theoretically well-grounded solution idea.
During the meetings, a review of the literature was presented, and participants and
researchers designed a common frame of the training function BSC. Eventually, teams
recognized the logical validity of the BSC and planned how to apply it in each HCO.
In the next phase (action taking), the training function BSC was customized and
implemented in each HCO. Different pairs of researchers worked within each HCO's
team in order to mitigate the natural subjectivity of AR and reduce the personal bias in
the onsite work (Westbrook, 1995). To cope with reliability and validity issues, we
followed complementary research strategies (McKinnon, 1988): spending enough time
in the research setting, using multiple methods and multiple observations, and
maintaining appropriate social behaviour while in the setting. On the whole, researchers spent from
20 to 30 days in each HCO. Our multiple observations included the analysis of the
organizational priorities and the subsequent training needs, the definition of the train-
ing plan, the training delivery, the training evaluation, and the delivery of support
following the training. We used a participative approach in both defining and testing
specific performance measures for the training function. Each team kept a detailed field
diary (Jonsson & Lukka, 2006) and formalized its experience in a final report (Baxter &
Chua, 2008).
Finally, during a closing 1-day plenary conference, the teams presented their find-
ings, discussed the outcomes achieved, developed a comparative analysis of their
experiences and fostered the incremental development of theory (evaluating and speci-
fying learning).

Findings
The training scorecard
During the action planning phase, participants and researchers developed a common
definition of the training scorecard. Following the BSC model (Kaplan & Norton,
2004), each team initially drew a strategy map emphasizing: (1) the link between
organizational and training strategies (i.e. how can the training function actually
contribute to organizational success?); and (2) the few critical issues (so-called key
performance areas, KPAs) underlying the training strategy and their cause-and-
effect relationships (i.e. how can the training function succeed in delivering its
strategy?).
All teams then shared a common version of their strategy maps and recognized its
general validity (see Figure 1). According to Kaplan and Norton (2004, p. 9), a strategy
map is 'a visual representation of the cause-and-effect relationships among the
components of a strategy', that is to say the few critical issues (namely, the KPAs) in the
different (but interconnected) perspectives of evaluation. Moving from the top down,
the following perspectives were identified:
an organizational perspective (is training actually contributing to business results?);
a trainee perspective (is training actually improving trainee performance?);
an internal process perspective (is training flawlessly delivered to trainees?); and
a learning and growth perspective (are the drivers for future training performance
properly nurtured?).
For each perspective, participants and researchers identified the few critical issues
(KPAs) to be addressed for a successful implementation of their training strategy,
namely:
• Four KPAs were selected in the organizational perspective (i.e. clinical outcomes,
innovation, quality and productivity).

© 2015 John Wiley & Sons Ltd.
Figure 1: The functional strategy map.

• Three KPAs were identified in the trainee perspective (i.e. satisfaction, learning and
behaviour).
• Five KPAs were singled out in the internal process perspective (i.e. creating consensus,
planning, delivering, evaluating and managing work environment).
• Three KPAs were defined, as enabling factors, in the learning and growth perspective
(i.e. competencies, technologies and partnerships).
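The structure that all teams converged on can be sketched as a simple data structure. The sketch below is illustrative only (Python and every identifier in it are our assumptions, not tooling from the study); it encodes the four perspectives and their KPAs in the top-down order of the strategy map.

```python
# Illustrative sketch: the functional strategy map as an ordered mapping
# from each BSC perspective to its key performance areas (KPAs).
# Perspective and KPA names are taken from the text; the data structure
# itself is a hypothetical modelling choice.
strategy_map = {
    "organizational": ["clinical outcomes", "innovation", "quality", "productivity"],
    "trainee": ["satisfaction", "learning", "behaviour"],
    "internal process": ["creating consensus", "planning", "delivering",
                         "evaluating", "managing work environment"],
    "learning and growth": ["competencies", "technologies", "partnerships"],
}

# Cause-and-effect reading order: each perspective is enabled by the one
# listed after it (outcomes at the top, enablers at the bottom).
top_down = list(strategy_map)
```

Reading the mapping bottom-up reproduces the causal chain discussed below: learning and growth factors enable internal processes, which improve trainee performance, which drives organizational outcomes.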

Organizational perspective
As training functions are internal units providing shared services, they have to be
responsive to organizational strategies (Kaplan & Norton, 2001). Therefore, participants
and researchers felt the need to provide evidence, within this perspective, of the
organizational-level outcomes actually affected by TPs. HCOs identified four (notable)
types of business results concerning the organizational perspective:
• clinical outcomes: increasing the survival of emergency department patients,
reducing hospital infections, controlling patients' pain intensity, minimizing the
prevalence and incidence of surgical complications, etc.;
• innovation: introducing new medical or surgical procedures, defining new clinical
pathways, re-engineering critical administrative processes, etc.;
• quality: diffusing the treatment of hospital pain, applying clinical risk management
procedures, adopting clinical audits, etc.; and
• productivity: improving the productivity of both clinical and administrative
operations, etc.

Trainee perspective
Participants and researchers acknowledged that training is for improving trainee per-
formance to fit the desired organizational goals. Three key performance areas were
identified as critical issues for trainees:
• achieving a high level of satisfaction in terms of immediate reactions, such as
enjoyment, perceived usefulness and perceived difficulty of training (satisfaction);
• improving their knowledge, skills and attitudes (learning); and
• influencing their actual on-the-job behaviour, such as applying new medical and
surgical procedures, using hospital pain scores, attending clinical audits, etc.
(behaviour).

Internal process perspective


Teams took on board the idea that an effective delivery of TPs relies upon a limited
number of critical processes. Accordingly, in their view, the training BSC is meant to
measure performance primarily in terms of the quality, timeliness and costs achieved
in the following processes (Arthur et al., 2003; Salas & Cannon-Bowers, 2001;
Salas et al., 2012; Tannenbaum & Yukl, 1992):
• creating an adequate level of consensus among the main stakeholders of TPs
(Kraiger et al., 2004): often enough, this process is vital in order to achieve desired
outcomes at both the organizational and trainee levels; for example, in order to
introduce a clinical pathway, all the clinical directors should be strongly committed
to the TPs aimed at developing the competencies required to define, apply and
diffuse the pathway;
• planning: effective training programme planning includes (Brinkerhoff, 2006;
Kraiger et al., 2004; Salas et al., 2012): (1) convincingly targeting organizational
outcomes; (2) carefully evaluating behavioural and learning objectives; and (3)
consistently identifying target participants, learning contents and methodology;
• delivering the TPs according to plan;
• evaluating the TPs and fostering learning transfer, by identifying the barriers to be
removed and the positive reinforcements to be provided; and
• creating a work environment which encourages the use of targeted behaviour once
trainees come back to the workplace; in the teams' view, transfer climate (i.e.
situational cues and consequences that largely determine whether or not learned
competencies are applied in the workplace), supervisor and peer support, avail-
ability of resources, opportunities to apply new skills and abilities to the work-
place, and additional learning support (follow-up) are the most powerful drivers
for creating such a rich work environment (Grossman & Salas, 2011).

Learning and growth perspective


The learning and growth perspective deals with the enabling factors needed to achieve
excellent results in the previous perspectives. Three critical drivers were included by
participants in the learning and growth perspective:
• developing the competencies of the training staff (training competencies);
• implementing new technologies, such as distance learning and simulation learning
systems (training technologies); and
• creating strong partnerships with key external and internal partners, such as
medical associations, universities, the top management team and the clinical
directorates (partnerships).
The teams acknowledged that the critical issues in the four perspectives are linked by
cause-and-effect relationships (Kaplan & Norton, 2004, p. 32). Thus, organizational
outcomes can be achieved only if trainees improve their performance. The internal
perspective focuses on how to create and deliver valuable training processes for train-
ees. Finally, the learning and growth perspective shows how to support the internal
processes, thus providing the foundation of the training strategy. Aligning objectives in
these four perspectives is the key to defining and delivering a valuable and internally
consistent training strategy.
During the action-taking phase, participants and researchers measured the results
achieved in each KPA through a focused selection of key performance indicators
(KPIs). Each team selected a list of KPIs and tested them in its HCO by defining targets

and evaluating the variances between targets and actual results. Table 1 reports the KPIs
commonly selected by the HCOs involved in the project. For instance, as the KPA
'partnerships' included in the learning and growth perspective is about creating strong
partnerships with key external and internal partners (i.e. medical associations, univer-
sities and HCOs' managers), the following KPIs were selected:
• per cent of training hours delivered in partnership with medical associations;
• per cent of training hours delivered in partnership with universities; and
• per cent of managers involved in training planning.
Most KPIs were broken down by TPs and/or responsibility centres (see Table 1,
drill-down column).
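To illustrate how such a drill-down might work in practice, the sketch below (hypothetical Python with invented figures; the study does not specify any implementation) computes one of the partnership KPIs, the per cent of training hours delivered in partnership with universities, at the functional level and broken down by responsibility centre:

```python
# Hypothetical training-hour records: (responsibility_centre, partner, hours).
# All names and numbers are illustrative assumptions.
records = [
    ("surgery", "university", 40),
    ("surgery", None, 60),
    ("nursing", "university", 30),
    ("nursing", "medical association", 70),
]

def pct_university_hours(rows):
    """% of training hours delivered in partnership with universities."""
    total = sum(h for _, _, h in rows)
    uni = sum(h for _, p, h in rows if p == "university")
    return 100.0 * uni / total

# Functional-level KPI over all records, and the same KPI drilled down
# to each responsibility centre (RC).
functional_kpi = pct_university_hours(records)
by_rc = {rc: pct_university_hours([r for r in records if r[0] == rc])
         for rc in {r[0] for r in records}}
```

The same pattern applies to any KPI flagged with a drill-down in Table 1: the functional-level figure is the computation over all records, and each drill-down value is the same computation restricted to the records of one TP or RC.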
During the evaluating phase, the teams presented their findings, resulting from each
HCO's implementation, and discussed the feasibility and usefulness of their training
scorecards. First, it was recognized that the BSC can be applied effectively to the
training function. One participant observed that 'even if the design of the training
scorecard had required a conceptual effort and its testing turned out to be time-
consuming, it is now a lean and necessary management tool'.
Second, the BSC was acknowledged as a useful tool for defining and delivering the
HCOs' training strategy. In particular, the teams agreed that the definition of the
strategy map had helped them to:
• develop a better understanding of the training function's mission, balancing
different perspectives;
• translate the mission of the training function into clear and specific objectives
related to each KPA, thus focusing on the training priorities to be achieved; and
• align the different stakeholders involved in the training process (i.e. top manage-
ment, clinical directorates, training manager, training assistants, external partners
and so on).
Third, the BSC was deemed to be a helpful tool for evaluating the training function's
performance by:
• showing useful data supporting decision making in terms of resource allocation to
training investments; and
• offering a specific set of (a critical few) measures for evaluating whether and how the
training function is actually achieving its objectives, thus overcoming subjective
and informal judgements.

Functional versus programme scorecards


During the evaluating phase, participants and researchers also discussed how to link
the BSC of the training function with the BSC of the TPs (Baraldi & Cifalinò, 2009). By
comparing the structure of the functional scorecard with that of the programme score-
card (see Figure 2), the teams agreed that the use of the BSC could foster a sound
alignment between functional goals and programme objectives.
In particular, the following findings emerged. First, it was recognized that the basic
structure of the BSC (i.e. perspectives and KPAs) remains almost unchanged when
moving from the programme to the functional level of analysis. As shown by the
arrows in Figure 2, the training performances reported in the programme scorecard are
linked with those of the functional scorecard. Various linkages between the two score-
cards can be observed:
• The KPA 'organizational results' included in the performance perspective of the
programme scorecard is linked with one or more of the four KPAs in the
organizational perspective of the functional scorecard (i.e. clinical outcomes, inno-
vation, quality, productivity).
• There is a linkage between the two KPAs labelled 'behaviour', respectively represented
in the performance perspective of the programme scorecard and in the trainee
perspective of the functional scorecard.

Table 1: Examples of key performance indicators (drill-down, if any, in parentheses)

Organizational perspective
  Clinical outcomes
    • Patients' pain intensity (TPs; RCs)
    • % of surgical complications (TPs; RCs)
  Innovation
    • % of new procedures implemented (TPs; RCs)
  Quality
    • % of hospital pain scores applied (TPs; RCs)
    • % of clinical risk management tools applied (TPs; RCs)
    • % of clinical audits applied (TPs; RCs)
  Productivity
    • Number of medical, surgical and administrative procedures per time unit (TPs; RCs)

Trainee perspective
  Satisfaction
    • Immediate reaction score (TPs; RCs)
    • Average attendance rate (TPs; RCs)
    • Postponed perceived usefulness (TPs; RCs)
  Learning
    • Learning gain (pre/post test) (TPs; RCs)
    • % of people that received a pass grade (TPs; RCs)
    • Coach evaluation score (TPs; RCs)
    • Average number of CME (Continuing Medical Education) credits (TPs; RCs)
  Behaviour
    • % of nurses applying hospital pain scores (TPs; RCs)
    • % of medical doctors attending clinical audits (TPs; RCs)
    • Average number of drugs and medications to treat pain prescribed by each medical doctor (TPs; RCs)
    • Average number of new surgical procedures applied by each medical doctor (TPs; RCs)

Internal process perspective
  Creating consensus
    • Stakeholder commitment ratio (TPs; RCs)
    • % of training programmes included in management by objectives plans (TPs; RCs)
  Planning
    • % of responsibility centres receiving a training need assessment (RCs)
    • % of people receiving a training need assessment (RCs)
    • Compliance to standards of training planning (TPs)
    • % of training programmes realized without being previously included in the training plan (TPs)
  Delivering
    • % of trained people (TPs; RCs)
    • % of training programmes included in the training plan and not realized (TPs)
    • Compliance to standards of training delivery (TPs)
    • % of training programmes ended on time (TPs)
    • Average hourly cost of training per trainee (TPs; RCs)
    • Average training programme cost, net of revenues and charges of external participants (TPs)
  Evaluating
    • % of training programmes that received a behavioural evaluation (TPs; RCs)
    • % of training programmes that received an organizational performance evaluation (TPs; RCs)
  Managing work environment
    • % of training programmes receiving a reinforcement, if needed (TPs; RCs)

Learning and growth perspective
  Competencies
    • Number of training hours for the training staff (TPs)
    • Training staff turnover
    • Strategic skill coverage ratio for the training staff
  Technologies
    • Value of training technology investments
    • Extent of distance learning use (RCs)
    • Extent of simulation learning use (RCs)
  Partnerships
    • % of training hours delivered in partnership with medical associations
    • % of training hours delivered in partnership with universities
    • % of managers involved in training planning (TPs; RCs)

TP = training programme; RC = responsibility centre.

Figure 2: Functional-level versus programme-level training scorecards.

• Similarly, the KPA 'skills' in the learning perspective of the programme scorecard
is linked with the KPA 'learning' in the trainee perspective of the functional
scorecard.
• There is another linkage between the two KPAs labelled 'satisfaction', respectively
represented in the learning perspective of the programme scorecard and in
the trainee perspective of the functional scorecard.
• Finally, the KPAs in the delivery and design perspectives of the programme score-
card (i.e. time, quality, cost, consensus and training project) flow into the KPAs in
the internal process perspective of the functional scorecard (i.e. creating consensus,
planning, delivering, evaluating, managing work environment).
As a result, the programme scorecard turned out to be a sort of sub-system of the
functional scorecard, focusing on the strategic contribution of a given programme to
the successful implementation of the whole training strategy. Of course, the BSC at the
functional level copes with critical issues that go well beyond programme management
(for instance, developing innovation in terms of training competencies, technologies
and partnerships) and, not surprisingly, reflects a greater level of complexity. As such,
the learning and growth perspective of the functional scorecard is not linked with any
perspective of the programme scorecard.
Second, participants observed that many KPIs used to measure the performances
achieved over the various KPAs and perspectives of the functional scorecard can be
broken down by TPs (see Table 1, drill-down column). Put differently, as the same KPI
can be used to evaluate training at both the functional and programme levels of
analysis, many KPIs collected for evaluating TPs can be easily aggregated and adopted
in the functional scorecard. For instance, the immediate reaction score used to evaluate
the KPA 'satisfaction' in the trainee perspective of the functional scorecard can be
obtained by aggregating the specific immediate reaction score reported for each train-
ing programme.
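That aggregation can be sketched as follows (hypothetical figures; the study does not prescribe a weighting scheme, so a trainee-weighted mean is assumed here):

```python
# Hypothetical programme-level results:
# (programme, number_of_trainees, immediate_reaction_score).
# Programme names and all values are illustrative assumptions.
programmes = [
    ("pain management", 50, 4.2),
    ("clinical audit", 30, 3.8),
    ("new surgical procedure", 20, 4.5),
]

# Functional-level KPI: immediate reaction score aggregated across
# programmes, weighted by the number of trainees in each.
total_trainees = sum(n for _, n, _ in programmes)
functional_score = sum(n * s for _, n, s in programmes) / total_trainees
```

An unweighted mean of programme scores is an alternative design choice; weighting by trainees simply prevents a small programme from moving the functional-level figure as much as a large one.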
Finally, participants agreed that, to be effective, the BSC should focus just on a
selection of TPs that may flow into the functional scorecard. Many selection criteria

were discussed. From a quantitative point of view, participants agreed to focus on those
programmes that are most time-consuming (in terms of number of training hours) and
costly (in terms of external teaching costs, organizational costs and opportunity costs).
From a qualitative point of view, participants agreed to prioritize the most strategic
programmes (i.e. addressed to change agents, sponsored by top management commit-
ment, aimed at achieving breakthrough and ambitious results) and/or innovative ones
(i.e. programmes relying on a new faculty, using new training methods or technologies,
developing new subjects, etc.).
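Under these criteria, the selection step might be sketched as follows (thresholds, flags and programme names are illustrative assumptions, not values from the study):

```python
# Hypothetical programme descriptors combining the quantitative
# (hours, cost) and qualitative (strategic, innovative) criteria.
programmes = [
    {"name": "clinical pathway", "hours": 120, "cost": 15000,
     "strategic": True,  "innovative": False},
    {"name": "office software", "hours": 10,  "cost": 800,
     "strategic": False, "innovative": False},
    {"name": "simulation lab",  "hours": 25,  "cost": 5000,
     "strategic": False, "innovative": True},
]

def selected(p, min_hours=100, min_cost=10000):
    """Keep a TP if it is quantitatively heavy or qualitatively strategic/innovative."""
    quantitative = p["hours"] >= min_hours or p["cost"] >= min_cost
    qualitative = p["strategic"] or p["innovative"]
    return quantitative or qualitative

# TPs whose results flow into the functional scorecard.
scorecard_tps = [p["name"] for p in programmes if selected(p)]
```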

Discussion and conclusions


The findings of this study extend the research on training evaluation according to a
strategic management perspective (Brinkerhoff, 2006; Kraiger, 2002; Kraiger et al., 2004;
Salas et al., 2012; Tannenbaum, 2002). To our knowledge, this is the first field study
arguing that the BSC is a feasible and actionable approach to evaluating the contribu-
tion of the training function to the achievement of organizational goals from a strategic
management perspective. Indeed, the research findings indicate that the BSC makes it
possible to evaluate whether and how a training function is actually performing in terms
of both defining and executing its strategy, thus strengthening the link between training
and organizational goals. In particular, four findings can be discussed as follows.
First, our findings show that the BSC approach allows us to visualize a functional
training strategy, that is to say a clearly focused and internally consistent map of cause-
and-effect relationships turning the functional training efforts into strategic results.
This study indicates that the functional scorecard describes in a meaningful and action-
able way how desired outcomes from the organizational and trainee perspectives
depend on the outstanding performance achieved in delivering a limited number of
critical internal processes that, in turn, are enabled by a limited set of learning and
growth factors. As such, it could be argued that our findings expand the existing
literature in two ways. On the one hand, they offer a feasible model for the design and
delivery of training strategies, thus indicating how to put into practice the principle of
strategic alignment of training (Salas et al., 2012; Tannenbaum, 2002). On the other hand,
they offer a focused and internally consistent view that clarifies, from a strategic
perspective, how to facilitate a smooth vertical transfer of training (Aguinis & Kraiger,
2009; Chen & Klimoski, 2007; Kozlowski et al., 2000; Saks & Burke-Smalley, 2014).
Second, this study shows that the BSC effectively supports the training function both
in managing the training processes and in delivering targeted organizational outcomes,
thus bridging the gap between an operational and a strategic approach to training
evaluation (Baraldi & Cifalinò, 2009). On the one hand, the operational approach to
training evaluation, rooted in the hierarchical perspective, provides a wide and
well-known range of methods aimed at improving the quality of TPs (namely, it
focuses on how to 'do training right') (for overviews, see Aragón-Sánchez et al., 2003;
Arthur et al., 2003; Salas et al., 2012; Salas & Cannon-Bowers, 2001; Tannenbaum &
Yukl, 1992). On the other hand, this study shows that the BSC approach focuses on a
limited set of training priorities leading to targeted organizational and behavioural
outcomes, thus providing a strategic frame for training evaluation (namely, it focuses on
how to 'do the right training'). Of course, some of the KPIs included in the functional
training scorecard are selected from among the measures suggested by hierarchical
models of training evaluation. Therefore, it can be confirmed that the joint use of
the hierarchical approach and the BSC approach is mutually beneficial not only to
evaluate a training programme (Baraldi & Cifalinò, 2009), but also to evaluate a training
function.
Third, our findings show that the BSC offers a specific set of strategic (critical few)
measures for evaluating the training function's performance. As a result, this study
contributes to the literature advocating the need to move beyond traditional evaluation
methodologies in order to embed training interventions within organizational systems
(Brinkerhoff, 2006; Glaveli & Karassavidou, 2011; Kirkpatrick & Kirkpatrick, 2006;
Kraiger, 2002; Kraiger et al., 2004; Pangarkar & Kirkwood, 2008). Indeed, based on our
findings, it could be argued that the BSC approach shifts the focus from training
programme evaluation to the evaluation of how effectively the organization
frames and uses training systems and resources and leverages them into improved
performance that in turn drives business results.
Fourth, this study indicates that the BSC approach fosters a sound alignment
between training programme objectives and functional goals. According to our find-
ings, as the functional scorecard can be linked effectively with the programme score-
card, the BSC approach can provide an insightful view of (1) why certain TPs are
selected and how they are managed; and (2) how they actually contribute to the
successful execution of a given training strategy. As such, these findings allow us to
expand our previous explorative research suggesting the use of the BSC approach to
evaluate the performance of a given set of TPs (Baraldi & Cifalinò, 2009).
From a multidisciplinary research perspective, this study has various theoretical
implications. First, it offers an actionable approach to developing multilevel research in
training evaluation, bridging the individual and organizational levels of analysis
(Aguinis & Kraiger, 2009; Chen & Klimoski, 2007; Fulmer & Ployhart, 2014; Kozlowski
et al., 2000; Ployhart & Moliterno, 2011; Saks & Burke-Smalley, 2014; Tharenou et al.,
2007). In particular, if a functional training scorecard allows those involved to focus on
the most critical issues around which a training strategy is actually defined and
executed from different (but interconnected) perspectives of evaluation, including the
organizational and the trainee ones, it could be argued that the BSC approach contrib-
utes to integrating training evaluation at the organizational and individual levels of
analysis.
Second, our findings also contribute to linking the micro- and macro-training
research streams, respectively rooted in the industrial/organizational psychology lit-
erature and in the strategic HRM literature (Saks & Burke-Smalley, 2014). On the one
hand, the design of training scorecards incorporates many issues developed in the
micro-training research (for overviews, see Aragón-Sánchez et al., 2003; Arthur et al.,
2003; Salas et al., 2012; Salas & Cannon-Bowers, 2001; Tannenbaum & Yukl, 1992). On
the other hand, training scorecards can also be viewed as one of the strategic HRM
practices intended to enable an organization to achieve its goals (Nyberg et al., 2014;
Storey, 1995; Wall & Wood, 2005; Wright & McMahan, 1992; Wright et al., 2001).
Third, from a performance measurement and management perspective, our findings
extend the research on the cascading of the BSC throughout the organization. Whereas
previous studies have focused on the overall HR function (Becker et al., 2001; Huselid
& Becker, 2011; Huselid et al., 2005; Walker & MacDonald, 2001), this study specifically
focuses on the training function and points out how a successful delivery of the
training strategy actually contributes to the attainment of organizational goals.
This study may have different implications for practitioners too. Training managers
may replicate the use of the BSC, as we have found that it is a feasible and affordable
approach to training evaluation. Many practical benefits may be derived from the
design and the use of training scorecards. During its design, the functional scorecard
clarifies the mission of the training function and translates it into operational terms. Once
designed, functional scorecards can be used to manage the execution of the training
strategy, communicating the trainee and organizational outcomes and creating stake-
holder consensus. Finally, a good balance between leading and lagging indicators can
help training managers to properly differentiate short- and long-term objectives and
receive prompt feedback on the soundness of the hypotheses their strategy is based on
(e.g. do investments in learning and growth factors actually lead to better performance
in training processes? To what extent do excellent processes really improve trainees'
competencies and behaviour? etc.).
The limitations of this research concern its explorative nature, its focus on a single
sector and its specific geographical setting. We tested the feasibility and usefulness of
using the BSC approach to evaluate the training function's performance in three Italian
private HCOs only. Further research from a more international and cross-sector perspective
is needed. Moreover, research could be devoted to investigating the organizational
factors facilitating and/or inhibiting an effective use of the BSC approach in training

evaluation. This analysis may take advantage of two existing literature streams: first, the
studies that address the issues arising in BSC implementation and use (e.g. Agostino &
Arnaboldi, 2011); second, the literature that focuses on the facilitating factors and/or
obstructing barriers to conducting behaviour-based and results-based evaluations of
training in organizations (e.g. Kennedy et al., 2014). For instance, we would expect that
the use of training scorecards is more effective in organizations that have already
introduced (and cascaded) the BSC and use it properly as a performance management
system (e.g. for allocating resources, setting objectives and rewarding).
Finally, our research has focused on the implementation of the training scorecard,
whereas we have obtained no evidence on its use and effectiveness in the long term.
Further research might benefit from a longitudinal approach in order to gain a better
understanding of the real impact generated by a systematic use of training scorecards,
not only from an organizational perspective, but also from a training manager's one. For
instance, it could be interesting to investigate whether the adoption of the BSC approach
increases the accountability of the training function and/or the integration of the
training manager within the organizational strategic management process. This latter
analysis may take advantage of the existing studies suggesting that critical issues in
strategy implementation are the understanding of and the consensus on organizational
objectives and how to contribute to them (e.g. Boswell, 2006; Bowman & Ambrosini,
1997; Chenhall, 2005; Ho et al., 2014; Walter et al., 2013).

References
Agostino, D. and Arnaboldi, M. (2011), 'How the BSC implementation process shapes its
outcome', International Journal of Productivity and Performance Management, 60, 2, 99–114.
Aguinis, H. (2013), Performance Management, 3rd edn (Upper Saddle River, NJ: Pearson Prentice
Hall).
Aguinis, H. and Kraiger, K. (2009), 'Benefits of training and development for individuals and
teams, organizations, and society', Annual Review of Psychology, 60, 1, 451–74.
Alliger, G. M. and Janak, E. A. (1989), 'Kirkpatrick's levels of training criteria: thirty years later',
Personnel Psychology, 42, 2, 331–42.
Alliger, G. M., Tannenbaum, S. I., Bennett, W., Traver, H. and Shotland, A. (1997), 'A meta-analysis
of the relations among training criteria', Personnel Psychology, 50, 2, 341–58.
Alvarez, K., Salas, E. and Garofano, C. M. (2004), 'An integrated model of training evaluation and
effectiveness', Human Resource Development Review, 3, 4, 385–416.
Aragón-Sánchez, A., Barba-Aragón, I. and Sanz-Valle, R. (2003), 'Effects of training on business
results', International Journal of Human Resource Management, 14, 6, 956–80.
Argyris, C. (1993), Knowledge for Action: A Guide to Overcoming Barriers to Organizational Change (San
Francisco, CA: Jossey-Bass).
Argyris, C. and Schön, D. A. (1989), 'Participatory action research and action science compared',
American Behavioral Scientist, 32, 5, 612–23.
Arthur, W., Bennett, W., Edens, P. S. and Bell, S. T. (2003), 'Effectiveness of training in organiza-
tions: a meta-analysis of design and evaluation features', Journal of Applied Psychology, 88, 2,
234–45.
Baldwin, T. T., Ford, K. J. and Blume, B. D. (2009), 'Transfer of training 1988–2008: an updated
review and agenda for future research', International Review of Industrial and Organizational
Psychology, 24, 41–70.
Baraldi, S. and Cifalinò, A. (2009), 'Training programs and performance measurement: evidence
from healthcare organizations', Journal of Human Resource Costing and Accounting, 13, 4,
294–315.
Barney, J. B. (1991), 'Firm resources and sustained competitive advantage', Journal of Management,
17, 1, 99–120.
Barney, J. B. and Wright, P. M. (1998), 'On becoming a strategic partner: the role of human
resources in gaining competitive advantage', Human Resource Management, 37, 1, 31–46.
Barney, J. B., Wright, P. M. and Ketchen, D. J. Jr. (2001), 'The resource-based view of the firm: ten
years after 1991', Journal of Management, 27, 6, 625–41.
Bate, P. (2000), 'Synthesizing research and practice: using the action research approach in health
care settings', Social Policy and Administration, 34, 4, 478–93.
Bates, R. (2004), 'A critical analysis of evaluation practice: the Kirkpatrick model and the principle
of beneficence', Evaluation and Program Planning, 27, 3, 341–7.

Baxter, J. and Chua, W. F. (2008), 'The field researcher as author-writer', Qualitative Research in
Accounting and Management, 5, 2, 101–21.
Becker, B. E. and Huselid, M. A. (2006), 'Strategic human resources management: where do we go
from here?', Journal of Management, 32, 6, 898–925.
Becker, B. E., Huselid, M. A. and Ulrich, D. (2001), The HR Scorecard (Boston, MA: Harvard
Business School Press).
Bento, A. and White, L. F. (2010), 'An exploratory study of strategic performance measurement
systems', Advances in Management Accounting, 18, 1, 1–26.
Berry, A. J., Coad, A. F., Harris, E. P., Otley, D. T. and Stringer, C. (2009), 'Emerging themes in
management control: a review of recent literature', British Accounting Review, 41, 1, 2–20.
Bisbe, J. and Malagueño, R. (2012), 'Using strategic performance measurement systems for
strategy formulation: does it work in dynamic environments?', Management Accounting
Research, 23, 4, 296–311.
Blume, B. D., Ford, J. K., Baldwin, T. T. and Huang, J. L. (2010), 'Transfer of training: a meta-
analytic review', Journal of Management, 36, 4, 1065–105.
Boswell, W. (2006), 'Aligning employees with the organization's strategic objectives: out of line
of sight, out of mind', International Journal of Human Resource Management, 17, 9, 1489–511.
Bouckaert, G. and Peters, B. G. (2002), 'Performance measurement and management. The Achil-
les' heel in administrative modernization', Public Performance & Management Review, 25, 4,
359–62.
Bowman, C. and Ambrosini, V. (1997), 'Perceptions of strategic priorities, consensus and firm
performance', Journal of Management Studies, 34, 2, 241–58.
Brinkerhoff, R. O. (2006), 'Increasing impact of training investments: an evaluation strategy for
building organizational learning capability', Industrial and Commercial Training, 38, 6, 302–7.
Burke, L. A. and Hutchins, H. M. (2008), 'A study of best practices in training transfer and
proposed model of transfer', Human Resource Development Quarterly, 19, 2, 107–28.
Burney, L. L., Henle, C. A. and Widener, S. K. (2009), 'A path model examining the relations among
strategic performance measurement system characteristics, organizational justice, and extra-
and in-role performance', Accounting, Organizations and Society, 34, 3/4, 305–21.
Cassell, C. and Johnson, P. (2006), 'Action research: explaining the diversity', Human Relations, 59,
6, 783–814.
Chan, Y. L. and Seaman, A. (2009), 'Strategy, structure, performance management, and
organizational outcome: application of balanced scorecard in Canadian health care organiza-
tions', Advances in Management Accounting, 17, 1, 151–80.
Chen, G. and Klimoski, R. J. (2007), 'Training and development of human resources at work: is the
state of our science strong?', Human Resource Management Review, 17, 2, 180–90.
Chenhall, R. H. (2005), 'Integrative strategic performance measurement systems, strategic align-
ment of manufacturing, learning and strategic outcomes: an exploratory study', Accounting,
Organizations and Society, 30, 5, 395–422.
Chiang, B. (2009), 'System integration and the balanced scorecard: an empirical study of system
integration to facilitate the balanced scorecard in the health care organizations', Advances in
Management Accounting, 17, 1, 181–201.
Chua, W. F. (2007), 'Accounting, measuring, reporting and strategizing – re-using verbs: a review
essay', Accounting, Organizations and Society, 32, 4/5, 487–94.
Coghlan, D. (2004), 'Action research in the academy: why and whither? Reflections on the chang-
ing nature of research', The Irish Journal of Management, 25, 2, 1–10.
De Geuser, F., Mooraj, S. and Oyon, D. (2009), 'Does the balanced scorecard add value? Empirical
evidence on its effect on performance', European Accounting Review, 18, 1, 93–122.
Delery, J. E. and Doty, D. H. (1996), 'Modes of theorizing in strategic human resource manage-
ment: tests of universalistic, contingency, and configurational performance predictions',
Academy of Management Journal, 39, 4, 802–35.
Eden, C. and Huxham, C. (1996), 'Action research for management research', British Journal of
Management, 7, 1, 75–86.
Fendt, J. and Kaminska-Labbé, R. (2011), 'Relevance and creativity through design-driven action
research: introducing pragmatic adequacy', European Management Journal, 29, 3, 217–33.
Ferreira, A. and Otley, D. (2009), 'The design and use of performance management systems: an
extended framework for analysis', Management Accounting Research, 20, 4, 263–82.
Franco-Santos, M., Lucianetti, L. and Bourne, M. (2012), 'Contemporary performance measure-
ment systems: a review of their consequences and a framework for research', Management
Accounting Research, 23, 2, 79–119.
French, S. (2009), Action research for practicing managers, Journal of Management Development,
28, 3, 187203.

Delivering training strategies 195

© 2015 John Wiley & Sons Ltd.
French, W. L. and Bell, C. H. (1984), Organization Development: Behavioural Science Interventions for
Organization Improvement (Englewood Cliffs, NJ: Prentice Hall).
Fulmer, I. S. and Ployhart, R. E. (2014), Our most important asset: a multidisciplinary/multilevel review of human capital valuation for research and practice, Journal of Management, 40, 1, 161–92.
Gimbert, X., Bisbe, J. and Mendoza, X. (2010), The role of performance measurement systems in strategy formulation processes, Long Range Planning, 43, 4, 477–97.
Glaveli, N. and Karassavidou, E. (2011), Exploring a possible route through which training affects organizational performance: the case of a Greek bank, International Journal of Human Resource Management, 22, 14, 2892–923.
Grohmann, A. and Kauffeld, S. (2013), Evaluating training programs: development and correlates of the questionnaire for professional training evaluation, International Journal of Training and Development, 17, 2, 135–55.
Grossman, R. and Salas, E. (2011), The transfer of training: what really matters, International Journal of Training and Development, 15, 2, 103–20.
Gummesson, E. (1991), Qualitative Methods in Management Research (London: Sage).
Ho, J. L. Y., Wu, A. and Wu, S. Y. C. (2014), Performance measures, consensus on strategy implementation, and performance: evidence from the operational-level of organizations, Accounting, Organizations and Society, 39, 1, 38–58.
Hoque, Z. (2014), 20 years of studies on the balanced scorecard: trends, accomplishments, gaps and opportunities for future research, British Accounting Review, 46, 1, 33–59.
Huselid, M. A. and Becker, B. E. (2011), Bridging micro and macro domains: workforce differentiation and strategic human resource management, Journal of Management, 37, 2, 421–8.
Huselid, M. A., Becker, B. E. and Beatty, R. W. (2005), The Workforce Scorecard: Managing Human
Capital to Execute Strategy (Boston, MA: Harvard Business School Press).
Inamdar, N., Kaplan, R. S. and Bower, M. (2002), Applying the balanced scorecard in healthcare provider organizations, Journal of Healthcare Management, 47, 3, 179–95.
Ittner, C. D., Larcker, D. F. and Randall, T. (2003), Performance implications of strategic performance measurement in financial services firms, Accounting, Organizations and Society, 28, 7/8, 715–41.
Jiang, K., Lepak, D. P., Hu, J. and Baer, J. C. (2012), How does human resource management influence organizational outcomes? A meta-analytic investigation of mediating mechanisms, Academy of Management Journal, 55, 6, 1264–94.
Jönsson, S. and Lukka, K. (2006), There and Back Again. Doing Interventionist Research in Management Accounting, in C. Chapman, A. Hopwood and M. D. Shields (eds), Handbook of Management Accounting Research (Oxford: Elsevier), pp. 373–98.
Kaplan, R. S. (1998), Innovation action research: creating new management theory and practice, Journal of Management Accounting Research, 10, 1, 89–118.
Kaplan, R. S. and Norton, D. P. (1992), The balanced scorecard: measures that drive performance, Harvard Business Review, 70, 1, 71–9.
Kaplan, R. S. and Norton, D. P. (1996), The Balanced Scorecard (Boston, MA: Harvard Business
School Press).
Kaplan, R. S. and Norton, D. P. (2001), The Strategy-Focused Organization (Boston, MA: Harvard
Business School Press).
Kaplan, R. S. and Norton, D. P. (2004), Strategy Maps (Boston, MA: Harvard Business School Press).
Kennedy, P. E., Chyung, S. Y., Winiecki, D. J. and Brinkerhoff, R. O. (2014), Training professionals' usage and understanding of Kirkpatrick's level 3 and level 4 evaluations, International Journal of Training and Development, 18, 1, 1–21.
Kirkpatrick, D. L. (1959a), Techniques for evaluating training programs, Journal of ASTD, 13, 11, 3–9.
Kirkpatrick, D. L. (1959b), Techniques for evaluating training programs: part 2 – learning, Journal of ASTD, 13, 12, 21–6.
Kirkpatrick, D. L. (1960a), Techniques for evaluating training programs: part 3 – behavior, Journal of ASTD, 14, 1, 13–18.
Kirkpatrick, D. L. (1960b), Techniques for evaluating training programs: part 4 – results, Journal of ASTD, 14, 2, 28–32.
Kirkpatrick, D. L. and Kirkpatrick, J. D. (2005), Transferring Learning to Behavior (San Francisco, CA:
Berrett-Koehler).
Kirkpatrick, D. L. and Kirkpatrick, J. D. (2006), Evaluating Training Programs: The Four Levels (San
Francisco, CA: Berrett-Koehler).
Kolehmainen, K. (2010), Dynamic strategic performance measurement systems: balancing empowerment and alignment, Long Range Planning, 43, 4, 527–54.

Kozlowski, S. W. J., Brown, K. G., Weissbein, D. A., Cannon-Bowers, J. A. and Salas, E. (2000), A Multilevel Approach to Training Effectiveness: Enhancing Horizontal and Vertical Transfer, in K. J. Klein and S. W. J. Kozlowski (eds), Multilevel Theory, Research, and Methods in Organizations (San Francisco, CA: Jossey-Bass), pp. 157–210.
Kraiger, K. (ed.) (2002), Creating, Implementing, and Maintaining Effective Training and Development:
State-of-the-Art Lessons for Practice (San Francisco, CA: Jossey-Bass).
Kraiger, K., McLinden, D. and Casper, W. J. (2004), Collaborative planning for training impact, Human Resource Management, 43, 4, 337–51.
Labro, E. and Tuomela, T. S. (2003), On bringing more action into management accounting research: process considerations based on two constructive case studies, European Accounting Review, 12, 3, 409–42.
Lapsley, I. (2004), Responsibility accounting revived? Market reforms and budgetary control in health care, Management Accounting Research, 5, 3/4, 337–52.
Lewin, K. (1946), Action research and minority problems, Journal of Social Issues, 2, 4, 34–46.
Lewin, K. (1947), Frontiers in group dynamics, Human Relations, 1, 2, 143–53.
McAlearney, A. S. (2010), Executive leadership development in U.S. health systems, Journal of Healthcare Management, 55, 3, 206–22.
McKinnon, J. (1988), Reliability and validity in field research: some strategies and tactics, Accounting, Auditing and Accountability Journal, 1, 1, 34–54.
Micheli, P. and Manzoni, J. F. (2010), Strategic performance measurement: benefits, limitations and paradoxes, Long Range Planning, 43, 4, 465–76.
Mollick, E. (2012), People and process, suits and innovators: the role of individuals in firm performance, Strategic Management Journal, 33, 9, 1001–15.
Northcott, D. and Taulapapa, T. M. (2012), Using the balanced scorecard to manage performance in public sector organizations. Issues and challenges, International Journal of Public Sector Management, 25, 3, 166–91.
Nyberg, A. J., Moliterno, T. P., Hale, D. Jr. and Lepak, D. P. (2014), Resource-based perspectives on unit-level human capital: a review and integration, Journal of Management, 40, 1, 316–46.
Otley, D. (1999), Performance management: a framework for management control systems research, Management Accounting Research, 10, 4, 363–82.
Pangarkar, A. M. and Kirkwood, T. (2008), Strategic alignment: linking your learning strategy to the balanced scorecard, Industrial and Commercial Training, 40, 2, 95–101.
Patel, P. C., Messersmith, J. G. and Lepak, D. P. (2013), Walking the tightrope: an assessment of the relationship between high-performance work systems and organizational ambidexterity, Academy of Management Journal, 56, 5, 1420–42.
Paul, A. K. and Anantharaman, R. N. (2003), Impact of people management practices on organizational performance: analysis of a causal model, International Journal of Human Resource Management, 14, 7, 1246–66.
Ployhart, R. E. and Moliterno, T. P. (2011), Emergence of the human capital resource: a multilevel model, Academy of Management Review, 36, 1, 127–50.
Prowse, P. and Prowse, J. (2010), Whatever happened to human resource management performance?, International Journal of Productivity and Performance Management, 59, 2, 145–62.
Quinn, J. B., Anderson, P. and Finkelstein, S. (1996), Managing professional intellect: making the most of the best, Harvard Business Review, 74, 2, 71–80.
Robinson, D. G. and Robinson, J. C. (1998), Moving from Training to Performance (San Francisco, CA:
Berrett-Koehler).
Saks, A. M. and Burke, L. A. (2012), An investigation into the relationship between training evaluation and the transfer of training, International Journal of Training and Development, 16, 2, 118–27.
Saks, A. M. and Burke-Smalley, L. A. (2014), Is transfer of training related to firm performance?, International Journal of Training and Development, 18, 2, 104–15.
Salas, E. and Cannon-Bowers, J. A. (2001), The science of training: a decade of progress, Annual Review of Psychology, 52, 1, 471–99.
Salas, E., Tannenbaum, S. I., Kraiger, K. and Smith-Jentsch, K. A. (2012), The science of training and development in organizations: what matters in practice, Psychological Science in the Public Interest, 13, 2, 74–101.
Skærbæk, P. and Tryggestad, K. (2010), The role of accounting devices in performing corporate strategy, Accounting, Organizations and Society, 35, 1, 108–24.
Soderberg, S., Kalagnanam, S., Sheehan, N. T. and Vaidyanathan, G. (2010), When is a balanced scorecard a balanced scorecard?, International Journal of Productivity and Performance Management, 60, 7, 688–708.

Storey, J. (1995), Human Resource Management: A Critical Text (London: Routledge).
Susman, G. I. and Evered, R. D. (1978), An assessment of the scientific merits of action research, Administrative Science Quarterly, 23, 4, 582–603.
Tannenbaum, S. I. (2002), A Strategic View of Organizational Training and Learning, in K. Kraiger (ed.), Creating, Implementing, and Maintaining Effective Training and Development: State-of-the-Art Lessons for Practice (San Francisco, CA: Jossey-Bass), pp. 10–52.
Tannenbaum, S. I. and Yukl, G. (1992), Training and development in work organizations, Annual Review of Psychology, 43, 1, 399–441.
Tharenou, P., Saks, A. M. and Moore, C. (2007), A review and critique of research on training and organizational-level outcomes, Human Resource Management Review, 17, 3, 251–73.
Tian, J., Atkinson, N. L., Portnoy, B. and Gold, R. S. (2007), A systematic review of evaluation in formal continuing medical education, Journal of Continuing Education in the Health Professions, 27, 1, 16–27.
Ulrich, D. (1989), Assessing human resource effectiveness: stakeholder, utility, and relationship approaches, Human Resource Planning, 12, 4, 301–16.
Ulrich, D. (1997), Measuring human resources: an overview of practice and a prescription for results, Human Resource Management, 36, 3, 303–20.
Ulrich, D. (2005), The HR Value Proposition (Boston, MA: Harvard Business School Press).
van de Ven, A. and Johnson, P. E. (2006), Knowledge for theory and practice, Academy of Management Review, 31, 4, 802–21.
Walker, G. and MacDonald, J. R. (2001), Designing and implementing an HR scorecard, Human Resource Management, 40, 4, 365–77.
Wall, T. and Wood, S. (2005), The romance of human resource management and business performance, and the case for big science, Human Relations, 58, 4, 429–62.
Walter, J., Kellermanns, F. W., Floyd, S. W., Veiga, J. F. and Matherne, C. (2013), Strategic alignment: a missing link in the relationship between strategic consensus and organizational performance, Strategic Organization, 11, 3, 304–28.
Warr, P. B. and Bunce, D. (1995), Trainee characteristics and the outcomes of open learning, Personnel Psychology, 48, 2, 347–75.
Westbrook, R. (1995), Action research: a new paradigm for research in production and operations management, International Journal of Operations & Production Management, 15, 12, 6–20.
Wiersma, E. (2009), For which purposes do managers use Balanced Scorecards? An empirical study, Management Accounting Research, 20, 4, 239–51.
Wright, P. M. and McMahan, G. C. (1992), Theoretical perspectives for strategic human resource management, Journal of Management, 18, 2, 295–320.
Wright, P. M., McMahan, G. C. and McWilliams, A. (1994), Human resources and sustained competitive advantage: a resource-based perspective, International Journal of Human Resource Management, 5, 2, 301–26.
Wright, P. M., Dunford, B. B. and Snell, S. A. (2001), Human resources and the resource based view of the firm, Journal of Management, 27, 6, 701–21.
Wright, P. M., Gardner, T. M., Moynihan, L. M. and Allen, M. R. (2005), The relationship between HR practices and firm performance: examining causal order, Personnel Psychology, 58, 2, 409–46.
Zelman, W. N., Pink, G. H. and Matthias, C. B. (2003), Use of balanced scorecard in health care, Journal of Health Care Finance, 20, 4, 1–16.
