

Information technology evaluation: issues and challenges


Govindan Marthandan and Chun Meng Tang
Faculty of Management, Multimedia University, Cyberjaya, Malaysia
Abstract
Purpose – To justify an increase in information technology (IT) spending and to understand the utilization of limited organizational resources on IT, business managers have taken great interest in the correlation between IT and business performance. However, business managers face issues and challenges in finding out how and to what extent IT is able to deliver the intended benefits. The purpose of this paper is to examine IT evaluation issues and challenges faced by information systems (IS) researchers, IS specialists, and business managers.
Design/methodology/approach – This paper begins by reviewing the disparate discussions in past literature on IT evaluation issues and challenges. It then provides a synthesis of these discussions by identifying eight issues and challenges in IT evaluation.
Findings – The eight issues and challenges identified are: evaluation scope, evaluation timing, unit of analysis, level of analysis, different perspectives, different dimensions, different measures, and underpinning theoretical frameworks. The paper concludes with some suggestions on ways to improve IT evaluation practices.
Originality/value – This paper posits that before a pragmatic IT evaluation approach can be developed, it is necessary to first understand the issues and challenges faced by IS researchers, IS specialists, and business managers in IT evaluation. Having identified the eight issues and challenges, this paper provides pointers on what needs to be considered when conducting IT evaluation.
Keywords: Communication technologies, Decision making, Value analysis
Paper type: General review


1. Introduction

As information technology (IT) advances dramatically with new features and capabilities, moving from the data processing era to a strategic information systems (IS) era, efficiency measures are no longer sufficient to illustrate the true business value of IT. Over the years, business managers' expectations of IT value have gradually shifted from operational to strategic concerns. How fast a processor runs or how many pages a printer prints might not interest business managers; instead, they are interested in the strategic advantages that IT brings. As highlighted by Seddon et al. (2002), IT evaluation is slowly shifting from a technical or financial perspective to a business-oriented one. Thus, it is many times more difficult to evaluate IT now than in the past, as we are looking at not only tangible but also intangible benefits. Traditionally, business managers justify their IT spending with the help of financial techniques, e.g. payback period, net present value, and internal rate of return, that are common for the evaluation of capital investments. However, the effectiveness of financial techniques in evaluating IT investments is debatable, as IT investments are different from other capital investments, especially when we cannot estimate hidden, intangible, and non-financial benefits (Ballantine and Stray, 1998; Irani and Love, 2000/2001). For example, the return on investment (ROI) calculation can be negative for a threshold IT investment that is critical for the survival of an organization.
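To make concrete why such techniques leave intangible benefits out of the picture, the following are standard textbook definitions of net present value and ROI (given here for illustration, not drawn from the paper itself); only benefits that can be expressed as estimated cash flows enter the calculation.

% Standard discounted cash-flow definitions for an IT investment with initial
% outlay C_0, estimated net cash inflows CF_t over T periods, and discount rate r.
\[
\mathrm{NPV} = -C_0 + \sum_{t=1}^{T} \frac{CF_t}{(1+r)^t},
\qquad
\mathrm{ROI} = \frac{\sum_{t=1}^{T} CF_t - C_0}{C_0}
\]
% Hidden, intangible, or strategic benefits that cannot be expressed as cash
% flows CF_t are effectively valued at zero, so a survival-critical ("threshold")
% investment can show a negative NPV or ROI even though rejecting it is not a
% realistic option for the organization.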
Strategic IT investments cannot be evaluated using tangible criteria alone; there can be other intangible benefits (Weill and Olson, 1989). Without considering intangible benefits, the use of traditional evaluation techniques can lead to wrong investment decisions (Anandarajan and Wen, 1999; Irani, 2002; Irani et al., 1997; Law and Ngai, 2005; Lubbe and Remenyi, 1999). Irani et al. (2002) explain that as organizations are searching for long-term strategic benefits rather than short-term operational benefits, traditional evaluation techniques can be inappropriate. Irani et al. (2006) reason that if the investment objective of an IT project is to reduce operational costs, then traditional evaluation techniques are sufficient. However, for a project that aims to exploit IT for strategic benefits, a newer approach like the balanced scorecard might be a better choice for evaluation purposes. In strategic IT investment evaluation, identifying qualitative, intangible, or hidden value poses a major challenge to business managers. Also, a strategic IT implementation requires complementary organizational and business changes, e.g. business process redesign, which in turn makes it difficult to segregate IT value from the total value (Brynjolfsson and Hitt, 2000). The different objectives of IT investments further complicate IT evaluation. For example, in a study of IT performance in 33 small-to-medium-sized valve manufacturing firms, Weill (1992) categorized IT investments in terms of management objectives, i.e. strategic, informational, and transactional. He reported that transactional IT contributed significantly to better firm performance, but not in the case of informational IT, while strategic IT did not prove its worth over a longer term. This could be because the strategic advantages of strategic IT would be eroded once competitors have emulated it. He concluded that different types of IT investment served different management objectives and thus exhibited different effects on firm performance. Although organizations are keen to determine the relationship between IT investment and organizational performance, the assessment of IT investment returns is difficult and organizations do not have adequate evaluation frameworks for that purpose. Leem and Kim (2004) claim that past IS evaluation studies suffer from several limitations. First, IS evaluation should be conducted in all stages of the IS development life cycle, i.e. from initiation to post-implementation; however, past studies only assessed a particular stage. Second, although past studies have worked hard on performance measures and the IS-performance relationship, they have not been successful in identifying the best evaluation approach. Third, past studies in general did not discuss the business implications of IS evaluation. Berghout and Remenyi (2005), having reviewed some 298 articles presented at the European Conference on IT Evaluation, conclude that future research should try to build a theoretical groundwork of IT evaluation with strong core concepts and better evaluation methodologies. Stressing the importance of IT evaluation, they suggest that the reasons behind lukewarm support among IS practitioners for IS evaluation should be explored further. Although a daunting challenge, there is a strong need to develop underpinning theoretical frameworks for better evaluation of IT investment. The difficulties, problems, issues, and challenges faced by IS researchers, IS specialists, and business managers have been reported in past studies.
However, the discussions are disparate, scattered, and in most cases lack depth. This paper aims to fill this knowledge gap. It posits that before a pragmatic IT evaluation framework can be developed, it is necessary to first understand the issues and challenges faced by IS researchers, IS specialists, and business managers in IT evaluation. To meet this objective, this paper follows the pointers provided by Webster and Watson (2002) on writing a review paper.

Preliminary selection of articles was conducted by retrieving abstracts of relevant articles, as at May 2009, in three major online research databases, i.e. ProQuest, EBSCOhost, and ScienceDirect. As the literature frequently uses the terms information systems and information technology interchangeably, the search was conducted using each of these terms together with each of the following keywords: success, performance, value, benefit, evaluation, payoff, productivity, effectiveness, and efficiency. To retrieve the maximum number of abstracts, no restriction was imposed on publication year. However, only full-text articles were considered, as they provided the detailed information necessary for further analysis. In the first round of review, the abstracts were read carefully to decide which articles were to be retained for the next stage of review. To be retained, an abstract had to indicate a context of IS evaluation. Some articles were cited in more than one of the three online research databases, so duplicate copies were deleted. Abstracts that did not provide enough detail for a deletion decision were retained for the second round of review. In the second round of review, the full text of each remaining article was downloaded and read. It was decided that only articles that had a specific section about IS evaluation difficulties, problems, issues, and challenges were to be retained. Articles that merely mentioned the reasoning behind the research design of an empirical study, e.g. reasons to focus on a particular stakeholder group or the use of quantitative or qualitative measures, were not included. Articles that quoted other articles heavily were also not included; instead, the original articles were referred to. In the sections that follow, first a review of the selected articles is provided, and then the eight IT evaluation issues and challenges are identified. The eight issues and challenges are: evaluation scope, evaluation timing, unit of analysis, level of analysis, different perspectives, different dimensions, different measures, and underpinning theoretical frameworks. Next, each of the eight issues and challenges is discussed. This paper concludes with some suggestions on how to improve IT evaluation practices.

2. Issues and challenges

In ex-ante or ex-post IT evaluation, stakeholder perspectives, evaluation dimensions, and evaluation measures still remain a central issue. As multiple stakeholders are involved in the planning, design, implementation, and use of IT, the different perspectives of stakeholders complicate IT evaluation by introducing a diverse range of dimensions and measures (Agourram, 2009; Chou et al., 2006; Jurison, 1996; Klecun and Cornford, 2005). For example, Bajwa et al. (1998) describe that measuring executive IS success is a difficult task due to its multidimensionality and the different viewpoints of the evaluators. Bernroider (2008) adds that enterprise resource planning (ERP) success is multi-dimensional, and frequently there are multiple stakeholders in evaluating ERP success. Hamilton and Chervany (1981a, b) rationalize how stakeholder differences can influence IS evaluation objectives and measures. Grover et al. (1996) raise a question about the perspective from which IS effectiveness is judged. In assessing IS effectiveness, the different perspectives of individual stakeholders, e.g. users, management, IS personnel, and external stakeholders, can be considered.
Thus, it is important to first define the perspective explicitly, as different stakeholders would have different views about IS effectiveness. Hakkinen and Hilmola (2008) observe that different perspectives and different dimensions have added complexity to IS evaluation, a reason why a one-for-all evaluation framework is not available. Palvia et al. (2001) share the same opinion that post-implementation evaluation in general faces disparity in the areas of stakeholder perspectives and evaluation dimensions. Mirani and Lederer (1998) recognize that there is no single best approach to measure the organizational benefits of IS projects; this is difficult because IS effectiveness involves several aspects. Relating IS effectiveness to organizational effectiveness, where there are similarities between the two, they identify the issues of different perspectives and different dimensions in evaluating IS effectiveness. There are always multiple evaluation dimensions to consider, e.g. business, user, and technology, not only financial (Irani et al., 2006; Love et al., 2004, 2005). Having too many dimensions in turn introduces a large number of diverse measures, which complicates IT evaluation further (Devaraj and Kohli, 2003). There is the question of scale validity and reliability (Devaraj and Kohli, 2000; Jurison, 1996; Mukhopadhyay et al., 1995; Weill, 1992). The use of quantitative and qualitative measures has been a concern (Hamilton and Chervany, 1981a). Qualitative measures have not received enough attention in IT evaluation, and when qualitative measures are used, there is the issue of subjectivity due to the different opinions of the evaluators (Anandarajan and Wen, 1999). Skok et al. (2001) comment that, in measuring IS success, having too many measures tends to obscure meaningful comparison across studies and creates confusion about the meaning of each measure. Gunasekaran et al. (2006) highlight several issues related to measures, i.e. no common definition of organizational performance measures and metrics; no clear distinction among strategic, tactical and operational performance measures and metrics; and the difficulty in identifying and measuring intangible and non-financial measures. As identified by Gable et al. (2008), several weaknesses of past IS success studies are related to stakeholder perspectives, evaluation dimensions, and evaluation measures. First, many studies did not explain their choice of constructs and measures. Second, the validity of the constructs seemed questionable. Third, with just a few selected constructs, the completeness of the evaluation model also seemed questionable. Fourth, past studies used outdated evaluation instruments and measures that did not reflect contemporary technology development; these studies relied heavily on financial measures and failed to consider intangible benefits. Last, although IS involve multiple stakeholders, past studies focused on only one perspective. The scope of evaluation brings another major issue. How the evaluation scope is defined has implications for the evaluation dimensions and measures. The scope can cover a specific system or a total organizational systems portfolio. By defining the scope, relevant measures can then be identified (Grover et al., 1996). However, evaluation scope can be difficult to define when IT is part of a wider organizational change (Klecun and Cornford, 2005). Devaraj and Kohli (2003) suggest that one of the difficulties in establishing the link between IT and organizational performance can be attributed to the summative nature of IT impact, which makes it very difficult to pinpoint exactly the impact of an individual technology. Skok et al. (2001) also highlight the problems of attributing causality and isolating the effects derived from an IS implementation. In addition, inconsistent definitions of IT and disparate types of IT make scope definition a challenge (Palvia et al., 2001; Weill, 1992).
Given the different types of IS and the difficulty in separating IS from work systems, it is not straightforward to define IS success (Agourram, 2009). Evaluation objectives also have implications for evaluation scope. As explained by Hamilton and Chervany (1981a), unclear evaluation objectives and measures contribute to the difficulty in evaluating IS effectiveness. They add that during the systems development cycle, the external environment or organizational learning processes can cause evaluation objectives and measures to change.

Past literature has also provided other reasons why previous studies have produced conflicting findings about IT benefits or have not been able to illustrate affirmatively the relationship between IT investment and firm performance. Evaluation timing is among those highlighted (Bernroider, 2008; Devaraj and Kohli, 2000; Skok et al., 2001). Time lag (Jurison, 1996) and cross-sectional data (Weill, 1992) could have accounted for the differences in past studies. Differences in research design, e.g. snapshot vs longitudinal data, can also make comparison across studies difficult (Devaraj and Kohli, 2003). There is also a need to specify the unit of analysis, e.g. individual, business unit, organization, or nation (Jurison, 1996; Weill, 1992). Different units of analysis have implications for evaluation measures. For example, management is interested in understanding how IS contributes to organizational performance, whereas users are more interested in IS use and user satisfaction (Grover et al., 1996). Besides unit of analysis, level of analysis needs particular attention as well (Devaraj and Kohli, 2000; Hakkinen and Hilmola, 2008). It is essential to identify the level of analysis, e.g. individual, system, strategic business unit, firm, or industry, that the evaluation measures target (Devaraj and Kohli, 2003; Mirani and Lederer, 1998). Last but not least, there is the issue of the availability of underpinning theoretical frameworks and models (Agourram, 2009; Gable et al., 2008; Mukhopadhyay et al., 1995; Palvia et al., 2001; Weill, 1992). Gunasekaran et al. (2006) specifically point out that there are no theoretical frameworks available to measure the impact of IT on organizational performance or to help guide the selection of evaluation tools or techniques. There is also a lack of validated IS evaluation models. As explained by Stockdale et al. (2006) and Stockdale and Standing (2006), an IS evaluation should not just cover the financial and technical aspects, but also the social aspect. To this end, an IS evaluation framework should consider three factors, i.e. context, content, and process. Context refers to the evaluation purposes, stakeholders, and micro and macro environments of the organization. Content refers to the evaluation subject and relevant evaluation criteria. Process refers to the execution of the evaluation, e.g. time frame, tools, and techniques. Mohd. Yusof et al. (2008) share a similar point of view, highlighting that an evaluation framework should involve the aspects of technology, human, and organization. Such a framework allows a comprehensive evaluation to answer the questions of why (evaluation objectives), who (stakeholder perspectives), when (the point in time at which the evaluation is conducted), what (evaluation dimensions), and how (evaluation methods). To conclude, past literature has provided clues to the IT evaluation issues and challenges faced by IS researchers, IS practitioners, and business managers. Table I provides a summary of the IT evaluation issues and challenges. In order of frequency, they are: different perspectives, different measures, different dimensions, evaluation scope, underpinning theoretical frameworks, evaluation timing, level of analysis, and unit of analysis. It is evident that evaluation perspectives, dimensions, and measures still remain a central issue in IT evaluation. Each of the IT evaluation issues and challenges is discussed next.
2.1 Evaluation scope

One limitation of previous IT business value studies is the treatment of IT as an aggregate system where different types of systems are grouped together. When aggregated, the benefits gained from a well-designed system may be offset by a poorly-designed system (Mukhopadhyay et al., 1995). In general, organizations have different definitions of IT, but the definitions tend to cover everything about IT.

Table I. Summary of IT evaluation issues and challenges
Note: 1 evaluation scope; 2 evaluation timing; 3 unit of analysis; 4 level of analysis; 5 different perspectives; 6 different dimensions; 7 different measures; 8 underpinning theoretical frameworks
Studies summarized: Agourram (2009); Anandarajan and Wen (1999); Bajwa et al. (1998); Bernroider (2008); Chou et al. (2006); Devaraj and Kohli (2000); Devaraj and Kohli (2003); Gable et al. (2008); Grover et al. (1996); Gunasekaran et al. (2006); Hakkinen and Hilmola (2008); Hamilton and Chervany (1981a); Hamilton and Chervany (1981b); Irani et al. (2006); Jurison (1996); Klecun and Cornford (2005); Mirani and Lederer (1998); Mohd. Yusof et al. (2008); Mukhopadhyay et al. (1995); Palvia et al. (2001); Skok et al. (2001); Stockdale et al. (2006); Weill (1992)
[The table matrix mapping each study to issues 1-8 is not reproduced here]

A broad definition causes measurement problems (Weill and Olson, 1989). Oz (2005) agrees that in studying the value of IT, as IT covers a broad range of elements, e.g. hardware, software, telecommunications, and people, the scope of IT should be defined specifically; only then can relevant measures be identified to ensure that the right thing is measured. To perform an effective evaluation, it is necessary to first define what to evaluate before deciding how to evaluate. Hamilton and Chervany (1981a) and Ammenwerth et al. (2003) suggest having clear evaluation objectives to help define the evaluation scope. Involving disparate systems in an IS performance study makes comparison across studies difficult (Cronholm and Goldkuhl, 2003). The type of systems under investigation must be specified in order to make a meaningful assessment (Seddon et al., 1999). Some researchers did that. For example, in a survey about IT evaluation approaches among 80 senior IT managers in medium to large European and US organizations, Seddon et al. (2002) segregated three types of IT: overall IT portfolio, individual application project, and IT function. Pitt et al. (1995) suggest that to measure user perception of the service quality of the IS function, the focus can be on the IS department and a specific IS. Ragowsky et al. (2000) argue that evaluation should not be for the entire IS applications portfolio; instead, it should be for a specific IS application for one simple reason: benefits gained from a specific IS application are more precise and detailed than a generic set of benefits. To support their argument, they conducted a study involving a sample of 310 senior managers in Israeli manufacturing organizations. For the entire IS applications portfolio, they found no correlation between organizational characteristics and IS benefits. On the other hand, for specific manufacturing applications, they found a significant positive correlation. Different organizations adopt IT differently in terms of use and diffusion, which makes comparison of IT value across organizations difficult. When IT brings strategic benefits, it adds further difficulties to the measurement issue (Brady et al., 1999). As different IT investments meet different organizational objectives, it is necessary to evaluate an IT investment in terms of those objectives (Weill and Olson, 1989). For example, applying the Strategic Grid model, where different IS were classified into four groups (strategic, turnaround, factory, and support), Premkumar and King (1992) concluded that IS in the strategic and factory groups contributed to better organizational performance than those in the support and turnaround groups. Also, an IT implementation sometimes crosses over different functional areas, and this makes it very difficult to account for the benefits and costs of a system for a specific functional area (Ballantine and Stray, 1998). As IT becomes an indispensable part of organizational work systems, it can be even more difficult to segregate value derived from IT from value derived from work systems (Agourram, 2009; Devaraj and Kohli, 2003; Skok et al., 2001). Organizational complements accompanying an IT implementation can bring benefits that are inseparable from the technology (Klecun and Cornford, 2005).

2.2 Evaluation timing

An IT evaluation can take an ex-ante or ex-post approach. Mohd. Yusof et al. (2008) propose that an evaluation can also be conducted in individual phases of the system development life cycle. In most cases, the timing of an evaluation can produce different results. There might be a performance lag between the introduction of an IS and the time the system begins performing (Mukhopadhyay et al., 1995). A new IT implementation does not necessarily bring immediate benefits; there is a time lag before the benefits are realized. In addition, the measurement of benefits can be complicated by other complementary organizational improvements. To identify the presence of time lag, Devaraj and Kohli (2000) examined three-year data collected from eight hospitals that had implemented decision support systems (DSS). The longitudinal study provided clear evidence that DSS did help improve organizational profitability about three months after implementation. At a US high-tech manufacturer, McAfee (2002) studied the performance of the customer order fulfillment function of an enterprise IS. He observed that performance in fact dropped right after the adoption, but improved after several months of use. As explained by Lee and Kim (2006) and Stockdale et al. (2006), a learning curve can set in and delay the realization of positive results. Bernroider (2008) observes that unintended benefits might emerge when IT is in use or becomes mature.

2.3 Unit of analysis

It has been suggested that an inappropriate unit of analysis in IT evaluation studies can be an issue (Grover et al., 1996; Jurison, 1996; Weill, 1992). The unit of analysis can be determined largely by the evaluation purpose and context (Grover et al., 1996). Most IT productivity studies treat nations as the unit of analysis when examining the overall contribution of IT.
At such a high aggregate level, details about benefit differences of individual companies, firm and industry characteristics, and IT success and failure factors would be lost (Hitt and Brynjolfsson, 1996). Brynjolfsson and Yang (1996) posit that national aggregate data do not help reveal details about IT value; instead, industry-level studies help provide more details. Considering this, they recommend that future research should not just study the average effect of IT, but also firm strategy, best practices, and characteristics to understand how IT can be exploited for greater IT value.

2.4 Level of analysis

Potential value of IT can arise at the system, user, business process, business unit, organizational, and national levels. For example, Melville et al. (2004) report that IT contributes to both business process performance and organizational performance. Business process performance refers to operational efficiency enhancement associated with specific business processes, while organizational performance refers to the total impact of IT on organizations. Shayo et al. (1999) suggest that, apart from the end user, we should also look at different levels of analysis, e.g. group, departmental, and organization, for a comprehensive, integrated IT evaluation model. Kim et al. (1999) mention that system value can be examined at four different levels: system, user, organizational, and strategic. Each level requires different evaluation measures. Noting that past studies have examined IT impact at different levels of analysis, Tangpong (2008) reckons that IT impact should be examined at the organizational level because organizations are expecting to see a return on their IT investment. Kohli and Grover (2008) agree that IT value should be examined at the organizational or inter-organizational level. However, Weill and Olson (1989) advise that IT investment might not be ideally assessed at the organizational level; instead, the linkage between IT investment and organizational performance could be best demonstrated at the strategic business unit level. They propose that measures representing different aspects of performance can be carefully selected and matched with the business objectives of each type of IT investment. Cronk and Fitzgerald (1999) stress that to explain a complex construct like IS business value, three levels of IS business value can be considered: system-dependent, user-dependent, and business-dependent. The system-dependent level refers to the value added by system characteristics, e.g. accuracy, response time, downtime, semantic quality, and timeliness. The user-dependent level refers to the value added by user characteristics, e.g. user skills and attitudes. The business-dependent level refers to the value added by IS-business alignment, e.g. business goals realization. Davern and Kauffman (2000) support the notion that potential value exists at different levels of analysis, i.e. economy, firm, business process and individual, covering different perspectives and stakeholders. By first identifying the potential value of an IT investment, the extent of value realization can then be measured, and recommendations can be made on how to realize it. As there are multiple levels of analysis, conflicts among stakeholders could arise. Performing an analysis across different levels of analysis would provide a comprehensive picture and thus minimize conflicts. With this approach, the potential value would be in the best interest of the organization, instead of individual stakeholders. Some researchers propose that IT benefits are realized in stages at different organizational levels.
For example, in a study to examine how IT infrastructure and organizational contextual factors affect the success of IS implementation, Grover and Segars (1996) operationalize IS success with firm and market-level measures. Barua et al. (1995) suggest that the key to answering the IT value question lies with the measurement of value. Searching for an answer, they adopted a process-oriented methodology to examine the impact of IT on small business units (SBUs) at both intermediate and higher levels. A survey instrument was designed to include 130 variables and collect data from 60 strategic business units of 20 large US and Western European corporations. Analysis was performed for three specific functions: production; marketing, sales and distribution; and innovation. They concluded that IT impacts seen at the lower levels of an organization would escalate into the higher levels. For example, IT had effects on operational-level variables, e.g. inventory turnover, which in turn affected higher-level variables, e.g. market share.

2.5 Different perspectives

The attempt to measure IS business value is a complicated issue. One of the problems points to the definition of value, which differs from one person to another as there are different individual opinions, views, and backgrounds. Individual perspectives would subsequently affect the choice of relevant measures. Thus, in assessing IS business value, it is necessary to first define the perspective. An evaluation could involve such stakeholders as initiators, evaluators, users, and interested parties (Serafeimidis and Smithson, 1996). Measurement of value very often involves different organizational groups, e.g. users and managers, and each group has its own perception of value (Davern and Kauffman, 2000). A common definition of IT value and how it can be measured must be reached among the stakeholders before an evaluation can be successful (Bannister and Remenyi, 2003). Lyytinen and Hirschheim (1987) reckon that different stakeholders have different value sets which, although sometimes difficult to define, must be met: failing to meet the expectations of a particular stakeholder group is a common cause of IS failure. They identify four types of IS failure: correspondence, process, interaction, and expectation failures. Among the four, expectation failure is believed to have contributed the most to IS failure, given its multidimensional and multi-stakeholder nature. In IS effectiveness evaluation, Hamilton and Chervany (1981b) highlight that the differences in perspectives among different functional groups can lead to conflicts. Subjective perception and different group experiences can influence the results of IS evaluation. To conduct an effective assessment of system effectiveness, there is a need to consider multiple viewpoints of evaluation objectives and measures. McAulay et al. (2002) note that, using a dissimilar set of criteria, evaluation outcomes can be totally different from systems developers to systems users. Different stakeholders have different perspectives about benefits and risks, and at the same time, they share differences and similarities. Recognizing that different stakeholders and systems settings will call for different measures in assessing IS success, it is common to find research studies involving multiple stakeholders. Remenyi and Sherwood-Smith (1999) describe an evaluation approach which involves the participation of key stakeholders, i.e. users, management, and IT specialists, in the evaluation and decision-making process. The stakeholders are not only involved in the initial decision-making stage but also in the design stage to discuss how the system is to be designed. Seddon et al. (1999), suggesting that evaluation of IS can be performed from three different perspectives (users, developers, and management), propose a two-dimensional, 30-category IS effectiveness matrix which includes different combinations of stakeholders and systems.
Fearon and Philip (1999), in a study of the Irish supermarket retail sector, examined the performance of EDI from the perspectives of retailers, distributors and suppliers. Kanungo et al. (1999), in a study to examine factors leading to IS effectiveness, interviewed over 120 respondents at different organizational levels, i.e. CEOs, IS managers, and users. In a study of the IS quality of expert systems, Palvia et al. (2001) surveyed three stakeholder groups, i.e. users, managers, and systems developers. Knowing that it is difficult to have all perspectives considered, Cronk and Fitzgerald (1999) suggest starting with broad generic business value measures, allowing the IS context in question to decide on the final measures later.

2.6 Different dimensions

Traditional evaluation approaches do not take into account the multi-dimensional aspect of performance and focus mainly on financial measures. Although financial measures address the concerns of shareholders, they fail to consider the internal and external stakeholders (Brignall and Ballantine, 1996). Scott (1995) argues that a causal model would better reflect the multi-dimensional nature of IS effectiveness, and suggests using operational performance measures rather than financial measures. This notion of multi-dimensionality is also supported by DeLone and McLean (1992), who point out in their classic article on IS success that a comprehensive evaluation of IS success should include measures from different dimensions; one or two variables alone are not enough. A multi-dimensional approach to measuring IS success, which takes into account the interdependencies among measures, is better than a unidimensional approach (Rai et al., 2002). IS success should be evaluated as a multidimensional construct, represented by measures from the usage, satisfaction and decision performance dimensions (Arnold, 1995). Treating system use as a multidimensional construct, Doll and Torkzadeh (1998) propose five dimensions of system use, i.e. problem solving, decision rationalization, horizontal integration, vertical integration, and customer service. In assessing organizational benefits of IS, Mirani and Lederer (1998) classify benefits into three categories in terms of the organizational objectives each IS serves: strategic, informational, and transactional. With the 33 benefits identified from past studies, they conducted an empirical study involving 178 projects to further classify the benefits into three subdimensions within each dimension. Strategic subdimensions identified were competitive advantage, alignment, and customer relations. Informational subdimensions were information access, information quality, and information flexibility. Transactional subdimensions were communications efficiency, systems development efficiency, and business efficiency. Marsh and Flanagan (2000) propose a similar framework to consider IT benefits along three dimensions, i.e. automational, informational, and transformational. Automational refers to efficiency benefits that can be quantified easily. Informational refers to business effectiveness and can be quantified in financial terms. Transformational refers to the effects on business performance. Irani et al. (2002) propose an approach to evaluate new IT projects along four dimensions: strategic, tactical, operational, and financial. Other dimensions have been identified in past literature. Eskow (1990) proposes that measures of IT value can be considered from three different aspects: performance in terms of project goal fulfillment, quality in terms of customer satisfaction, and productivity in terms of systems efficiency. In a study of a system in Federal Express Corporation, Palvia et al. (1992) observed two types of system impacts: in addition to long-term strategic benefits, the system also delivered specific organizational benefits in four areas, i.e. personnel division, management, employees, and extra-organizational relationships.
Sohal et al. (2001), in a study to compare performance measurement practices between Australian manufacturing and service industries, grouped 16 measures into five categories: scarcity of stock, productivity, costs, value of products or services, and quality principles. In a study of IS effectiveness in an extended supply chain, Edwards et al. (2001) used three groups of measures: operational support, infrastructure support, and decision-making support. As cost-benefit analysis does not consider intangible dimensions, it is inappropriate for assessing strategic IT investments. This has prompted business managers to search for better evaluation techniques. If we were able to quantify intangible value, then we would be able to justify a strategic IT investment rather easily (Beamon, 1999). To evaluate IT infrastructure investment, e.g. connectivity and data storage, financial techniques might not give the full picture, as intangible benefits such as flexibility could be ignored. Rather, consideration can be given to three dimensions: usage, financial assets, and IT infrastructure flexibility (Kumar, 2004). Willcocks and Lester (1996) propose that evaluation measures be linked to business benefits, covering multiple aspects such as financial, service, learning, and technical, from the strategic to operational levels. These measures can then be used to evaluate the performance of IT in different phases of the systems development life cycle. Pitt et al. (1995) comment that the product aspect of the IS function has been receiving great attention from IS researchers, but the service aspect is largely ignored. In measuring IS effectiveness, the IS service quality dimension is equally important: the IS function does not just deliver products but also provides services. They propose that DeLone and McLean's IS success model be expanded to include the IS service quality dimension. In view of the issues encountered in post-implementation IS evaluation, Palvia et al. (2001) support the notion that IS quality would be a good surrogate measure for IS success. Adopting a socio-technical approach and drawing evaluation measures from the technological, human, and organizational aspects, they introduced a quality assessment instrument consisting of 39 variables to measure the IS quality of expert systems in 22 US insurance companies.

2.7 Different measures

Researchers have tried different evaluation measures, e.g. user satisfaction, decision quality, profit performance, and stock price, to justify why organizations should make investments in IT (Farbey et al., 1992). Kraemer et al. (1993), in a study on the perceived usefulness of computer-based information involving 260 public managers of 46 US city governments, concluded that perceived usefulness continued to serve as a good surrogate measure of IS success: when users find the system satisfactory, they tend to perceive the system as successful. Recognizing that different companies may have preferences over performance measures, Premkumar and King (1992) asked respondents to first choose five performance measures they considered important to the organization and then indicate the extent of contribution of IS to each of the five performance measures. They then calculated the weighted average of the performance measures to derive a single figure denoting the performance of IS. The top five performance measures reported were ROI, sales revenue, market share, operating efficiency, and customer satisfaction.
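As a rough illustration of this kind of weighted-average scoring, the figures below are invented for illustration only and are not taken from Premkumar and King's study: a respondent's five chosen measures are weighted by importance, and the ratings of IS contribution are combined into a single figure.

% Hypothetical example: importance weights w_i (summing to 1) for the five
% chosen measures, and 1-5 ratings s_i of IS contribution to each measure.
\[
P_{\mathrm{IS}} = \sum_{i=1}^{5} w_i s_i
= (0.30)(4) + (0.25)(3) + (0.20)(5) + (0.15)(2) + (0.10)(4) = 3.65
\]
% The single figure P_IS summarizes the perceived contribution of IS across the
% measures that the respondent considers most important (e.g. ROI, sales
% revenue, market share, operating efficiency, customer satisfaction).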
Commenting that past literature has not satisfactorily linked IT investment to organizational strategic and economic performance, Mahmood and Mann (1993) attempted to identify such a correlation. They claimed that their study, which included the one hundred firms in the 1989 Computerworld Premier 100 list, was the first to provide such evidence. They reported that a single measure was not good enough to represent the strategic and economic performance of an organization; instead, multiple measures were needed. Weill (1992) also emphasizes that a single measure of IS success
has limited meaning. He suggests that performance measures be grouped in terms of management objectives. As a result of different types of IT investment, e.g. cost center, service center and investment center, measures of multiple dimensions are needed. A simple productivity measure, e.g. efficiency, is not adequate to wholly reflect the business contribution of IT. A comprehensive evaluation should examine the purpose and strategy of IT management as well as take into consideration such factors as external environment, strategy, structure, and culture. External and internal organizational factors play another key role in influencing the type of IT investment, and each type of IT investment calls for a different set of performance measures. As we move from cost center to investment center, the nature of the measures also shifts from quantitative to qualitative aspects (Sugumaran and Arogyaswamy, 2003/2004). A review of IT value articles published in four leading management information systems journals from 1993 to 1998 revealed that organizational factors, e.g. goals, strategies, culture, and structure, influenced the selection of qualitative or quantitative measures (Chan, 2000). Stivers and Joyce (2000) suggest that evaluation measures should reflect the organizational strategy. Fitzgerald (1998) claims that it is inappropriate to use financial measures to evaluate an IT project which aims to improve organizational effectiveness. To evaluate the effectiveness of such a project, he proposes a two-stage benefit realization framework: a benefit eventually leads to a positive behavior at the next stage. The challenge faced by IS researchers and business managers is to fully capture the benefits in both stages, as some benefits might see 100 percent realization. However, as highlighted by Devaraj and Kohli (2000), the involvement of a wide variety of measures in IT benefits studies, although it offers breadth and depth, makes comparison across studies difficult. A post-implementation review is often performed with a large number of evaluation measures that are not specific to the type of IS under evaluation. Klein et al. (1997) conducted a study to investigate if measures and their importance varied according to the type of IS. Although past studies have tried different types of measures, the suitability and appropriateness of the measures for different types of IS have not been looked into. A study into the underlying structure of the measures helps isolate common measures that are appropriate for each type, and this understanding can then be helpful in developing a standard IS evaluation instrument. They found that there were differences among IS in terms of measures. The importance of each measure varied according to the types and objectives of IS, e.g. decision and organizational impact measures were of high importance for the higher-level support functions of DSS and information reporting systems. Universal measures did exist; for example, user impact measures were found to be of about equal importance for all types of IS examined. They suggested that it was possible to develop a standard scale to represent each category of the universal measures. To better understand the construct of IS effectiveness and its measures, Grover et al. (1996), after conducting a review of past literature and taking into consideration three evaluation approaches, i.e.
evaluation referent, unit of analysis and evaluation type, propose a framework to classify IS effectiveness measures into six different types: classes I, II, III, IV, V, and VI. Class I refers to infusion measures that depict information quality in terms of completeness, efficiency, and accuracy. Class II refers to market measures that depict market reaction to the introduction of IS. Class III refers to economic, quantitative measures that depict the financial and productivity effects of IS use. Class IV refers to usage measures that depict the use of IS in terms of ease-of-use and motivational aspects of use. Class V refers to perceptual measures that depict user attitudes, beliefs, and perceptions about IS. Class VI refers to productivity measures that depict the impact of IS on organizational performance in terms of managerial performance and productivity. After reviewing 212 articles published in eleven IS, technical, and organizational journals, Larsen (2003) proposes a taxonomy which includes twelve categories of organizational IS success factors. In the review, IS success was the dependent variable and success factors were the independent variables. It was concluded that not much progress had been made in the understanding of the dependent and independent variables; in particular, there were no measures of IS success at the organizational level. He advises that in choosing measures of IS success, researchers should use system- and organization-specific measures. The real value of IT becomes apparent when IT helps organizations to be more effective and efficient.

2.8 Underpinning theoretical frameworks

Bannister and Remenyi (2000) claim that IT evaluation practices have not advanced much. In making IT investment decisions, business managers very frequently still follow their instincts or conduct some simple analysis. They give two reasons. First, there is a gap between what is theorized and what is practiced. Second, the proposed theories are not complete and hence not useful to business managers. Hallikainen et al. (2002) share several limitations of current evaluation approaches for effective IS evaluation: limited use or absence of methods, unrealistic assumptions, impracticality, and lack of underlying theoretical frameworks. They are of the opinion that a good performance measurement approach should be supported by an underlying theoretical framework. Other researchers have pointed out the same. For example, Mahmood and Mann (1993) agree that past studies largely ignore the use of a conceptual framework in evaluating IT investment and organizational performance. Mukhopadhyay and Cooper (1993) stress the need for a theoretical underpinning to help assess the business value of IS. Cronk and Fitzgerald (1999) mention that a major weakness of measurement instruments is the lack of construct validity. Cooper and Quinn (1993) also suggest that a theory is needed in evaluating IS effectiveness. To help develop such a theory, they propose the use of the Competing Values Framework, which describes the information processing capability of IS. They reckon that effective IS can help organizations achieve organizational effectiveness. Renkema and Berghout (1997) observe that evaluation techniques need further validation. They report several observations. First, non-financial evaluation techniques are not supported by theory; even where there is an attempt to do so, only case studies are used, and the evaluation measures involved in these studies are rather superficial. Second, there is a heavy focus on the measures but not on the evaluation process. Third, when non-financial measures are used, they are not easy to apply. Arnold (1995) highlights that the use of surrogate measures in evaluating IS success might not reflect the real situation; there is the question of construct validity. Stockdale and Standing (2006) suggest that the use of validated measures helps add to the body of knowledge. New measures are also necessary to reflect the increasingly intangible nature of strategic systems.
3. Conclusion

Having identified the eight issues and challenges, this paper concludes with several suggestions for further improvement in IT evaluation practices. It is clear from the review that stakeholder perspectives, evaluation dimensions and evaluation measures
occupy the top three positions in the issue and challenge list. However, IS researchers, IS specialists, and business managers should recognize that, as evaluation scope has overall implications for the other seven issues, it is essential to first define the evaluation scope. To define the scope, several questions need to be answered: What are the evaluation objectives? What type of IT is being evaluated, e.g. a single IT application, a type of IT, or all IT applications (Seddon et al., 1999)? What are the intents and uses of IT, e.g. operational, tactical, or strategic? What is the depth and breadth of evaluation? What are the evaluation constraints, e.g. time, cost, and experience? A clearly-defined evaluation scope helps a great deal in providing pointers for the issues of stakeholder perspectives, evaluation dimensions and evaluation measures. Start first with the stakeholder perspectives. Again, several questions need to be answered: How many perspectives, and why, e.g. single or multiple? Who are the stakeholders? Is there an order of priority? Next, questions about the evaluation dimensions: How many dimensions? What are the dimensions? Do these dimensions belong to similar or diverse groups? Are these dimensions associated with the stakeholders identified earlier? Then, questions about the evaluation measures: How many measures within each of the individual dimensions identified earlier? What are the measures? What type of measures, e.g. quantitative, qualitative, operational, tactical, or strategic? With the top four issues considered, IS researchers, IS specialists, and business managers can then move on to deliberate on evaluation timing, level of analysis, unit of analysis, and underpinning theoretical frameworks. A well-defined evaluation scope can be useful as it presents a context to ease decision-making. An IT evaluation can be ex-ante, ex-post, or at any stage of the system development life cycle, so long as the evaluation fulfils the agreed evaluation objectives. Equally important, a correct decision has to be made on the level of analysis and unit of analysis for an effective IT evaluation. Lastly, an underpinning theoretical framework or model should be adopted whenever possible.

The authors would like to thank the anonymous reviewers and the Editor for the valuable comments which have helped us to improve the quality of the paper.
References

Agourram, H. (2009), Defining information system success in Germany, International Journal of Information Management, Vol. 29 No. 2, pp. 129-37.
Ammenwerth, E., Graber, S., Herrmann, G., Burkle, T. and Konig, J. (2003), Evaluation of health information systems problems and challenges, International Journal of Medical Informatics, Vol. 71 Nos 2/3, pp. 125-35.
Anandarajan, A. and Wen, H. (1999), Evaluation of information technology investments, Management Decision, Vol. 37 No. 4, pp. 329-37.
Arnold, V. (1995), Discussion of an experimental evaluation of measurements of information system effectiveness, Journal of Information Systems, Vol. 9 No. 2, pp. 85-91.
Bajwa, D.S., Rai, A. and Brennan, I. (1998), Key antecedents of executive information system success: a path analytic approach, Decision Support Systems, Vol. 22 No. 1, pp. 31-43.
Ballantine, J. and Stray, S. (1998), Financial appraisal and the IS/IT investment decision making process, Journal of Information Technology, Vol. 13 No. 1, pp. 3-14.
Bannister, F. and Remenyi, D. (2000), Acts of faith: instinct, value and IT investment decisions, Journal of Information Technology, Vol. 15 No. 3, pp. 231-41.
Bannister, F. and Remenyi, D. (2003), The societal value of ICT: first steps towards an evaluation framework, Electronic Journal of Information Systems Evaluation, Vol. 6 No. 2, pp. 197-206.
Barua, A., Kriebel, C.H. and Mukhopadhyay, T. (1995), Information technologies and business value: an analytic and empirical investigation, Information Systems Research, Vol. 6 No. 1, pp. 3-23.

Beamon, B.M. (1999), Measuring supply chain performance, International Journal of Operations & Production Management, Vol. 19 No. 3, pp. 275-92.
Berghout, E. and Remenyi, D. (2005), The eleven years of the European conference on IT evaluation: retrospectives and perspectives for possible future research, Electronic Journal of Information Systems Evaluation, Vol. 8 No. 2, pp. 81-98.
Bernroider, E.W.N. (2008), IT governance for enterprise resource planning supported by the DeLone-McLean model of information systems success, Information & Management, Vol. 45 No. 5, pp. 257-69.
Brady, M., Saren, M. and Tzokas, N. (1999), The impact of IT on marketing: an evaluation, Management Decision, Vol. 37 No. 10, pp. 758-67.
Brignall, S. and Ballantine, J. (1996), Performance measurement in service businesses revisited, International Journal of Service Industry Management, Vol. 7 No. 1, pp. 6-31.
Brynjolfsson, E. and Hitt, L.M. (2000), Beyond computation: information technology, organizational transformation and business performance, The Journal of Economic Perspectives, Vol. 14 No. 4, pp. 23-48.
Brynjolfsson, E. and Yang, S. (1996), Information technology and productivity: a review of the literature, Advances in Computers, Vol. 43 No. 1, pp. 179-214.
Chan, Y. (2000), IT value: the great divide between qualitative and quantitative and individual and organizational measures, Journal of Management Information Systems, Vol. 16 No. 4, pp. 225-61.
Chou, T.Y., Chou, S.T. and Tzeng, G.H. (2006), Evaluating IT/IS investments: a fuzzy multicriteria decision model approach, European Journal of Operational Research, Vol. 173 No. 3, pp. 1026-46.
Cooper, R.B. and Quinn, R.E. (1993), Implications of the competing values framework for management information systems, Human Resource Management, Vol. 32 No. 1, pp. 175-201.
Cronholm, S. and Goldkuhl, G. (2003), Strategies for information systems evaluation: six generic types, Electronic Journal of Information Systems Evaluation, Vol. 6 No. 2, pp. 65-74.
Cronk, M.C. and Fitzgerald, E.P. (1999), Understanding IS business value: derivation of dimensions, Logistics Information Management, Vol. 12 Nos 1/2, pp. 40-9.
Davern, M.J. and Kauffman, R.J. (2000), Discovering potential and realizing value from information technology investments, Journal of Management Information Systems, Vol. 16 No. 4, pp. 121-43.
DeLone, W. and McLean, E. (1992), Information systems success: the quest for the dependent variable, Information Systems Research, Vol. 3 No. 1, pp. 60-95.
Devaraj, S. and Kohli, R. (2000), Information technology payoff in the health-care industry: a longitudinal study, Journal of Management Information Systems, Vol. 16 No. 4, pp. 41-67.
Devaraj, S. and Kohli, R. (2003), Performance impacts of information technology: is actual usage the missing link?, Management Science, Vol. 49 No. 3, pp. 273-89.
Doll, W.J. and Torkzadeh, G. (1998), Developing a multidimensional measure of system-use in an organizational context, Information & Management, Vol. 33 No. 4, pp. 171-85.
Edwards, P., Peters, M. and Sharman, G. (2001), The effectiveness of information systems in supporting the extended supply chain, Journal of Business Logistics, Vol. 22 No. 1, pp. 1-28.
Eskow, D. (1990), Measuring performance is key to finding value of IS, PC Week, Vol. 7 No. 22, p. 160.
Farbey, B., Land, F. and Targett, D. (1992), Evaluating investments in IT, Journal of Information Technology, Vol. 7 No. 2, pp. 109-22.
Fearon, C. and Philip, G. (1999), An empirical study of the use of EDI in supermarket chains using a new conceptual framework, Journal of Information Technology, Vol. 14 No. 1, pp. 3-21.

Fitzgerald, G. (1998), Evaluating information systems projects: a multidimensional approach, Journal of Information Technology, Vol. 13 No. 1, pp. 15-27.
Gable, G.G., Sedera, D. and Chan, T. (2008), Re-conceptualizing information system success: the IS-impact measurement model, Journal of the Association for Information Systems, Vol. 9 No. 7, pp. 377-408.
Grover, V. and Segars, A.H. (1996), The relationship between organizational characteristics and information system structure: an international survey, International Journal of Information Management, Vol. 16 No. 1, pp. 9-25.
Grover, V., Jeong, S.R. and Segars, A.H. (1996), Information systems effectiveness: the construct space and patterns of application, Information & Management, Vol. 31 No. 4, pp. 177-91.
Gunasekaran, A., Ngai, E.W.T. and McGaughey, R.E. (2006), Information technology and systems justification: a review for research and applications, European Journal of Operational Research, Vol. 173 No. 3, pp. 957-83.
Hakkinen, L. and Hilmola, O. (2008), ERP evaluation during the shakedown phase: lessons from an after-sales division, Information Systems Journal, Vol. 18 No. 1, pp. 73-100.
Hallikainen, P., Kivijarvi, H. and Nurmimaki, K. (2002), Evaluating strategic IT investments: an assessment of investment alternatives for a web content management system, Proceedings of the 35th Annual Hawaii International Conference on System Sciences (HICSS-02), Big Island, HI, pp. 2977-86.
Hamilton, S. and Chervany, N. (1981a), Evaluating information system effectiveness part I: comparing evaluation approaches, MIS Quarterly, Vol. 5 No. 3, pp. 55-69.
Hamilton, S. and Chervany, N. (1981b), Evaluating information system effectiveness part II: comparing evaluator viewpoints, MIS Quarterly, Vol. 5 No. 4, pp. 79-86.
Hitt, L.M. and Brynjolfsson, E. (1996), Productivity, business profitability, and consumer surplus: three different measures of information technology value, MIS Quarterly, Vol. 20 No. 2, pp. 121-42.
Irani, Z. (2002), Information systems evaluation: navigating through the problem domain, Information & Management, Vol. 40 No. 1, pp. 11-24.
Irani, Z. and Love, P.E.D. (2000/2001), The propagation of technology management taxonomies for evaluating investments in information systems, Journal of Management Information Systems, Vol. 17 No. 3, pp. 161-77.
Irani, Z., Ezingeard, J.N. and Grieve, R.J. (1997), Integrating the costs of a manufacturing IT/IS infrastructure into the investment decision-making process, Technovation, Vol. 17 Nos 11/12, pp. 695-706.
Irani, Z., Ghoneim, A. and Love, P.E.D. (2006), Evaluating cost taxonomies for information systems management, European Journal of Operational Research, Vol. 173 No. 3, pp. 1103-22.
Irani, Z., Sharif, A., Love, P.E.D. and Kahraman, C. (2002), Applying concepts of fuzzy cognitive mapping to model: the IT/IS investment evaluation process, International Journal of Production Economics, Vol. 75 Nos 1/2, pp. 199-211.
Jurison, J. (1996), The temporal nature of IS benefits: a longitudinal study, Information & Management, Vol. 30 No. 2, pp. 75-9.
Kanungo, S., Duda, S. and Srinivas, Y. (1999), A structured model for evaluating information systems effectiveness, Systems Research and Behavioral Science, Vol. 16 No. 6, pp. 495-518.
Kim, C.S., Peterson, D. and Meinert, D. (1999), Students' perceptions on information systems success, The Journal of Computer Information Systems, Vol. 39 No. 3, pp. 68-72.
Klecun, E. and Cornford, T. (2005), A critical approach to evaluation, European Journal of Information Systems, Vol. 14 No. 3, pp. 229-43.

Klein, G., Jiang, J.J. and Balloun, J. (1997), Information system evaluation by system typology, Journal of Systems and Software, Vol. 37 No. 3, pp. 181-6.
Kohli, R. and Grover, V. (2008), Business value of IT: an essay on expanding research directions to keep up with the times, Journal of the Association for Information Systems, Vol. 9 No. 1, pp. 23-39.
Kraemer, K.L., Danziger, J.N., Dunkle, D.E. and King, J.L. (1993), The usefulness of computer-based information to public managers, MIS Quarterly, Vol. 17 No. 2, pp. 129-48.
Kumar, R.L. (2004), A framework for assessing the business value of information technology infrastructures, Journal of Management Information Systems, Vol. 21 No. 2, pp. 11-32.
Larsen, K.R.T. (2003), A taxonomy of antecedents of information systems success: variable analysis studies, Journal of Management Information Systems, Vol. 20 No. 2, pp. 169-246.
Law, C.C.H. and Ngai, E.W.T. (2005), IT business value research: a critical review and research agenda, International Journal of Enterprise Information Systems, Vol. 1 No. 3, pp. 35-55.
Lee, S. and Kim, S. (2006), A lag effect of IT investment on firm performance, Information Resources Management Journal, Vol. 19 No. 1, pp. 43-69.
Leem, C.S. and Kim, I. (2004), An integrated evaluation system based on the continuous improvement model of IS performance, Industrial Management & Data Systems, Vol. 104 Nos 1/2, pp. 115-28.
Love, P.E.D., Ghoneim, A. and Irani, Z. (2004), Information technology evaluation: classifying indirect costs using the structured case method, Journal of Enterprise Information Management, Vol. 17 No. 4, pp. 312-25.
Love, P.E.D., Irani, Z. and Edwards, D.J. (2005), Researching the investment of information technology in construction: an examination of evaluation practices, Automation in Construction, Vol. 14 No. 4, pp. 569-82.
Lubbe, S. and Remenyi, D. (1999), Management of information technology evaluation - the development of a managerial thesis, Logistics Information Management, Vol. 12 Nos 1/2, pp. 145-56.
Lyytinen, K. and Hirschheim, R. (1987), Information systems failures: a survey and classification of the empirical literature, Oxford Surveys in Information Technology, Vol. 4, pp. 257-309.
McAfee, A. (2002), The impact of enterprise information technology adoption on operational performance: an empirical investigation, Production and Operations Management, Vol. 11 No. 1, pp. 33-53.
McAulay, L., Doherty, N. and Keval, N. (2002), The stakeholder dimension in information systems evaluation, Journal of Information Technology, Vol. 17 No. 4, pp. 241-55.
Mahmood, M.A. and Mann, G.J. (1993), Measuring the organizational impact of information technology investment: an exploratory study, Journal of Management Information Systems, Vol. 10 No. 1, pp. 97-122.
Marsh, L. and Flanagan, R. (2000), Measuring the costs and benefits of information technology in construction, Engineering Construction & Architectural Management, Vol. 7 No. 4, pp. 423-35.
Melville, N., Kraemer, K. and Gurbaxani, V. (2004), Review: information technology and organizational performance: an integrative model of IT business value, MIS Quarterly, Vol. 28 No. 2, pp. 283-322.
Mirani, R. and Lederer, A. (1998), An instrument for assessing the organizational benefits of IS projects, Decision Sciences, Vol. 29 No. 4, pp. 803-38.
Mohd. Yusof, M., Papazafeiropoulou, A., Paul, R.J. and Stergioulas, L.K. (2008), Investigating evaluation frameworks for health information systems, International Journal of Medical Informatics, Vol. 77 No. 6, pp. 377-85.

Mukhopadhyay, T. and Cooper, R.B. (1993), A microeconomic production assessment of the business value of management information systems: the case of inventory control, Journal of Management Information Systems, Vol. 10 No. 1, pp. 33-56.
Mukhopadhyay, T., Kekre, S. and Kalathur, S. (1995), Business value of information technology: a study of electronic data interchange, MIS Quarterly, Vol. 19 No. 2, pp. 137-56.
Oz, E. (2005), Information technology productivity: in search of a definite observation, Information & Management, Vol. 42 No. 6, pp. 789-98.
Palvia, P., Perkins, J. and Zeltmann, S. (1992), The PRISM system: a key to organizational effectiveness at Federal Express Corporation, MIS Quarterly, Vol. 16 No. 3, pp. 277-92.
Palvia, S.C., Sharma, R.S. and Conrath, D.W. (2001), A socio-technical framework for quality assessment of computer information systems, Industrial Management & Data Systems, Vol. 101 Nos 5/6, pp. 237-51.
Pitt, L.F., Watson, R.T. and Kavan, C.B. (1995), Service quality: a measure of information systems effectiveness, MIS Quarterly, Vol. 19 No. 2, pp. 173-85.
Premkumar, G. and King, W.R. (1992), An empirical assessment of information systems planning and the role of information systems in organizations, Journal of Management Information Systems, Vol. 9 No. 2, pp. 99-126.
Ragowsky, A., Ahituv, N. and Neumann, S. (2000), The benefits of using information systems, Communications of the ACM, Vol. 43 No. 11es, pp. 303-11.
Rai, A., Lang, S.S. and Welker, R.B. (2002), Assessing the validity of IS success models: an empirical test and theoretical analysis, Information Systems Research, Vol. 13 No. 1, pp. 50-69.
Remenyi, D. and Sherwood-Smith, M. (1999), Maximise information systems value by continuous participative evaluation, Logistics Information Management, Vol. 12 Nos 1/2, pp. 14-31.
Renkema, T.J.W. and Berghout, E.W. (1997), Methodologies for information systems investment evaluation at the proposal stage: a comparative review, Information and Software Technology, Vol. 39 No. 1, pp. 1-13.
Scott, J.E. (1995), The measurement of information systems effectiveness: evaluating a measuring instrument, ACM SIGMIS Database, Vol. 26 No. 1, pp. 43-61.
Seddon, P., Staples, S., Patnayakuni, R. and Bowtell, M. (1999), Dimensions of information systems success, Communications of the AIS, Vol. 2 No. 20, pp. 1-39.
Seddon, P.B., Graeser, V. and Willcocks, L.P. (2002), Measuring organizational IS effectiveness: an overview and update of senior management perspectives, Database for Advances in Information Systems, Vol. 33 No. 2, pp. 11-28.
Serafeimidis, V. and Smithson, S. (1996), The management of change for information systems evaluation practice: experience from a case study, International Journal of Information Management, Vol. 16 No. 3, pp. 205-17.
Shayo, C., Guthrie, R. and Igbaria, M. (1999), Exploring the measurement of end user computing success, Journal of End User Computing, Vol. 11 No. 1, pp. 5-14.
Skok, W., Kophamel, A. and Richardson, I. (2001), Diagnosing information systems success: importance-performance maps in the health club industry, Information & Management, Vol. 38 No. 7, pp. 409-19.
Sohal, A.S., Moss, S. and Ng, L. (2001), Comparing IT success in manufacturing and service industries, International Journal of Operations & Production Management, Vol. 21 Nos 1/2, pp. 30-45.
Stivers, B.P. and Joyce, T. (2000), Building a balanced performance management system, SAM Advanced Management Journal, Vol. 65 No. 2, pp. 22-9.

Stockdale, R. and Standing, C. (2006), An interpretive approach to evaluating information systems: a content, context, process framework, European Journal of Operational Research, Vol. 173 No. 3, pp. 1090-102.
Stockdale, R., Standing, C. and Love, P.E.D. (2006), Propagation of a parsimonious framework for evaluating information systems in construction, Automation in Construction, Vol. 15 No. 6, pp. 729-36.
Sugumaran, V. and Arogyaswamy, B. (2003/2004), Measuring IT performance: contingency variables and value modes, The Journal of Computer Information Systems, Vol. 44 No. 2, pp. 79-86.
Tangpong, C. (2008), IT-performance paradox revisited: resource-based and prisoner's dilemma perspectives, Journal of Applied Management and Entrepreneurship, Vol. 13 No. 1, pp. 35-49.
Webster, J. and Watson, R.T. (2002), Analyzing the past to prepare for the future: writing a literature review, MIS Quarterly, Vol. 26 No. 2, pp. xiii-xxiii.
Weill, P. (1992), The relationship between investment in information technology and firm performance: a study of the valve manufacturing sector, Information Systems Research, Vol. 3 No. 4, pp. 307-33.
Weill, P. and Olson, M.H. (1989), Managing investment in information technology: mini case examples and implications, MIS Quarterly, Vol. 13 No. 1, pp. 3-17.
Willcocks, L. and Lester, S. (1996), Beyond the IT productivity paradox, European Management Journal, Vol. 14 No. 3, pp. 279-90.

Further reading

Dasgupta, S., Sarkis, J. and Talluri, S. (1999), Influence of information technology investment on firm productivity: a cross-sectional study, Logistics Information Management, Vol. 12 Nos 1/2, pp. 120-9.
Schniederjans, M.J. and Hamaker, J.L. (2003), A new strategic information technology investment model, Management Decision, Vol. 41 Nos 1/2, pp. 8-17.
Thong, J.Y.L. and Yap, C.S. (1996), Information systems effectiveness: a user satisfaction approach, Information Processing & Management, Vol. 32 No. 5, pp. 601-10.

Corresponding author

Govindan Marthandan can be contacted at: marthandan@mmu.edu.my
