
KNOWLEDGE MANAGEMENT AND INFORMATION TECHNOLOGY IN CUSTOMER SUPPORT: A QUANTITATIVE ASSESSMENT OF THE U.S. NAVY'S DISTANCE SUPPORT PROGRAM

by Kevin Dermit O'Malley

JOHN KLOCINSKI, Ph.D., Faculty Mentor and Chair
JOSEPH R. AVELLA, Ph.D., Committee Member
JASLEEN BOMBRA SOBTI, Ph.D., Committee Member
William A. Reed, Ph.D., Acting Dean, School of Business & Technology

A Dissertation Presented in Partial Fulfillment Of the Requirements for the Degree Doctor of Philosophy

Capella University December 2010


© Kevin O'Malley, 2010

Abstract

Centralized customer support is an established industry method used to improve revenue and profitability. What remains unknown is whether this cost-saving commercial business practice is similarly applicable to the unique, often isolated military environment. This research study statistically tested a theoretical framework for knowledge management and information technology leading ultimately to knowledge transfer. To test the theory, the study used the U.S. Navy's Distance Support program, adopted in 2002 in an attempt to reduce excessive spending on labor for technicians and engineers by using advancements in knowledge management and information technology. By taking advantage of centralized call centers with customer support technicians, the Navy contended it could drastically decrease the expensive local support on ships and bases while maintaining an appropriate level of service. The belief was that implementing information technology and leveraging connectivity to provide technical customer support from the shore establishment whenever possible would more efficiently maintain Fleet readiness. Research has shown that increasing the use of centralized customer support reduces dependency on costly local and organic support, but it remained unclear whether the program had achieved this and other long-range objectives. By using metrics that illustrated return on investment, results of this study demonstrated the value of knowledge management in the military by determining for the first time that a worldwide customer support program has succeeded under extreme conditions, realizing the ultimate program goals of increasing efficiency and maintaining level of service.

Acknowledgments

I must first thank my mentor and adviser, Dr. John Klocinski, whose motivation, skillful evaluation, and patient guidance ensured that I would complete this demanding project. His confidence in me inspired my success. I also acknowledge my other dissertation committee members, Dr. Jay Avella and Dr. Jasleen Sobti, who provided a thorough review of the manuscript, constructive feedback, and timely words of encouragement along the way. I sincerely appreciate the efforts of Mr. Tarey Gettys, who painstakingly downloaded and sent to me dozens of files of customer support data for the study. I am also indebted to Mr. Jeff Lockemy, who removed bureaucratic obstacles, offered support, and made the research possible. I am extremely grateful to Mr. David Stahr, who offered invaluable insight, reviewed the study on the Navy's behalf, and shed so much light on the value of this research. Thanks also to Mr. Ben Oh, who contributed customer satisfaction survey data from Navy users around the world. I must recognize my father, the late Capt. Charles Aloysius O'Malley, USNR, who raised eight children but always found time to read to me as a child. He convinced me of the power of education. I also owe gratitude to my mother, Mrs. Patricia Jean McDermitt O'Malley, whose compassion and values live with us and our children. I finally acknowledge the remarkable patience of our children, Megan, John, and Patrick, who endured the PhD process with a mature understanding of why Daddy was always on the computer. Most of all, I thank my beautiful wife, Rosie, for her inspiration and unending support. Thank you for believing in me and keeping me focused. This is as much your work as mine.


Table of Contents

Acknowledgments
List of Tables
List of Figures

CHAPTER 1. INTRODUCTION
    Introduction to the Problem
    Background of the Study
    Statement of the Problem
    Purpose of the Study
    Rationale
    Research Questions
    Significance of the Study
    Definition of Terms
    Assumptions and Limitations
    Nature of the Study
    Organization of the Remainder of the Study

CHAPTER 2. LITERATURE REVIEW
    Introduction to the Literature Review
    Customer Support
    Knowledge Management
    Information Technology
    Summary of the Literature Review

CHAPTER 3. METHODOLOGY
    Research Design
    Key Variables
    Setting
    Sample
    Instrumentation and Measures
    Data Collection
    Data Analysis
    Validity and Reliability
    Ethical Considerations

CHAPTER 4. RESULTS
    Introduction
    Valid Trouble Ticket Cases
    Data Analysis and Results

CHAPTER 5. DISCUSSION, IMPLICATIONS, RECOMMENDATIONS
    Summary and Discussion of Results
    Conclusions
    Limitations
    Recommendations for Practice
    Recommendations for Future Research

REFERENCES

APPENDIX A. CUSTOMER SUPPORT REQUEST FORM
APPENDIX B. CUSTOMER SATISFACTION SURVEY
APPENDIX C. MOST ACTIVE NAVAL REGIONAL SUPPORT CENTERS

List of Tables

Table 1. Variables Exported from the CRM Database
Table 2. Variables Exported from the Customer Satisfaction Database
Table 3. Key Variable Relationships
Table 4. Estimated Population of Cases and Corresponding Sample Sizes for Each Year
Table 5. Support for Alternative Hypotheses
Table 6. Resolution Method Frequency Distribution by Year for the Population and Random Sample
Table 7. Relationship of the Research Questions, Variables, and Statistical Methods Used to Determine Statistical Significance of Correlation
Table 8. Resolution Method Frequency Table
Table 9. Program Year vs. Resolution Method Frequency Distribution
Table 10. Program Year vs. Resolution Method Chi-Square Test Results
Table 11. Program Year vs. Resolution Method Directional Measures
Table 12. Program Year vs. Resolution Method Symmetric Measures
Table 13. Resolution Time Descriptive Data
Table 14. Program Year vs. Mean Resolution Time Detail
Table 15. Resolution Time Normal Distribution Test
Table 16. Program Year vs. Resolution Time Mean Rank
Table 17. Program Year vs. Resolution Time Kruskal-Wallis Test Results
Table 18. Program Year vs. Resolution Time Correlation Test Results
Table 19. Customer Satisfaction Score Frequency Table
Table 20. Customer Satisfaction Survey Score Central Tendencies
Table 21. Customer Satisfaction Survey Score Normal Distribution Test
Table 22. Program Year vs. Customer Satisfaction Survey Score Mean Rank
Table 23. Program Year vs. Customer Satisfaction Kruskal-Wallis Test Results
Table 24. Program Year vs. Customer Satisfaction Somers' d Test Results
Table 25. Program Year vs. Customer Satisfaction Score Correlation Test Results
Table 26. Resolution Method vs. Resolution Time Mean Rank
Table 27. Resolution Method vs. Resolution Time Kruskal-Wallis Test Results
Table 28. Resolution Method vs. Resolution Time Correlation Test Results
Table 29. Resolution Method vs. Customer Satisfaction Score Mean Ranks
Table 30. Resolution Method vs. Customer Satisfaction Kruskal-Wallis Test Results
Table 31. Resolution Method vs. Customer Satisfaction Survey Score Correlation Test Results
Table 32. Resolution Time vs. Customer Satisfaction Mean Ranks
Table 33. Resolution Time vs. Customer Satisfaction Kruskal-Wallis Test Results
Table 34. Resolution Time vs. Customer Satisfaction Directional Measures
Table 35. Resolution Time vs. Customer Satisfaction Score Correlation Test Results

List of Figures

Figure 1. Hierarchy of Knowledge Transfer Components
Figure 2. Conceptual Framework
Figure 3. Trouble Ticket Frequency Distribution by Year
Figure 4. Resolution Method Frequency Distribution
Figure 5. Resolution Time Frequency Distribution
Figure 6. Program Year vs. Mean Resolution Time
Figure 7. Resolution Time Normal Probability Plot
Figure 8. Customer Satisfaction Survey Score Frequency Distribution
Figure 9. Program Year vs. Customer Satisfaction Mean Scores
Figure 10. Customer Satisfaction Normal Probability Plot
Figure 11. Resolution Time vs. Customer Satisfaction Survey Score Mean Distribution

CHAPTER 1. INTRODUCTION

Introduction to the Problem

Centralized customer support is an established industry method used to improve revenue and profitability (Das, 2003; Negash, 2001). What remains unknown is whether this cost-saving commercial business practice can apply similarly to a unique, often isolated military environment. This research study statistically tests a theoretical framework for knowledge management and information technology that ultimately leads to knowledge transfer. To test the theory, the study uses the U.S. Navy's Distance Support program, adopted in 2002 in an attempt to reduce excessive spending on labor for technicians and engineers by using advancements in knowledge management and information technology. In addition to the knowledge management and information technology elements, the study investigates the customer support domain, which is of increasing importance to military organizations.

Over 800,000 active, reserve, and civilian personnel make up the U.S. Navy and Marine Corps team (Winter, 2008), and one of their primary missions is to maintain combat-ready naval forces capable of winning wars (U.S. Navy, n.d.). These combat forces need responsive support. When technical problems with critical combat systems are beyond the expertise of the sailors onboard, the U.S. Navy and Marine Corps have traditionally relied on the services of civilian field technicians and engineers stationed strategically on ships and at bases around the world. For decades, the U.S. Navy spent a disproportionate amount of defense dollars on local technicians to support a multitude of communication, computer, imagery, and intelligence systems, including radars, strike planning tools, transmitters, and fire control systems (Hanley, 2001). The technicians provided customer support to Navy and Marine Corps personnel at hundreds of locations at or near military installations.

The U.S. Navy adopted the Distance Support program in 2002 in an attempt to reduce excessive spending by using advancements in knowledge management (KM) and information technology (IT). By taking advantage of centralized call centers with customer support technicians, the Navy contends it can drastically decrease the expensive local support on ships and bases while maintaining an appropriate level of service. The belief is that implementing IT and leveraging connectivity to provide technical customer support from the shore establishment whenever possible will more efficiently maintain Fleet readiness (Commander, Space and Naval Warfare Systems Command [SPAWAR], 2005). A critical objective of the Distance Support program is to assist sailors remotely by providing timely troubleshooting and repair information, which should reduce the number of required physical visits to ships and other Navy sites (SPAWAR, 2005). Increasing the use of centralized customer support providers likely reduces dependency on costly local and organic support, but results from formal research were not available to determine if the Navy had achieved this and other long-range objectives of the program.

The Navy Distance Support (DS) policy is to re-engineer the support infrastructure to optimize the balance of organic and shore-based support requirements in operating units by moving support to centralized providers (Chief of Naval Operations, 2007). The study addressed whether the balance of support assets was optimal and, specifically, whether combat system support had been moving to centralized providers.

Organizations in commercial industry have learned to achieve a competitive advantage through centralized customer support, which has resulted in the rapid growth of the field in the current business environment. Experts predicted overall growth in employment for customer support representatives of 18% between 2008 and 2018 (Bureau of Labor Statistics, 2009). By providing front-line support, customer support centers shape much of the perception customers have about the organization (Hillmer, Hillmer, & McRoberts, 2004). Despite the frequent use of customer support services, understanding of how customers experience its use in their interactions with organizations is vague (Roos & Edvardsson, 2008). Many studies have identified the need for analytical tools in logistics systems to measure customer support in specific settings and to quantify both customer satisfaction and the revenue produced by customer support (Goffin, 1999; Maltz & Maltz, 1998). For example, Goffin suggested research should determine if a relationship is present between a strong focus on customer support and higher market share. Market share in the military could translate to favored and repeated use of a particular resource or service. The intent of this study was to assess the performance of a military centralized customer support program against a backdrop of successful implementation in commercial industry. Generally, business literature has regarded customer support as a neglected area in need of research priority (Hull & Cox, 1994; Loomba, 1996).

The Navy's Distance Support program aims to transition traditional support infrastructure and business processes to the tools and technology of e-business and IT (Brandenburg, 2002). Using an IT foundation, customer support programs can clearly demonstrate the value of KM because metrics are readily available to illustrate return on investment. Formal analysis of the DS program helped to verify the increase in the use of centralized customer support providers and determine if the program had reduced cost to the U.S. Navy. Because the DS program was a new way to facilitate timely technical assistance, education tools, and logistic support in the military, a formal research study was justified to answer unresolved questions of effectiveness.

Background of the Study

Decades of research have shown that after-sales customer support is an important aspect of maintaining a stable customer base. A study by Tourniaire and Farrell (1997) demonstrated that customer support is an essential component of customer satisfaction. Lele and Sheth (1987) showed that customer support determines customer satisfaction, and that customer support quality was the deciding factor in customer satisfaction ratings. Many studies have evaluated the effectiveness of customer support systems in ensuring customer satisfaction (Armistead & Clark, 1992; Athaide, Meyers, & Wilemon, 1996; Cespedes, 1995; Christopher, Payne, & Ballantyne, 1991; Teresko, 1994). Researchers also have used multiple measures to evaluate the factors responsible for customer satisfaction (Berman, 2005; Khatibi, Ismail, & Thyagarajan, 2002; Robinson & Morley, 2006; Zemke, 2002). A study by Negash (2001) used information quality, system quality, and service quality as factors to demonstrate the effectiveness of web-based customer support systems. Pitt, Watson, and Kavan (1995) found that information quality and system quality had a significant impact on the effectiveness of a web-based customer support system, while service quality had no statistically significant impact on these support systems.

Knowledge management leveraged by IT has played a key role in improving customer support practices. During the 1990s, knowledge capture, sharing, and reuse became widespread and commonplace within customer support organizations in commercial industry (Davenport & Klahr, 1998). Customer support managers in industry were generally unaware that they were explicitly engaged in KM. Instead, their focus was on solving significant business problems, most notably the growing cost of customer support and the use of customer support as a key business factor in retaining and expanding market share. The background research on the use of IT showed it was also a major component in enhancing the productivity of customer support (Das, 2003). Numerous studies have shown that organizations can create customer value with IT through improved customer service (Feeny, 2001; Ives & Mason, 1990; Ives & Vitale, 1988). Research has also demonstrated that the Internet has created an easily accessible and relatively affordable link between an organization and its customers, and has provided increasing opportunities to use IT for customer support (Cenfetelli & Benbasat, 2002; Feeny, 2001; Piccoli, Spalding, & Ives, 2001). Singh (2008) found that in recent years, more companies have moved customer support services either completely or in part to the Internet, a trend likely to continue.

In 1997, capitalizing on new advancements in KM and IT, the Naval Surface Warfare Center (NSWC), a component of the Naval Sea Systems Command, initiated a customer support proposal called the Sailor to Engineer program. This initiative became the predecessor of the Navy's Distance Support program. The ultimate goal of the Sailor to Engineer program was to improve the maintenance approach, testing procedures, and maintenance delivery for the Fleet (Hanley, 2001). The Navy's Warfare Centers perform the primary research, development, test, and evaluation activities for submarine and ship combat systems. The Naval Surface Warfare Center is also responsible for developing the maintenance procedures for Fleet systems that logistics personnel use, published in technical manuals. By providing the latest validated technical information in the most cost-efficient way possible, NSWC demonstrated it could increase the productivity of Fleet maintenance (Hanley, 2001). In addition to call center customer support, the initiative collected maintenance information and procedures from engineers and posted them in a common repository on a website. The Sailor to Engineer program was successful, and evidence from the Fleet showed that the popularity of the program was primarily due to the trust and confidence sailors and technicians had in finding the necessary information in a reasonable amount of time (Hanley, 2001). Sailors also cited the ability to do self-help using the knowledge base available on the website, and the reliability of the help desk in forwarding callers to the right subject matter expert when needed (Hanley, 2001).

The remarkable success of the Sailor to Engineer program led to a Fleet-wide initiative: the Navy's DS program, a greatly expanded effort to improve the efficiency of Fleet maintenance. Similar to the Sailor to Engineer program, the DS program instituted requirements for detailed Fleet support metrics from the program's inception. The DS program grew into other areas of Fleet support to take advantage of industry best practices that used KM and IT. The program encompasses three distinct concepts: customer support, KM, and IT (Brandenburg, 2002). The customer support concept means helping those who request assistance. The KM concept is process related and means (a) understanding who helped whom at what time and (b) analyzing the metrics for future resource allocation. The IT management concept means using an assortment of web-based and IT tools to resolve issues without local or organic support.

The current DS program, like the Sailor to Engineer program before it, relies upon a repeatable process of handling customer requests in a timely, efficient manner. This process had its genesis in a mutually agreed upon set of business rules governing program operation, and the rules apply to all categories of shipboard equipment except those related to Naval nuclear reactor plant systems (Naval Sea Systems Command [NAVSEA], 2009). The goal of the DS program is to have all support request data recorded in one authoritative enterprise CRM system for generation of universal and near real-time, Navy-wide metrics (NAVSEA, 2009). By assessing the DS program using variables linked to the program's key enablers of customer support, KM, and IT, the intent of the current study was to bridge the gap between existing commercial customer support measurement tools and the unique military environment of warships at sea.

Statement of the Problem

Advanced customer support practices and tools in industry have led to increased efficiency, lowered cost, and more satisfied customers. It remains unknown, however, if the intervention of KM and IT has benefited customer support in a unique and challenging military environment. The setting for this study is characterized by customers operating in the remote and often isolated global environment of warships at sea. Bartczak (2002) maintained that military leaders must effectively integrate KM and IT to achieve knowledge transfer, but indicated no military customer support program had demonstrated the theory. Using variables linked to the military program's key enablers of customer support, KM, and IT, the current study addressed the gap in understanding between commercial industry and military practice by quantitatively assessing the U.S. Navy's DS program.

Before adopting the DS program, the U.S. Navy spent considerable funding supporting combat systems with local technicians who provided customer support to sailors and Marines worldwide. The Chief of Naval Operations (CNO) expressed his concern about the size and inefficiency of the Navy's support infrastructure and directed its reengineering to support an optimally manned force (U.S. Fleet Forces Command, 2008). One of the main objectives of the DS program has been to remotely assist sailors by providing timely information to maintain, troubleshoot, and repair systems, which should reduce the number of required physical visits to ships and other military sites (SPAWAR, 2005). Industry has long demonstrated that the use of centralized customer support providers reduces dependency on costly local and organic support while maintaining the same level of service. Although the Navy's DS program has capitalized on advancements in KM and IT, whether it has achieved similar long-range results has been unclear. Generally, the lack of quantitative research on the impact of KM and IT on competitive advantage in government organizations remains an issue (Schulte & Sample, 2006). The findings of the current study added to understanding of this area of research.

Purpose of the Study

The purpose of the study was to assess the impact of the KM and IT intervention on competitive advantage within a military customer support program. The requirements of commercial and military KM initiatives may differ because the different objectives of military and commercial organizations require different philosophies, methodologies, and technologies to achieve success (Schulte & Sample, 2006). Many studies have agreed that technology is not the most important element of knowledge integration, but they also agreed that IT can enable knowledge integration and contribute to efficiencies in organizations (Barney & Ketchen, 2001; Nakata, Zhu, & Kraimer, 2008; Schulte & Sample, 2006). The current study had a theoretical framework based on an IT foundation, supporting an information management and KM layer, which then supported a knowledge capture layer, resulting in knowledge transfer in the form of customer support. Knowledge transfer to the customer is the goal of customer support centers using DS methods. Through this lens, the study quantitatively assessed a military customer support program.

Theoretically, knowledge transfer is a higher-order concept than KM and requires knowledge capture, KM, and knowledge distribution through IT (Bartczak, 2002). These components require separate approaches and technologies, effectively integrated, to achieve the knowledge transfer necessary for learning organizations (Bartczak, 2002). In the practice of customer support, the Navy's DS program management recognizes KM and IT as the principal enablers for successful knowledge transfer at its customer support call centers. The current study was important to advance the theory that knowledge transfer has a hierarchical relationship with KM and IT as it applies in a military customer support environment.

The impetus for assessing the DS program came from the direction of the Chief of Naval Operations to "more efficiently maintain the combat systems on ships and aircraft while maintaining or improving the level of service" (U.S. Fleet Forces Command, 2003, p. 2). The study used variables that measured efficiency and level of service. Although many different variables could define level of service, studies show that customer satisfaction is one of the most useful measures of level of service (Berman, 2005; Khatibi et al., 2002; Parasuraman, Zeithaml, & Berry, 1988, 1991). For the purposes of this study, the customer defined the level of service, and the level was primarily exhibited in an overall measure of customer satisfaction.

Navy initiatives such as the DS program should assess their progress on a regular basis to ensure they are meeting their goals. Today's organizations are complex and dynamic, and even the best strategies and plans are no guarantee of success. Well-designed metrics, however, produce the kind of insight that helps military leaders understand and adjust to the trends only in-depth analysis can reveal. Instituting performance measures has become important to success in organizations in both public and private industry. Many federal laws, such as the Clinger-Cohen Information Technology Management Reform Act of 1996, explicitly require formal metrics plans to be in place at the start (Hanley, 2001).


According to Space and Naval Warfare Center strategic plans (SPAWAR, 2009), an objective of the new initiatives in customer support was to weigh both the tangible and intangible factors associated with customer satisfaction, customer loyalty, and customer advocacy. Determining the effectiveness of the Navy's DS program contributes valuable lessons in this unique customer support environment, characterized by warships at sea operating with low network bandwidth and limited electronic communication channels. The research established a base of information for use by military leaders in developing, implementing, and tracking continuous improvement initiatives.

Rationale

Many studies have outlined the need to better assess customer support effectiveness in particular environments (Davenport & Klahr, 1998; Goffin, 1999). The current study built upon the research of Bartczak (2002), Robinson and Morley (2006), Schulte and Sample (2006), and Nakata et al. (2008). Military organizations have a unique context requiring deployment of KM and IT to operate successfully (Bartczak, 2002). The study bridged the knowledge gap between the customer support practices of industry and the practices of a military customer support program characterized by customers operating in the remote and often isolated global environment of warships at sea. Because of the unique cultural and structural attributes of the U.S. Navy, the organization must fully understand the managerial, resource, and environmental factors that influence customer support. The terms, constructs, and variables examined came from a clearly identified theoretical framework. This framework also represented the theory and knowledge providing the foundation of the study.

Quantitative analyses of customer support data have measured actual system performance and informed customer support process redesign, for example, in studies by Brown (2003) and Doomun and Jungum (2008). The results of quantitative analysis are suitable for estimating all the relevant parameters and drawing conclusions from a thorough analysis of the model data. Doomun and Jungum (2008) maintained that one common difficulty is the lack of relationship between data stored at the individual trouble ticket level and the downstream aggregate data tracked by a customer support center's planning systems. Collection of high-quality data and subsequent in-depth statistical analysis is an important requirement for better understanding of customer support in different environments.

Outside of the U.S. Department of Defense, industry has used DS methods for years to serve a wide variety of customers. The war on terrorism and its various levels and dimensions of threat to the United States has magnified and accelerated the need for more flexible, rapid KM security strategies and systems (Bixler, 2002). The U.S. Navy has begun to use powerful systems to offer technical support, developing shared systems for collaboration at sea to solve the most difficult problems. Implementing the lessons of industry, the Navy has continued to build connectivity, develop the nodes, and facilitate the flow of information. The tremendous utility of DS management technology in industry has been the implementation rationale for the Department of the Navy (DoN).

A strategic objective of the Navy's principal research fleet support agency is to transform customer data into meaningful information, which continually drives proactive improvement of the total customer experience (SPAWAR, 2009). An important goal of the DS program is to develop and sustain a holistic view of the Navy customers in terms of contribution to warfighter knowledge superiority and customer CRM performance, and an end-to-end quantitative study of the program facilitates that effort. Also, Naval customers and managers have requested DS metrics for managing business intelligence to make managerial decisions affecting costs, products, processes, and Navy policy (Stahr, 2010). After years of data collection, this program assessment determined whether this customer support initiative had achieved the program's primary goals and, at another level, whether successful commercial customer support concepts are applicable to a stringent worldwide military setting.

Research Questions and Hypotheses

The management dilemma addressed in this study was whether the intervention of KM and IT had benefited customer support in a unique and challenging military environment. Using variables linked to the program's key enablers of customer support, KM, and IT, the study bridged the knowledge gap between the customer support practices of industry and the practices of a military customer support program, characterized by customers operating in the remote and often isolated global environment of warships at sea. Previous studies have shown that leaders can optimize the customer support requirements in naval operating units by moving support to regional or centralized providers. The research results showed the Navy could maximize the effectiveness and efficiency of shore support and facilitate shore infrastructure reduction through KM and IT advancements. A scarcity of quantitative research on the impact of KM and IT on competitive advantage in government organizations (Schulte & Sample, 2006) remains an issue. This study added to understanding of this area of research.

To determine the success of the DS program, the study analyses used variables measuring both efficiency and level of service. Singh (2008) posited that although efficiency is easier to measure than level of service, a sole focus on efficiency is problematic because it does not sufficiently capture service and may lead to insufficient understanding of service performance. Similarly, Mahesh and Kasturi (2006) observed that while organizations stress the need for customer satisfaction and have an overall strategic intent to acquire and retain their customers through high-quality interfaces, the quality of interaction is often given lower priority than the efficiency of processing customer interactions at customer support call centers.

Research Questions

The study answers the following three groups of research questions:

1. Have KM and IT advancements in customer support improved efficiency in a restrictive, often isolated, offshore environment? Specifically, is there a relationship between the number of years of implementing distance support tools (program year) and the percentage of trouble tickets resolved using remote assistance methods, as measured by the Navy's customer support database? Is there a relationship between the program year and trouble ticket resolution time? Is there a relationship between the program year and level of service, as measured by customer satisfaction surveys over an eight-year period?

2. Is there a relationship between the support request resolution method and level of service? Specifically, is there a relationship between the method used to deliver the support and customer satisfaction, as measured by customer satisfaction surveys over an eight-year period? Is there a relationship between the method used to deliver the support and trouble ticket resolution time, as measured by the customer support database?

3. Is there a relationship between trouble ticket resolution time and perceived level of service, as measured by customer satisfaction surveys over an eight-year period?

Hypotheses

Based on these research questions, the following null hypotheses appear:

H1₀: There is no significant relationship between the program year and resolution method. The program year is the independent variable and the dependent variable is resolution method (distant/local).

H2₀: There is no significant relationship between the program year and trouble ticket resolution time. For this hypothesis, the independent variable is the program year and the dependent variable is resolution time.

H3₀: There is no relationship between the program year and customer satisfaction. The independent variable is again the program year and the dependent variable is customer satisfaction.

H4₀: There is no significant relationship between the resolution method and resolution time. The resolution method is the independent variable and the dependent variable is resolution time.

H5₀: There is no significant relationship between the resolution method and customer satisfaction. The resolution method is again the independent variable and the dependent variable is customer satisfaction.

H6₀: There is no significant relationship between the trouble ticket resolution time and customer satisfaction. The resolution time is the independent variable and the dependent variable is customer satisfaction.

From these null hypotheses, the following alternative hypotheses emerged:

H1a: There is a relationship between the program year and resolution method.

H2a: There is a relationship between the program year and trouble ticket resolution time.

H3a: There is a relationship between the program year and customer satisfaction.

H4a: There is a relationship between the resolution method and resolution time.

H5a: There is a relationship between the resolution method and customer satisfaction.

H6a: There is a relationship between the trouble ticket resolution time and customer satisfaction.
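To make the planned tests concrete, the following sketch (in Python with SciPy) shows how the first two null hypotheses could be evaluated, consistent with the chi-square and Kruskal-Wallis procedures reported in Chapter 4 (Tables 10 and 17); every count and resolution time in it is an invented placeholder, not study data.

    # Illustrative sketch only: the procedures match the tests the study
    # reports, but the data below are hypothetical placeholders.
    import numpy as np
    from scipy.stats import chi2_contingency, kruskal

    # H1: program year vs. resolution method (both categorical). Rows are
    # program years; columns are on site, transition, and distance support
    # counts.
    contingency = np.array([
        [120, 40, 340],   # year 1 (hypothetical counts)
        [100, 35, 410],   # year 2
        [ 80, 30, 480],   # year 3
    ])
    chi2, p_chi2, dof, expected = chi2_contingency(contingency)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_chi2:.4f}")

    # H2: program year vs. resolution time. Because resolution times are not
    # normally distributed, groups are compared with the nonparametric
    # Kruskal-Wallis test rather than a one-way ANOVA.
    year1_hours = [4.0, 7.5, 12.0, 30.0, 2.5]
    year2_hours = [3.5, 6.0, 9.0, 22.0, 2.0]
    year3_hours = [2.0, 5.5, 8.0, 15.0, 1.5]
    h_stat, p_kw = kruskal(year1_hours, year2_hours, year3_hours)
    print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.4f}")

    # A p-value below the chosen alpha (e.g., .05) would reject the null
    # hypothesis of no relationship in favor of the alternative.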

Significance of the Study

The goal of the DS program is to reduce dependence on costly local and organic support by increasing the use of centralized customer support providers. The research results determined whether the military could maximize the efficiency of customer support through knowledge transfer from a foundation of KM and IT. For years, customer support personnel have assumed they are resolving increasing numbers of issues through DS methods such as the telephone and e-mail. Repairing a problem with information or remote diagnostic services is much less expensive than physically visiting a site to repair the problem. The U.S. Navy has attempted to use the same cost-saving KM and IT practices as industry, but whether the practices had achieved the desired effect remained unknown. The study added to the existing body of knowledge in the field of customer support by introducing a valid, reliable, and objective method for assessing customer support efficiency and level of service in this unique military environment. Addressing the knowledge gap in customer support, the study resolved management questions about whether the military service could implement the same proven practices as industry and achieve the same result.

The Chief of Naval Operations directed the Navy to build the right Fleet, deploy the right aircraft, and maintain the right shore infrastructure required to support them (CNO, 2007). This direction also appeared in the Navy's 2006 Business Transformation Priorities, which called for the development and maintenance of a "secure, seamless, interoperable Information Management/Information Infrastructure" (CNO, 2007, p. 1). Remotely supporting complex electronic systems should help reduce the knowledge and skills required to operate and maintain equipment, and should improve the quality, quantity, and compatibility of collected field data. Distance support also increases the U.S. Navy's capability to collect and process all relevant reliability, maintainability, availability, logistics, and readiness data required to implement cost-effective sustainability strategies. An assessment of available information to determine whether such measures have occurred is valuable to the most senior naval leadership. A comprehensive, accurate assessment of the DS program years after its implementation may help Navy leaders decide how to continue technical customer support for sailors and Marines around the world. The study also demonstrated that the military has circumvented or overcome KM and IT integration barriers identified in previous studies.

Definition of Terms

Definitions of key terms used in this study follow. Several are unique to the environment of the U.S. Navy, while others have multiple meanings, depending on the context in which they appear. The following are key terms and abbreviations that appear frequently in this study. An understanding of the intended meaning in the context of this study, and of the environment providing that context, greatly assists in understanding specific portions of the study and the study as a whole.

Call center. A component of an organization that executes customer care by telephone (Grebner et al., 2003). Call centers typically conduct initial call recording, but not necessarily call resolution.

Customer. Navy units and their Chains-of-Command that demand distance support products and services to attain and sustain a capable state of mission readiness (Brandenburg, 2010).

Customer relationship management (CRM). An information industry term for methodologies, software, and usually Internet capabilities that help an enterprise manage customer relationships in an organized way.

Customer satisfaction. A post-service evaluative judgment of a service encounter resulting in a pleasurable end-state, based on a combined assessment of the performance of service factors that constituted that service (Oliver, 1980).

Customer support representative (CSR). Customer support representatives are those assigned to interact with customers to provide technical repairs and answers to inquiries involving products or services.

Distance support (DS). A combination of process and technology that provides the transfer of data, information, and knowledge at the right time, to the right people, at the right place (SPAWAR, 2005).


Global Distance Support Center (GDSC). The call center designated by the DoN as the single point of contact for 24/7/365 Fleet support, accessible by all military, their family members, and all government and contractor employees.

Help desk calls. Requests for assistance made to technical support personnel, whether by telephone, e-mail, in-person, website, or other means.

Fleet. The Fleet is the operational units of the U.S. Navy and Marine Corps.

Information technology (IT). Information technology includes not only tangible hardware, but software, automated processes, communications, and physical infrastructure that connect or support information systems.

Information management (IM). All the processes used to acquire, manipulate, direct, and control information. IM includes all processes involved in the creation, collection and control, dissemination, storage, retrieval, protection, and destruction of information (Air, Land, Sea Application Center [ALSA], 2003, p. I-1).

Knowledge management (KM). The capacity or processes within an organization to maintain or improve organizational performance based on experience and knowledge, encompassing the aspects of content, process, culture, and infrastructure (Pan & Scarborough, 1999).

Portal. A website that is or proposes to be a major starting site for users when they connect to the Web, or a website users tend to visit as an anchor site (Hanley, 2001). There are general portals and specialized or niche portals. Some general portals include Google and Yahoo. Specific examples of portals for the U.S. Navy include Navy.mil for general information and news, Anchordesk.navy.mil for support, and Homeport.navy.mil for the Navy/Marine Corps Intranet (NMCI) network.

Assumptions and Limitations

Assumptions in the study included that users completed trouble tickets for all problem resolutions using DS. Analyzing trouble tickets is not a valid measure if technicians do not use trouble tickets for all projects considered in the study, or if they use trouble tickets but do not enter them in the customer support database. The study design was a secondary data analysis research design, which linked data from large, complex databases. The researcher had to make assumptions about what data to combine and which variables to select in the customer support database. Generally, secondary data analysis uses data collected by others, so problems that occurred in the original data collection are unknown (Trochim, 2006).

A limitation regarding the population of the study was that it addressed mainly combat system trouble tickets, and not issues concerning nuclear power engineering, weapons innovations, ship construction, or other facets of the military support structure. Many organizations, however, contributed to the Navy's Global Distance Support Center (GDSC) network, including the Space and Naval Warfare Center, the Naval Sea Systems Command, and the Naval Surface Warfare Center, all with different divisions contributing to the trouble ticket database from around the world. The sampling error and population size indicated that the statistics were representative of DS statistics throughout the Fleet.

A limitation to reliability in this study was the ability to measure customer support trouble tickets that reflected the correct source of support, either organic/local support or distant support from a centralized call center. The ability to measure this information correctly depended on whether the customer support representative (CSR) completed the trouble ticket accurately. If the CSRs only sometimes completed the trouble tickets correctly, then the measure was not reliable. Too many errors in completing the ticket would cast doubt on the ability to consistently measure the variable in the same way.
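The kind of screening this limitation implies can be illustrated with a minimal, hypothetical sketch (Chapter 4 reports the analogous step under Valid Trouble Ticket Cases); the column names below are assumptions rather than the actual CRM export schema.

    # A minimal sketch, assuming hypothetical column names: keep only tickets
    # whose resolution-method field holds one of the three recognized values.
    import pandas as pd

    VALID_METHODS = {"on site", "transition", "distance support"}

    tickets = pd.DataFrame({
        "ticket_id": [1001, 1002, 1003, 1004],
        "resolution_method": ["distance support", None, "on site", "unknown"],
    })

    is_valid = tickets["resolution_method"].isin(VALID_METHODS)
    valid_cases = tickets[is_valid]

    # A high error rate here would undermine the reliability of the
    # resolution-method measure discussed above.
    error_rate = 1 - is_valid.mean()
    print(f"valid cases: {len(valid_cases)}, error rate: {error_rate:.0%}")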

Nature of the Study

Theoretical Framework

Theory is an interrelated set of constructs formed into propositions or hypotheses that specify the relationships among variables, typically in terms of magnitude or direction (Creswell, 2009). A theory provides a basic approach to understanding the research. The theoretical framework used during this research was a KM component hierarchy presented by Bartczak (2002). The framework originated from an IT foundation, supporting an information management (IM) and KM layer. This IM/KM layer then supported a knowledge capture layer. In this theoretical framework, IT provided the hardware, software, and communications that made automated information management possible. The IM/KM layer in the component hierarchy consisted of the processes and procedures that enabled IT management and effective use. The IM/KM level focused largely on controlling information to ensure it was usable and reliable. Above this level was the knowledge capture level, which transformed knowledge into information.


[Figure 1 depicts a three-level stack: at the base, the (IT) information technology infrastructure provides data storage, access, and communication; above it, the (IM/KM) information/knowledge management layer provides organization and control; at the top, the (KC) knowledge capture level generates information. The figure summarizes the relation as KT = f(IT + (IM/KM) + KC).]

Figure 1. Hierarchy of knowledge transfer components. Adapted from Identifying Barriers to Knowledge Management in the United States Military, 2002, by S. E. Bartczak. Doctoral dissertation, Auburn University. Copyright 2002 by S. E. Bartczak.

Bartczak (2002) classified the framework as hierarchical because she concluded that knowledge capture is a higher-order phenomenon than KM, and that knowledge transfer is the ultimate goal of KM. In customer support practice, knowledge transfer occurs when customer support call centers resolve a customer support request using DS methods. Bartczak's hierarchy of knowledge transfer components is appropriate as a lens through which to consider KM. The current study determined that variables relating to this hierarchical relationship could demonstrate the knowledge transfer required for successful customer support practices.
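Set in LaTeX, the figure's summary relation reads as follows; the additive form is interpreted here as schematic layering of enabling components rather than an arithmetic sum, an assumption drawn from the hierarchy's description rather than from Bartczak's notation.

    % Knowledge transfer (KT) as a function of the three stacked components:
    % the IT infrastructure, the IM/KM layer it supports, and knowledge
    % capture (KC).
    \[
      \mathrm{KT} = f\bigl(\mathrm{IT} + (\mathrm{IM/KM}) + \mathrm{KC}\bigr)
    \]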


Conceptual Framework

A conceptual framework identifies how the variables, theory, and research interact. Three key theories from the main literature domains in the study justified the choice of the dependent (outcome) variables. The main literature domains were customer support, KM, and IT, all of which were the primary enablers of the DS program. Previous studies within these three domains have shown the Navy can maximize the effectiveness and efficiency of shore support and facilitate shore infrastructure reduction through KM and technology, while at the same time maintaining the level of service.

Concepts of customer support within several research studies have shown that good customer support is essential for achieving customer satisfaction and good long-term relationships (Armistead & Clark, 1992; Athaide et al., 1996; Cespedes, 1995; Christopher et al., 1991; Davidow, 1986; Lele & Sheth, 1987; Teresko, 1994). Customer support directly contributes to competitive advantage (Armistead & Clark, 1992; Davidow, 1986; Goffin, 1998; Hull & Cox, 1994). Customer support increases the success rate of new products and systems and can be a major source of profit for manufacturers (Berg & Loeb, 1990; Hull & Cox, 1994; Knecht, Lezinski, & Weber, 1993), which translates to greater efficiencies for government-funded organizations.

In Navy customer support, KM is a key enabler of the DS program. Customer support strategies have increasingly come to rely on knowledge bases as an integral part of the customer support call center, and they are a primary focus of customer relationship management. Studies agree that KM contributes to efficiencies and other factors of competitive advantage in organizations. Research has identified the knowledge requirements of customer support work and the ways in which productivity relates to knowledge in this demanding activity (Das, 2003).

The current research determined if the recent intervention of KM and IT in customer support has been beneficial to a military service characterized by geographic isolation, low computer connection bandwidth, and limited electronic communication channels. The Chief of Naval Operations' direction for the DS program is to more efficiently maintain the naval forces while maintaining or improving the level of service (U.S. Fleet Forces Command, 2003). Previous studies in commercial industry suggest that moving support to regional or centralized providers could optimize the organic and shore-based support requirements in naval operating units. To quantitatively determine if the DS program has been successful in achieving its goals, the customer support database of trouble tickets and customer satisfaction surveys underwent analysis from the program's inception until the end of 2009, covering an eight-year period. During this time, technicians recorded over 2.5 million trouble tickets, and customers returned thousands of customer satisfaction surveys on those trouble tickets. Variables chosen from these two databases assessed the program from the perspective of a KM and IT intervention. Figure 2 illustrates the conceptual framework and how each of the four key variables related to the six research questions.


[Figure 2 depicts the conceptual framework: the customer support KM/IT intervention is linked to the resolution method, distant or local (Q1), resolution time (Q2), and customer satisfaction (Q3), with Q4 relating resolution method to resolution time, Q5 relating resolution method to customer satisfaction, and Q6 relating resolution time to customer satisfaction.]

Figure 2. Conceptual framework.
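One way the two data sources behind this framework could be linked is sketched below; the column names and year arithmetic are illustrative assumptions, not the actual CRM or survey export schemas.

    # A hypothetical sketch of linking trouble tickets from the CRM export to
    # the satisfaction surveys tied to closed tickets.
    import pandas as pd

    tickets = pd.DataFrame({
        "ticket_id": [1, 2, 3],
        "open_date": pd.to_datetime(["2004-05-01", "2006-07-12", "2009-01-30"]),
        "resolution_method": ["distance support", "on site", "transition"],
        "resolution_hours": [4.5, 96.0, 30.0],
    })
    surveys = pd.DataFrame({
        "ticket_id": [1, 3],            # surveys exist only for some tickets
        "satisfaction_score": [5, 3],   # 1-5 coding of the Likert responses
    })

    # A left join keeps every ticket, with missing surveys left as NaN; the
    # program year counts from the 2002 adoption of the DS program.
    merged = tickets.merge(surveys, on="ticket_id", how="left")
    merged["program_year"] = merged["open_date"].dt.year - 2001
    print(merged)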

The direction of the Chief of Naval Operations to "more efficiently maintain the combat systems on ships and aircraft while maintaining or improving the level of service" (U.S. Fleet Forces Command, 2003, p. 2) defined the Navy's DS program goals. Many variables within the customer support database assisted in determining if combat systems were more efficiently maintained, including support response time, down time, and failure rate. The variables used in the study were those that could best measure efficiency and level of service numerically over an eight-year period.


For this study, the dependent variables chosen from within the customer support database were the method used to resolve technical assistance requests from Fleet customers and the time to resolve the trouble ticket. Concerning the resolution method, the three possible values of this variable most clearly illustrated if combat systems were more efficiently maintained as a result of the intervention of KM tools enabled by IT. Without the intervention of KM and IT, the technical issue would not be resolved using remote (distant) customer support, but rather by a field technician. The call resolution time clearly demonstrated organizational efficiency, because less labor is necessary to resolve the support request. The labor cost of CSRs is the single largest component of the cost of providing customer support services (Das, 2003).

Since the inception of the DS program, the resolution method has been a required selection on every trouble ticket recorded in the Navy. The CSR completes this field on an electronic, web-based trouble ticket form to demonstrate how far the ticket escalates. The values for this variable are on site, selected if an on-site field technician received the problem; transition, selected if remote help could not resolve the problem and the ticket was escalated to a technician requiring a visit to the ship or site; and distance support, selected if technicians resolved the problem without visiting the site. These three possible values best demonstrated if Fleet maintenance had moved towards centralized call centers and was, therefore, more efficient from the perspective of KM and IT.

A sole focus on efficiency, however, does not necessarily produce a desirable interaction from the viewpoint of either the customer or the organization (Robinson & Morley, 2006), so level of service was another variable analyzed. Many variables within the Navy's customer support database were available for analysis to determine if customer support practices had maintained or improved the level of service, including satisfaction survey scores, resolution time, and total actions per trouble ticket. Commercial customer support call centers have various measures of customer satisfaction and loyalty (CommWeb, 2003). Common measures of customer satisfaction include call abandon rates, average speed of answer, percentage of blocked calls, satisfaction surveys, number of times technicians transferred the call, accuracy of response, and first-call resolution (Broetzmann & Grainer, 2005; Monger & Keen, 2004; Saxby, 2005; Schwartz, Ruffins, & Petouhoff, 2007).

For this study, however, the dependent variables chosen to determine level of service were the overall customer satisfaction, determined by a customer satisfaction survey tied to each closed trouble ticket, and the trouble ticket resolution time. Unlike the resolution method, the customer determined customer satisfaction. Due to response rates, satisfaction surveys are not available for every trouble ticket, but a sufficient population was available for valid and reliable analysis. The reason for choosing this particular variable was that its five possible values most clearly demonstrated if the level of service was decreasing, stable, or improving. Organizations must determine what elements of service are important to customers by analyzing customer complaints and gathering customer satisfaction data from customer surveys to provide the level of service quality that meets or exceeds customer expectations (Berman, 2005; Jones & Sasser, 1995). The satisfaction surveys used in the study had a ranking for overall satisfaction with customer support service on the closed trouble ticket. The values for this variable ranged from strongly agree to strongly disagree. This variable best determines whether the customer support service level is decreasing or increasing.
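A hypothetical sketch of how these two measures could be coded for analysis follows; the Likert labels (especially the wording of the middle category) and the example data are assumptions about the survey form, not its actual export values.

    # Coding sketch for the level-of-service measures described above.
    import pandas as pd

    LIKERT_SCORES = {
        "strongly disagree": 1,
        "disagree": 2,
        "neither agree nor disagree": 3,   # assumed middle-category label
        "agree": 4,
        "strongly agree": 5,
    }

    responses = pd.Series(
        ["strongly agree", "agree", "strongly agree", "disagree", "agree"]
    )
    scores = responses.map(LIKERT_SCORES)

    methods = pd.Series([
        "distance support", "on site", "distance support", "transition",
        "distance support",
    ])

    # Frequency tables like Table 8 (resolution method) and Table 19
    # (satisfaction score) summarize how often each value occurs.
    print(methods.value_counts(normalize=True))  # share of tickets per method
    print(scores.value_counts().sort_index())    # satisfaction frequencies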

Organization of the Remainder of the Study

Chapter 2, the literature review, has three major sections. The three sections review academic literature in the primary areas of consideration: customer support, knowledge management, and information technology. The literature review addresses these three topics because they form the conceptual foundation of the DS program (Brandenburg, 2002). The customer support or customer relationship concept means assisting those in need, while the KM concept means analyzing the metrics collected for future resource allocation. In the context of DS, the IT concept is about using IT and web-based tools to resolve issues without local or organic support.

Chapter 3, Methodology, addresses the research methods chosen for the study. It includes a description of the processes used for testing the database, as well as the plan for collecting data. This chapter provides detailed descriptions of the trouble ticket entry form, customer satisfaction survey, and support site information. Chapter 3 also includes a description of the analysis phases, including coding and analyzing the quantitative data.

Chapter 4, Results, presents the data collected and an exploration and analysis of the data in the order of the stated hypotheses. Tables and graphs present the findings of statistical tests. Chapter 5, Discussion, Implications, and Recommendations, identifies and describes suggestions and conclusions derived from synthesis of the literature review, conclusions resulting from analysis of the collected data, and integration of the literature review synthesis with conclusions derived from the research data analysis. Chapter 5 includes recommendations for future research and a summary of the entire research project.


CHAPTER 2. LITERATURE REVIEW

Introduction to the Literature Review

A review of relevant literature presents insight into the three primary areas of consideration: customer support, KM, and IT. This section shows how the relevant studies and theories in customer support practice are the result of refinements in previous research, with regrouping of individual studies into a logical pattern to form the basis for the current study within the U.S. Navy. This section contains summaries of significant studies and establishes connections among concepts developed from different disciplinary orientations. The diverse literature helps explain the nature of technical support problems and their resolution using knowledge transfer within an organization with KM and the IT resources available in the customer support environment.

Customer Support With the growing success of customer support in commercial business practices, the U.S. Navy instituted DS policies in 2002, to more efficiently maintain Fleet readiness by implementing IT and leveraging connectivity to provide technical support from the shore establishment whenever possible (SPAWAR, 2005, p. 1). Attempts at increasing the efficiency of customer support is not a new idea. Customer support carries huge costs, both in people, in the form of salaries and training, and in infrastructure, in 30

the form of computer networks, customer databases, and office facilities. Because customer support has traditionally been a support rather than a profit-making function, companies exert tremendous pressure to keep these costs down while acquiring sufficient support capabilities to satisfy customers (Davenport & Klahr, 1998). This section of chapter 2 highlights the escalation and importance of customer support and customer satisfaction interest in different industries, and explains why the Navy's customer support program has such high visibility in today's environment. Customer support is a growing industry, employing approximately 2.3 million people in 2008, which ranked it among the largest occupations in the United States (Bureau of Labor Statistics, 2009). Research in the customer support field has focused on improving customer satisfaction, developing better training methods for CSRs, reducing costs for companies, and nurturing a loyal customer base (Singh, 2008). Much of this research has consisted of quantitative studies with a positivist theoretical perspective, using survey research and statistical analysis to test theories developed in the seminal works, for example, to determine whether support costs were excessive compared to manufacturing costs. The more recent studies tested how customer support practices affect different industries. Early research on customer support demonstrated that identification of customer expectations regarding product support and the development of cost-effective strategies for meeting those expectations was a major factor in organizational success. For example, one early project used a case study method of three companies that produced agricultural equipment (Lele & Karmarkar, 1983). The research found that product

support encompassed everything that could help maximize the customer's after-sales satisfaction, including parts, service, and warranty, plus operator training, maintenance training, parts delivery, reliability engineering, serviceability engineering, and even product design. The study was a mixed-methods approach using qualitative and quantitative measures, including statistical analysis. In developing a support strategy, the study's key finding was that managers must make trade-offs between effectiveness and cost. The results showed such trade-offs are often quite complicated and must be evaluated carefully. The research by Lele and Karmarkar (1983) is among the earliest to show that defining these expectations of customer support and meeting them effectively can be critical to business success. Lele and Sheth (1987) showed later that customer support determines customer satisfaction, and that customer support quality was the deciding factor in customer satisfaction ratings. The overall satisfaction rating related even more closely to after-sales factors such as parts availability and resale value than to purchase price and equipment design. Many of the early studies on customer support collected data from different case studies and highlighted the key points. Grouping the key points into similar concepts formed concept categories, which became the basis for the creation of a theory. The methodological approaches or paradigms used in the early research, and the underlying philosophic orientations, influenced subsequent research and theory by grouping similar concepts for future studies and defining expectations in the field. For example, studies have demonstrated after-sales support to be crucial both for creating profit and for gaining feedback from consumers about the quality of

the product (Armistead & Clark, 1991). If researchers treat the two activities of manufacture and support in isolation, without recognizing the interaction between them, achievement of an objective of overall customer satisfaction is unlikely. Most significantly, research has found the key factors for success appear to be customer uptime and customer care. Three elements influence these factors: design, manufacture, and support (Armistead & Clark, 1991). In their study, Armistead and Clark used a philosophy of total quality management to link the three areas of design, manufacture, and support, and the three factors appeared in subsequent research studies as factors to help decide on the best routes for success. Working from a constructionist epistemology, the authors analyzed case studies to build a credible mental model and theory about the way a company should evaluate the role customer support plays in its overall competitive positioning. Initial studies on customer support demonstrated that the most significant factor for quality service was rapid and dependable resolution, due primarily to the high failure rates of products (Lele & Karmarkar, 1983). New technologies, such as the systems supported by the Navy's DS program, have led to more reliable but more complex products. These products often have many software-based functions. As a result, the range of customer support has increased to place more importance on basics such as user training and online support (Goffin, 1998). Even the term customer support reflects the increase in other focus areas. In the past, the commonly used term was customer service, but that term did not fully describe the modern industry significance (Clark, 1988). The literature shows that as customer support grew in importance and became


more visible in the organization, businesses developed a more professional approach than they had in the past (Kneckt et al., 1993). Perhaps Robinson and Morley (2006) provided the academic literature on customer support most relevant to the approach in the current study, and their study served as a primary source for the current research. The authors investigated customer support call center management from a management perspective, particularly in determining the key management responsibilities for managing call centers and the key performance indicators used to manage centralized call centers. The design and methodology used a survey of customer support call center managers, followed by in-depth interviews. The authors' primary finding was that confusion exists regarding the strategic intent of customer support. Organizations primarily use customer support call centers as a method to reduce costs, with customer support delivery a secondary consideration (Robinson & Morley, 2006). Customer support managers, however, declared customer support as their main management responsibility. Robinson and Morley used both quantitative and qualitative metrics in the research. The practical implications of the study demonstrated that managers concentrated on the call itself rather than the outcome of the call from the perspective of either the customer or the organization. Some quantitative measures served as substitutes for customer satisfaction, but managers treated achieving relevant key performance indicators as a goal in its own right. Robinson and Morley (2006) emphasized that organizations have an insatiable appetite for quantitative performance measures, despite their limitations, almost to the exclusion of all other performance measures. The implication of the results for customer support

managers and other leadership in the field is that managers could better manage customer support if organizations used a wider range of means and measures. Robinson and Morley (2006) are influential because the findings show that measuring customer satisfaction is just as important as measuring efficiency, and a sole focus on efficiency does not necessarily produce a desirable interaction from a customer's viewpoint or even from the organization's viewpoint. The findings supported the current study's balanced approach of using key performance indicators from both the Navy's customer support database and the Navy's customer satisfaction survey database as quantifiable variables for overall efficiency and level of service. Relating to this study's theoretical framework, knowledge transfer in customer support is more than just measuring efficiency, and knowledge transfer is more likely to occur in an environment that fosters a quality level of service. Research provides considerable measurable evidence on the significance of customer support in different industries. The evidence from different studies demonstrated the advantages of customer support, both in the role it plays in achieving customer satisfaction and in the revenue it generates. Goffin's (1999) study used case study methodology to investigate the distribution channels for customer support used by companies in five different industries. By investigating widely different industries, the research demonstrated for the first time that customer support is important in sectors other than computing. In addition, the case studies illustrated the variety of distribution channels available for customer support. Goffin (1999) drew four main conclusions from the study. First, customer support appeared to be important in industries where equipment is complex and therefore difficult

to install or to learn to use; where breakdowns occur relatively frequently or have serious financial or other consequences for the owner; and where cost-of-ownership is significant. Second, the nature of the product and market characteristics, such as user skill levels, largely determines the key elements of customer support. Third, companies need to select the correct customer support distribution channels to determine the best combination to meet their needs and customers' requirements. Fourth, in determining a customer support strategy, companies need to consider the main product or service issues that will give them competitive advantage. Customer support for technically complex products has grown so much that many organizations now have the view that "an unsupported product is hardly considered a product at all" (Pentland, 1995, p. 51). For many businesses, technical customer support is not only a competitive necessity, but also a potential source of revenue in markets where profits from product sales are increasingly restricted by price competition (Das, 2003). A productive customer support function can improve the sales revenue as well as the profitability of a firm. In addition to its business significance, customer support also represents a unique form of knowledge work. The outputs of technical support are data, advice, plans, and diagnoses, all of which customers value for their information content (Das, 2003). As a form of work, customer support is one where "the routine of work is made up of the emergencies of other people" (Hughes, 1971, p. 316), which makes the work non-routine and time critical. Technical support, and the technicians who provide it, also represent a new form of work in modern organizations, a form not well explained by traditional roles of laborers and professionals (Barley, 1996). Simon (1981) found that the taxing

environment within which technical support often takes place provides a unique opportunity to learn about factors limiting the productivity of this kind of work. Customer service representatives provide a valuable link between customers and the companies that produce the products they buy and the services they use (Bureau of Labor Statistics, 2009). Research by Spencer-Matthews and Lawley (2006) advanced theory about this crucial customer support link. Their study identified the need for firms to consider customer contact management as an avenue for differentiation and competitive advantage, as well as providing guidelines for the successful implementation of customer support. The purpose of the Spencer-Matthews and Lawley study was to better understand why firms should incorporate individualized communications into customized customer support service and how firms should implement customer support management. The Navy's goal is to advance a centralized call center enterprise for all its ship and aircraft maintenance activity. Regional maintenance centers serve as the primary source of maintenance support and should be the first point of contact for direct technical assistance for shipboard systems and equipment (U.S. Fleet Forces Command, 2003). Much of the customer support literature, however, has demonstrated that because technical support call centers traditionally concentrate on numerical quotas for efficiency and speed, their environment is incompatible with job aspects found important to support growth, accomplishment, professional development, and the opportunity to teach others (Nelson, Nadkarni, Narayanan, & Chods, 2000; Spreitzer, Cohen, & Ledford, 1999; Wallace, Eagleson, & Waldersee, 2000). Because of the focus on numerical results, the traditional customer support call center is a difficult environment in which to spend

sufficient time to conduct thorough problem research, and it runs counter to the development of technical skills that materialize from proper research efforts. Studies have found that these factors can negatively affect job satisfaction (Knapp, 1999; Nelson et al., 2000; Spreitzer et al., 1999). Despite these shortcomings, the commander of U.S. Fleet Forces Command reiterated the Chief of Naval Operations' intent to drive maintenance toward a centralized customer support call center enterprise shortly after initiation of the DS program. The directive stated that regional maintenance centers would serve as the primary source of maintenance support and would be the first point of contact for direct technical assistance for shipboard systems and equipment (U.S. Fleet Forces Command, 2003). The message also underscored the importance of ships developing and exercising self-sufficiency for shipboard system maintenance to the greatest extent possible. The purpose of the directive was to ensure that ships understood the new policy of first contacting the Navy integrated customer support call center, which would route the requests to a proper repair activity center. Studies showed that successful organizations see their customer support personnel as valuable and reliable resources, equal in importance to customers, for quickly and efficiently discovering how the organization is performing in the marketplace (Zemke, 2002). Successful companies examine satisfaction internally and externally by auditing employees to find obstacles to maintaining and enhancing the company's reputation. Also, managers can determine from the results whether employees share the organization's view of itself, which is important to long-term success. One significant study on satisfaction (Zemke, 2002) used over 70,000 cases in an

employee and manager inventory and associated database. The research focused on job satisfaction issues and employee and manager perceptions of how well an organization is serving and satisfying customers. The study concluded that although a good business model is critical, as well as having the right resources, ultimately the customer support personnel become the most powerful success factor. Five basic principles surfaced from the research data (Zemke, 2002):

1. Finding the right customer support employees is the critical first step.
2. Train and develop CSRs as if they are permanent to the organization.
3. Success depends on commitment to high standards that everyone understands and embraces.
4. Fair compensation practices can help yield quality service and retain quality CSRs.
5. Celebrating customer service successes helps connect employees to customers.

Customer support is about managing the solving of problems, and organizations need to focus more on that aspect when designing customer support policy. For example, Zemke (2000) observed that manufacturing processes often have well-documented, multiple levels of quality control to detect a product that does not meet specifications. Manufacturing processes certainly have business policy in place to repair systems that become inoperable. In the service segment of business, however, rarely does a similar recognition exist that the system will not always function as designed. Even though research has shown that the customer causes about 30% of all business problems, organizations must have documented service recovery processes to manage the experience of those customers (Zemke, 2000).

Within the U.S. Navy, many argue that the most beneficial accomplishments of the Navy's DS program are improved customer support practices. These benefits emerged primarily from enabling process improvement and re-engineering the support infrastructure. According to Zemke (2000), customer support recovery processes can both educate customers on how not to repeat the mistake and ensure that the customers remain satisfied about the experience. Remaining satisfied by the experience is a significant consideration for customer support. Byrnes (2005) maintained that the customer experience itself does not measure customer support; rather, what the customer remembers about the experience, and how that perception drives the customer's future behavior, are the important elements. For commercial organizations, translating that customer experience into customer satisfaction is likely to result in better customer retention, favorable word of mouth, or increased purchases of the product or service (Keiningham, Goddard, Vavra, & Iaci, 1999). To assess the level of service, the DS program initiated a customer satisfaction survey that customers receive automatically via e-mail after closure of a trouble ticket. Studies show that customer satisfaction is a useful measure for level of service (Berman, 2005; Berry, 1991; Khatibi et al., 2002). Building customer satisfaction is much easier to accomplish when the company can resolve problems in a timely manner (Byrnes, 2005; Meltzer, 2001; Newell, 2000; Read, 2002). Research by Feinberg, Kim, Hokama, de Ruyter, and Keen (2000) indicated that first-call resolution was an essential element of the call center operation, but according to Blanding (1991), speed of resolution had a greater effect on how a customer perceived the organization than did the quality of the resolution. The current study used both customer satisfaction and speed of resolution to measure level of service.

Other studies have focused on the attitude, competence, and response by the individual call center representative as the major influences on customer perception (Bitner, Booms, & Mohr, 1994). Many studies have attempted to quantify customer satisfaction by comparing the level of service received against the level of service expected. Some studies developed and then later refined a framework for determining customer satisfaction (Parasuraman, Zeithaml, & Berry, 1985; Zeithaml, 1988; Berry, 1991). The framework, known as the SERVQUAL model, consisted of a 22-item survey instrument designed to measure customer perception of five dimensions of service quality: (a) tangibles: appearance of physical facilities, equipment, and staff; (b) reliability: ability to provide dependable and accurate service; (c) responsiveness: willingness to help the customer; (d) assurance: ability to inspire confidence and trust; and (e) empathy: extent of caring and individualized service. Berman (2005) verified the popularity of using the SERVQUAL model to measure customer satisfaction. Khatibi et al. (2002) acknowledged the model as the most comprehensive and frequently cited tool for measuring and managing customer satisfaction. Bitner, Booms, and Tetreault (1990) observed that the majority of the service quality items from this survey relate directly to the human interaction between the CSR and customers. Customer satisfaction, therefore, may be highly dependent upon the CSR's performance when providing customer support (Khatibi et al., 2002). To provide the level of service quality that meets or exceeds customer expectations, organizations should determine what elements of service are important to customers by analyzing customer complaints and gathering customer satisfaction data

from customer surveys (Berman, 2005; Jones & Sasser, 1995). The Navy DS program was aggressive about ensuring customer feedback was heard, instituting a web-based survey that customers receive automatically via e-mail after closure of a trouble ticket. Although it does not use the SERVQUAL model, the DS program uses a similar customer satisfaction survey that focuses on the CSR's performance: specifically, on the interaction between the CSR and the military customers. The study analyzed the scores on these surveys for a primary determination of level of service.
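For illustration only, the following sketch shows how a SERVQUAL-style instrument is typically scored: each respondent's item ratings are averaged within the five dimensions. The item-to-dimension grouping follows the dimension descriptions above, but the item numbering, data layout, and use of pandas are assumptions rather than anything specified by the studies cited.

    import pandas as pd

    # Schematic assignment of the 22 survey items to the five dimensions;
    # the exact item numbering here is illustrative.
    DIMENSIONS = {
        "tangibles": ["q1", "q2", "q3", "q4"],
        "reliability": ["q5", "q6", "q7", "q8", "q9"],
        "responsiveness": ["q10", "q11", "q12", "q13"],
        "assurance": ["q14", "q15", "q16", "q17"],
        "empathy": ["q18", "q19", "q20", "q21", "q22"],
    }

    def servqual_scores(responses: pd.DataFrame) -> pd.DataFrame:
        """Average each respondent's item ratings within each dimension."""
        return pd.DataFrame(
            {dim: responses[items].mean(axis=1) for dim, items in DIMENSIONS.items()}
        )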

Knowledge Management

This section examines the use of KM with customer support practices to improve competitive advantage. The Navy's DS policy is dependent on KM, which has been fundamental to customer support (Brandenburg, 2010). The DoN recognizes that KM technologies help improve performance through increased effectiveness, productivity, quality, and innovation (CNO, 2007). From inception, KM has been a key enabler of the Navy's DS program (Brandenburg, 2001). World events have influenced the U.S. Navy's increased use of KM tools. Since the events of 9/11, DoN leadership has transformed its management philosophy, principles, and practice (Chatzkel, 2002). This transformation has followed the change in the overall global geopolitical landscape and the commonly held belief that the role of the U.S. military in the world has evolved, now requiring a different set of demands and responses. These new parameters have led to the overall strategy of knowledge superiority to defend and protect the United States and its vital interests. A critical component of the overall knowledge superiority strategy of the Department of Defense

(DoD) has included KM and the impact on competitive advantage dimensions (CNO, 2007). Although KM research has been extensive, little research has surfaced about KM in the military (Bartczak, 2002). Schulte and Sample (2006) provided the literature on the KM relationship to customer support most relevant to the approach of the current study, specifically in understanding the differences between military and commercial practices. Schulte and Sample contributed quantitative research to the theoretical concept that KM technologies can enable knowledge integration and contribute to efficiencies in organizations. The Schulte and Sample (2006) study is important to the domain of KM as it relates to customer support in military organizations, and served as a primary source for the current research. The intent of the Schulte and Sample study was to investigate the perceived differences of the impact of KM among commercial, government, and military users. The methodology used analysis of variance to compare means of responses among the three groups. The study analysis applied hypothesis testing through a case study of the Navy's Space and Naval Warfare Systems Command (SPAWAR), which is a national enterprise in the military sector. The primary findings of the study indicated a significant difference in expectations among contractor, government civilian, and uniformed military knowledge workers on expected efficiencies from KM technology. The results showed that contractors and uniformed military had similar expectations, but government civilians had significantly lower expectations. Contractors had the highest expectations from KM.
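The analysis-of-variance comparison Schulte and Sample report can be sketched in a few lines. The sketch below is illustrative only: the data frame, column names, and use of scipy are assumptions, not the authors' actual analysis code.

    import pandas as pd
    from scipy import stats

    def compare_group_expectations(df: pd.DataFrame):
        """One-way ANOVA on KM-expectation scores across the three groups
        (contractor, government civilian, uniformed military)."""
        groups = [g["expectation"].dropna() for _, g in df.groupby("respondent_type")]
        f_stat, p_value = stats.f_oneway(*groups)
        return f_stat, p_value

A significant F statistic here corresponds to the reported finding that at least one group's mean expectation differed from the others.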

In relation to the current study, Schulte and Sample (2006) demonstrated that significant barriers existed to implementing KM solutions in a military environment, which may have an effect on a customer support program that relies on KM and IT for success. Similar KM barriers may not exist in a commercial environment. The Schulte and Sample (2006) research was limited by the small number of existing studies comparing government and commercial sector KM expectations and practices. The authors used case studies and interview data provided by vendors and consultants, who revealed the value of KM technology. Further research is necessary to increase the body of knowledge on the impact of KM and IT on efficiency and competitive advantage in government organizations. Relating to knowledge management, studies have found three important lessons for business managers from customer support initiatives (Davenport & Klahr, 1998; Hanley, 2001). First, customer support clearly demonstrates the value of KM. Metrics and measurements are readily available to illustrate return on investment. In the Navy's example, the DS program has collected millions of trouble tickets over an eight-year period that the Navy can use to measure and compare dozens of variables relating to customer support. Second, the initiatives provided excellent case studies of fully implemented KM processes that effectively capture and reuse knowledge. Third, for those companies involved in corporate-wide KM activities with longer-term goals and fewer measurable benefits, the results suggested looking to more localized KM efforts to demonstrate the value of knowledge reuse in the short term. The localized efforts can then pave the way to more enterprise-wide knowledge sharing activities. Although not a new concept, effective KM has become a primary path to achieving real value from IT investment (Carr, 2003). IT is the acknowledged enabler of

KM, but it is not the true source of value (Koenig, 1999). The difference is important, because it may influence the course of future development in IT and technology integration in customer support processes. Organizations that attempted to introduce KM systems based entirely on technology solutions experienced a high rate of failure, demonstrating that technology is an essential enabler of KM, but not the entire solution in itself (Bassi, 1997). Knowledge bases, especially web-based solutions, also have become important to an organization's profit in the commercial sector. Web-based customer care has created a new avenue of savings in customer support. For example, the average call to a help desk can cost as much as $27, but using an online self-help knowledge base can cost 75% less (Gartner Group, 2000). The attraction of improved customer support combined with dramatic cost reductions has motivated many companies to invest in knowledge base technologies. As investments in knowledge base systems continue to grow, increased focus should be on measuring performance and the amount of value provided to the organization.
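To make the Gartner Group (2000) figures concrete, a back-of-the-envelope calculation is shown below; the annual incident volume is a made-up number used only to illustrate the scale of potential savings.

    # Worked example of the assisted-call vs. self-help cost comparison.
    COST_PER_CALL = 27.00                        # reported high-end cost per call
    SELF_HELP_COST = COST_PER_CALL * (1 - 0.75)  # 75% less: $6.75 per incident
    annual_incidents = 100_000                   # hypothetical volume

    savings = annual_incidents * (COST_PER_CALL - SELF_HELP_COST)
    print(f"Self-help cost per incident: ${SELF_HELP_COST:.2f}")
    print(f"Annual savings at {annual_incidents:,} incidents: ${savings:,.0f}")
    # -> $20.25 saved per incident, about $2.0 million per year at this volume.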


For years, the DoN has been a leader in practicing KM. The service has defined KM as a process for optimizing the effective application of intellectual capital to achieve organizational objectives (Schulte & Sample, 2006). Knowledge management depends on intellectual capital, which includes human capital, social capital, and corporate capital. The DoN chief information officer developed the knowledge-centric organization framework to assist Navy and Marine organizations in supporting the implementation of KM within their organizations (Bennett, 2000). The knowledge-centric organization has five dimensions: technology, process, content, culture, and learning. These dimensions should be central, therefore, when integrating KM and IT in a customer support program. For customer support call centers, the intention of knowledge bases is to reduce labor costs and to increase call center efficiency (Brown, 2003). Other objectives for knowledge bases make them more than just an efficiency factor. Studies have shown that knowledge bases provide a range of non-financial benefits, such as improved customer satisfaction. Knowledge bases producing such benefits engage organizations to look at different ways these tools can improve business practices by improving how an organization can assess its effects and benefits. One particular study described the process of redesigning customer support for the high-tech economy, using an example of a hardware or software company that provided mission-critical data 24 hours a day, similar to the objectives of the Navy's DS program. The study (Omar, Sawy, & Bowles, 1997) first identified the complexity of customer support for complex products and then explained the design of a new system. The primary aims of this system were to decrease the response time, to develop an organized knowledge base, to reduce the cost of personnel, and to reduce the cost of the system. With these aims, the developed support system was successful in addressing the main concerns. During the Omar et al. (1997) study, customers e-mailed questions and the system searched for key words, replying to the customer within two minutes. The system worked as a learning environment for the customers as well as the CSRs, who learned and responded to training quickly due to the new system features and the better-organized knowledge base. The problems solved by the experts added to expansion of the

knowledge base. A subject matter expert approved the content of each message before adding it to the knowledge base, and experts rated each document upon retrieval, ensuring the correctness of the content over time. The authors showed that such systems accomplished the task of quick turn-around time for customer satisfaction and cost reduction for the companies that used them. The study demonstrated the successful use of KM, built on an IT foundation, in improving customer satisfaction in a customer support role. Customer support has benefited from linking KM solutions to an IT interface, such as the Internet. With a web-based self-service option, customers can choose to access support knowledge directly through the Internet. Studies have shown that web-based customer support systems can be appropriate for internal and external customer support (Longoria, 1996; Tourniaire & Farrell, 1997). Research by Beatty, Shim, and Jones (2001) showed that web-based support provided many benefits for firms, including reduced transaction costs, reduced time to complete transactions, reduced clerical errors, faster responses to new market opportunities, improved monitoring of customer choices, improved market intelligence, more timely dissemination of information to stakeholders, and more highly customized advertising and promotion. Honeycutt (2000) found one of the principal outcomes of KM was that "knowledge management turns experience and information into results" (p. 3). Knowledge management has many other advantages in addition to simply serving as a method for organizations to gain competitive advantage. Some of the advantages of KM include better organization and use of institutional knowledge, reduction of staff time used searching for information, less duplication of work, more efficient customer service, and

more time spent improving services (Stoll, 2004, p. 56). Factors such as these can help contribute to how well an organization can compete in the marketplace. Collaboration is an important aspect of successful KM in customer support activities. Many collaborative tools can help organizations share best practices and volumes of customer feedback across different operating regions (Trepper, 2000). The ultimate objective of collaboration is to build an elementary, collaborative KM system that supports sharing and using the gathered information (Honeycutt, 2000). Organizations have a growing need for technological capabilities to address individual, team, and enterprise productivity. The capabilities required include applications that provide e-mail, discussion sessions, shared devices, group calendars, and schedules (Duffy, 2001), but also much more. Collaboration provides a way for CSRs to acquire information from experts. To be effective, collaboration applications also use profiling tools to find the best information sources from others who may be using these tools (Duffy, 2001). Knowledge management can provide a framework through which organizations create and modify their processes to encourage knowledge creation and sharing (Hanley, 2001). Organizations should not introduce these processes as new, independent business processes. Rather, users should understand the processes as created by applying KM to the core business operations. Organizations should develop interactive and sharing environments for nurturing ideas (Smith, 2001), especially in the business of customer support. Knowledge management should begin by focusing on the knowledge needs of an organization, and then should progress to sharing and creation of knowledge as part of the organization's culture.

Creation of knowledge as a part of an organization's culture has appeared in the literature to show its benefits. A recent study by Ribiere, Park, and Schulte (2004) indicated the most critical factor for successful KM is the organization's culture. The argument is that an organization's knowledge assets are just as important as its financial or physical assets. Knowledge assets are an organization's knowledge regarding how to efficiently and effectively perform business processes and create new products and services that enable the business to create value (Laudon & Laudon, 2003). An organization that can use the collective experience of employees will likely improve the competitive advantage of the business. Also, KM systems have a demonstrated ability to reduce redundancy and improve efficiency in organizations (Honeycutt, 2000). Best business practices of customer support depend on reliable KM systems. A study by Stoll (2004) identified many important factors necessary to create successful KM systems. Beginning with a committed team motivated to complete the project is most significant. The management and personnel must believe in KM, with an education and understanding of the tremendous advantage KM brings to the organization. Development of KM systems is not a rapid process. Instituting a useful KM system involves a significant amount of time and labor, which may mean modification to organizational priorities. Finally, organizations should measure key project variables prior to and after instituting the KM system to determine success. Staff will use knowledge in the KM system in various ways, and knowing the standard for the measured variables is essential (Stoll, 2004). No project can likely achieve success without defined, achievable goals.
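Stoll's before-and-after recommendation translates directly into a simple pre/post comparison. The sketch below is a hypothetical illustration: the metric (resolution hours), the choice of Welch's t-test, and the function inputs are all assumptions, not a procedure prescribed by Stoll (2004).

    from scipy import stats

    def before_after_comparison(before_hours, after_hours):
        """Compare a key project variable (e.g., ticket resolution hours)
        measured before and after a KM system is instituted."""
        # Welch's t-test does not assume equal variances in the two periods.
        t_stat, p_value = stats.ttest_ind(before_hours, after_hours, equal_var=False)
        return t_stat, p_value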


One study found that the Navy was the only public sector organization recognized as a world-class leader in managing knowledge to deliver superior performance (Chatzkel, 2002). The number of studies on the impact of KM on performance in the public sector has grown (Al-Hawamdeh & Tan Woei, 2001; Han, 2001; Mitchell, 2002; Office of the President of the United States, 2002). Similarly, increases have been clear in publications on the role of KM in military organizations (Hanley, 2001; Ross & Schulte, 2005; Sherman, 2002; Tefft, 2002). The KM component of the DS program serves to reduce the resolution time for assistance requests. Knowledge management also complements the DS program metrics to assist naval leadership decision making and mitigate systemic issues across the Navy customer support enterprise (Brandenburg, 2010). While solving customer support problems has always involved the use of knowledge, technology has made it possible to capture and apply support knowledge more effectively than ever before. Many firms now give customers direct access to support knowledge over the Internet. Emerging technologies may make it even more feasible to extract knowledge from support analysts and make it available to customers. Studies have shown that KM technologies can enable knowledge integration and contribute to efficiencies in organizations, but the studies also agree technology is not the most important element of knowledge integration (Schulte & Sample, 2006). Managing customer support knowledge involves a delicate balance between human and computer-based knowledge. Relying too heavily on humans is inefficient, but relying too heavily on computers is ineffective (Davenport & Klahr, 1998).


The U.S. Navy's shore support infrastructure underwent major restructuring and alignment, and DS policies were a key enabler for this change (U.S. Fleet Forces Command, 2008). The Navy's DS policy is vital to customer support and is dependent on KM. The policy is also fundamental to such varied initiatives as bringing new combat applications to sea, remotely monitoring equipment health, improving medical care, streamlining supply management, providing the Navy a continuous training environment for Fleet training, and optimal manning of new systems and platforms (CNO, 2007). The vision of the Chief of Naval Operations is for sailors to have the ability to use DS to manage their careers, collaborate with subject matter experts, and access authoritative information in near real-time wherever they are in the world. The desired end state is to provide all personnel an equivalent experience, regardless of geographic location. Knowledge management is a major component of making the vision of the DS program a reality.

Information Technology

This section examines the influence of IT on current customer support practices. In the Navy, the Chief of Naval Operations' intent is to drive maintenance toward a centralized customer support call center enterprise (U.S. Fleet Forces Command, 2003). The objective is to implement IT and leverage connectivity to provide technical customer support from the shore establishment whenever possible to more efficiently maintain Fleet readiness (Commander, Space and Naval Warfare Systems Command [SPAWAR], 2005). Since the DS program began, IT has provided the foundation for streamlined access to the Navy's customer support infrastructure (Brandenburg, 2001, 2010).

Technical customer support is a post-sales service provided to customers of technology products to help them incorporate a given product into their work environment (Das, 2003). Nakata et al. (2008) performed primary research in the domain of the IT relationship with technical customer support. The authors proposed a complex relationship in which IT capability indirectly (through customer orientation) and interactively (with intra-organizational trust and information systems services quality) improves business performance. The study grounded the model in a socio-technical view, and tested the model through a survey of 189 executives in a wide range of firms and industries. The findings largely supported their proposed model, indicating that IT capability has both direct and indirect effects. Results from the study indicated IT management capability interacts with organizational trust in predicting customer satisfaction. A trusting environment, they concluded, magnified the positive impact of IT capability (Nakata et al., 2008). The indirect path from IT capability to business performance provides evidence that IT systems can influence key performance metrics such as net profits, customer retention, and product quality, but this effect is dependent on customer-focused activities (Melville et al., 2004). Therefore, the relationship from this perspective is that IT capability is a technological input that combines with customers' feelings of trust, which is a non-technological input. Nakata et al. (2008) concluded that the trusting relationship facilitated organizational routines, such as customer behaviors, and subsequently strengthened business performance through market and financial results. Another implication of the Nakata et al. study was that managers should emphasize computer technologies that support customer information handling and related

work flows. The research found IT capability contributes to business performance indirectly through customer satisfaction (Nakata et al., 2008). This result is significant because it implies an unfocused expansion of a computer infrastructure is unlikely to produce gains. Rather, the authors found, firms should center an expansion in IT capability on technologies that help improve customer information routines. For example, the organization can build intranet platforms to share best practices for customers, or can acquire software that determines purchase patterns, or, as the Navy has done for the DS program, organizations can create websites to automate the collection of detailed customer data. In relation to the current study, the IT successes of the Sailor to Engineer program of the late 1990s, which eventually grew into the DS program, greatly influenced the Navy doctrine on customer support and KM. The earlier program built a trusting relationship that carried over to the DS program. The business objective of the Sailor to Engineer program was to increase the productivity of Fleet maintenance by providing the latest validated technical information in the most cost-efficient way possible (Hanley, 2001). The initiative collected maintenance information and procedures from engineers and posted them in a common repository on a website. The participants in the program had the most recent documents, a knowledge base with both problems and solutions, and immediate access to a help desk (Hanley, 2001). Customer support research showed that IT initiatives such as this significantly improve an existing organization's efficiency, and they have a positive association with financial performance (Zahra, 1991). Barney and Ketchen (2001) reported that IT systems do not lead to competitive advantage unless knowledgeable people who understand the benefits operate the IT

systems. If customer support personnel use the IT systems often, then the group becomes more proficient, with a distinctive resolution style and methods of coordination that are unique to the business. The outcome can be exceedingly competent teams of professionals, which can help push the functions of IT tools toward meeting or exceeding customer requirements. Similarly, the DS program is in a position to build its reputation on many years of experience of centralized customer support. Customer support can take many varied forms, including field service, mechanical repair, and telephone help desks. The U.S. Navy provides technical support for combat systems in three ways: face-to-face, by telephone, or by web-based methods such as e-mail and chat. Face-to-face, also referred to as organic or local support, occurs when the technical support representative is in the same physical location as the users of the system and provides the support in person. The traditional method of remote technical support is by telephone. In this process, the customer calls the technical staff at a support center and the technicians try to solve the problem by trouble-shooting over the telephone. The newest method of remote technical support uses web-based technology and provides technical support by e-mail or chat. In this process, a user sends an e-mail describing the problem to the support center and receives a response from the support technicians. Another type of web-based help is Internet chat, where the technician uses common chat tools to converse with the customer having a problem.
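These three channels map onto the resolution-method variable (local versus distant) analyzed later in this study. Purely as an illustrative sketch, with channel labels that are assumptions rather than the database's actual codes, the collapse might look like this:

    LOCAL_CHANNELS = {"face-to-face"}
    DISTANT_CHANNELS = {"telephone", "e-mail", "chat"}

    def resolution_method(channel: str) -> str:
        """Collapse a support channel into the binary local/distant method."""
        channel = channel.strip().lower()
        if channel in LOCAL_CHANNELS:
            return "local"
        if channel in DISTANT_CHANNELS:
            return "distant"
        raise ValueError(f"Unknown support channel: {channel!r}")

Under these assumed labels, resolution_method("E-mail") returns "distant" and resolution_method("Face-to-face") returns "local".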

Studies have shown that funding to manage IT has grown tremendously in the past three decades. The investment dedicated to IT grew from 15% of an organization's budget in the early 1980s, to more than 30% in the early 1990s, and to almost 50% by the late 1990s (Carr, 2003). The amount of funding for IT support is only one aspect to investigate. The fact that organizations have invested so much in IT is remarkable in itself. Successful organizations invest their budgets where they expect to receive a return. To understand why investment in IT is so high, it is important to understand what IT currently provides to organizations, and what organizations expect IT to provide in the future. The role of IT and of information managers seems to be changing significantly. Research by Myburgh (2002) traced the evolution of the IT function and the information management professional through five distinct stages: (a) paperwork management, (b) management of corporate automated technologies, (c) management of corporate information technologies, (d) business competitor analysis and intelligence, and (e) strategic information management. Rather than records being the source of evidence, records are containers of information used to generate value, thereby supporting decision making and action (Myburgh, 2002). The new model for IT is one of dynamic information, replacing the old model of information as a static resource that did not require management. Channeling an organization's accumulated problem-solving experience into IT tools that coordinate work, such as a trouble-call system, is extremely beneficial for KM and customer support. Knowledge-sharing systems often treat knowledge capture as an activity separate from customer support and service. Das (2003) found the additional effort of knowledge collection often appears as an uncertain investment. In the current work environment, however, most work takes place through digital media, and preserving


the knowledge and experience of daily activities in a searchable archive often represents a simple method of KM. Regarding customer support call center management, research by Doomun and Jungum (2008) promoted a flexible business process that organizations could model and recreate in a cost-efficient way. The research found four potential reasons for call center reengineering initiatives: improved performance and control models needed for call centers with multiple-server systems; problems associated with hiring, training, and scheduling personnel; better service quality for the customers; and improving call center performance with quantitative statistical analysis of data. The study showed that modeling a customer support call center for efficiency seemed to be a compromise between the size of the model and the complexity of the call center (Doomun & Jungum, 2008). Decreasing the size of the model could assist in arriving at an acceptable method to measure efficiency. Decreasing the scope of measurement, however, risked overlooking influential system components. These influential variables could sometimes be so important that they affected the proposed solution to the problem (Doomun & Jungum, 2008). The researchers concurred that the costs, time, and effort applied to the new IT approach to the customer support process are worth the investment, because the improvements in efficiency are measurable in fewer errors, cost avoidance, or time saved from use of better technology. Piccoli et al. (2001) maintained that the real opportunities for sustainable competitive advantage were with those who recognized the importance of using IT to improve service in all phases of the customer's involvement with an organization's

product or service. The researchers developed a customer service lifecycle framework to assist business leaders in thinking creatively about the use of IT and the Internet as tools for the creation of competitive advantage. By redefining the thinking about an organization's relationship with its customers, businesses can identify their strengths and highlight needed areas of improvement for IT management (Piccoli et al., 2001). For example, U.S. Navy literature indicates that poor documentation had been a common Fleet maintenance problem prior to the introduction of the Navy's centralized customer support initiatives (Hanley, 2001; U.S. Fleet Forces Command, 2003). The processes were inefficient and prevented engineers from collecting new data and creating updated procedures. Many programs could not move forward with better maintenance plans because of decreased funding. Websites needed to be designed and processes needed to be updated to give sailors quicker access to the most current and reliable documentation. Sailors and Marines also needed rapid access to help desks with a dependable team of engineering experts. Maintenance engineers frequently developed new procedures while working on the fielded systems on ships and at shore sites. The new procedures did not become part of the existing technical manuals, so the documentation failed to capture the engineers' knowledge, insight, and proficiency. According to a KM case study, engineers often developed their own procedures and documented them in personal notebooks (Hanley, 2001). This unique method of repair led to a maintenance standardization problem because the quality of the repair depended on who performed the repairs with which manuals. The Navy needed a method to capture valuable information from the maintenance engineers to improve the productivity and consistency of repairs. They also needed a

way to distribute the information faster while ensuring that the technical manuals remained valid. An IT solution proved to be optimal. Other problems that analysts identified were the long lead times needed to distribute new Fleet maintenance procedures and the obligation to offer the same quality level in technical manuals but with less funding. As the aging workforce retired, the Navy was also concerned about losing a significant amount of knowledge and subject matter expertise. Research has demonstrated that organizations able to react and respond to a dynamic environment through product and process innovations are better able to maintain a competitive advantage (McCrea & Betts, 2008). With a focus on innovation, the Navy's new Sailor to Engineer initiative designed a system concept to provide shore-to-ship technical support. The specific objectives were to

1. Provide automated and rapid access to technical and logistics data to sailors,
2. Replace numerous contradicting Web sites with a single coordinated site, and
3. Increase efficiency of support operation to compensate for reduced In Service Engineering Activity (ISEA) funding. (Hanley, 2001)

The Sailor to Engineer program was instrumental in building necessary ship-to-shore connections. The initiative was successful at giving technicians on ships a way to communicate directly with experts at waterfront support organizations and other industry contractors. These maintenance improvements affected key shipboard systems, including widely used systems for satellite communications, weather prediction, electronic surveillance, and intelligence gathering. The technicians at sea became proficient at using the websites to make logistics and engineering requests. The resources available to the ships included a customer support help desk and frequently asked questions (FAQs)

where they could get technical information and critical logistics data, such as technical manual updates (Hanley, 2001). The network used both standard Internet and secure Internet, along with regular telephone line connections from sea, to obtain the assistance needed. Organizations in commercial industry recognize the importance of metrics to help define objectives and performance expectations (Ahmed, Lim, & Zairi, 1999). For example, companies design appropriate quantitative metrics to measure the effectiveness of a shipbuilding facility and all its various components to better understand and explain the process. From the very beginning of the Sailor to Engineer program, metrics played a significant role. The program identified, collected, and analyzed many different measures to improve processes. The primary reason for collecting the metrics was to determine whether the program reduced the time and cost to resolve issues (Hanley, 2001). Also, leadership wanted to ensure the community was able to share the latest information from all the subject matter experts. Some of the different measures collected included the cost to distribute the technical manuals, time spent collecting information, collection of anecdotes, how many apprentices the experienced technicians mentored, and customer surveys that determined usefulness. The DS program, similar to the Sailor to Engineer initiative before it, relies upon a recurring and reliable process for managing customer support requests in a timely, effective manner (Brandenburg, 2010). This process began with a mutually agreed upon set of business rules governing program operations, with a key being global access to current support request information (NAVSEA, 2009). An ongoing expansion of a collaborative, shared data environment workspace has achieved this capability.

In the first two years of the DS program, the Navy discovered that travel costs for regional maintenance center technical representatives decreased approximately 50% (COMNAVSURFOR, 2005). The goal of the program is that any ship, independent of location, would receive its initial, coordinated, reach-back assistance from the shore support infrastructure within as little as two hours when priority dictated (SPAWAR, 2005). To accomplish this, all DS requests require standard information for entering a DS trouble ticket, whether via unclassified e-mail or the database. One purpose of standard information is to ensure collection of accurate metrics while focusing technicians on the high-priority requests. Three years after the DS program began, naval leadership re-emphasized the Fleet policy regarding DS for all naval ships; specifically, that the DS program must define contact information and reporting requirements for surface ships and their regional maintenance centers. DS became both a process and a toolset providing connectivity to and from the customer through a workflow process (Brandenburg, 2010). The DS process tracks support requests throughout the entire customer support infrastructure of the Navy, and an information repository supports the IT infrastructure. In past decades, military and commercial customers in need of technical support would use the telephone to call an expert at a help desk. The cost of providing support to customers, partners, and employees placed an increased burden on corporate profitability (Wolf, Alpert, Vergo, Kozakov, & Doganata, 2004). The pressure to reduce costs combined with the rapid growth of the Internet caused companies to move support from human experts to the Internet and other IT systems. Ehrlich and Cash (1994) posited that the skills of a help desk organization might be hard to replicate with on-line self-service,

but increased human support costs would force many companies to look at cost-effective web-based solutions. The tools of technology provide the supporting functions for KM processes (Hanley, 2001). In some cases, however, the processes themselves rely on the tools to make collaboration a reality for the customers. The Navy's Sailor to Engineer program was a good example of this concept of tools enabling the process. The Sailor to Engineer program depended on Internet collaboration and repositories of information to connect shipboard technicians to shore-based support engineers, and demonstrated potential for the DS program. Currently, web-based support is a matter of business competence used to gain a competitive advantage over another company, but in coming years, such support will likely become the norm and all companies will need online customer technical support as an addition to the typical online stores (Singh, 2008). The Sailor to Engineer program collected a multitude of different metrics, including those from the help desks that supported the various systems. Help desk metrics included ship class analysis (or site class analysis), most frequent type of Fleet issue, equipment issues, mission warfare issues, resolution type, source of issues (e-mail, meeting, Naval message, etc.), and the source of the support. After carefully analyzing these performance measures, the program leadership team at NSWC found that many inefficient processes prevented unobstructed information sharing among the organizations involved in the initiative (Hanley, 2001). As a result, the NSWC team changed the kind of information collected and the method in which they arranged it on the different websites.
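Tabulations of this kind are straightforward to produce from a trouble-ticket table. The sketch below is illustrative only; the column names are invented stand-ins for whatever schema the program's databases actually used.

    import pandas as pd

    def summarize_help_desk_metrics(tickets: pd.DataFrame) -> dict:
        """Tabulate ticket counts for several of the metric categories above."""
        return {
            "by_ship_class": tickets["ship_class"].value_counts(),
            "by_issue_type": tickets["issue_type"].value_counts(),
            "by_resolution_type": tickets["resolution_type"].value_counts(),
            "by_source": tickets["source"].value_counts(),
        }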


During the initiative, many web-based tools assisted the engineers and ship crews in collaborating from different locations around the world while discussing and viewing the same technical documentation. This kind of collaboration at sea saved both repair cost and time spent researching improvements to the manuals (Hanley, 2001). Sailors at sea also used the online knowledge base to find the correct subject matter expert for particular problems. Independent research has shown that the web environment allows customers to learn from their mistakes and to overcome some difficulties associated with traditional media (Lenz, 1999). Organizations need to develop their own requirements for KM before using IT to improve customer support. Accomplishing this task involves defining the role of IT in developing a KM system. Duffy (2000) viewed IT as managing the storage and access of documents. IT usually maintains the databases, hardware, and software access points, and the survivability of information. Any KM project can fail when organizations view IT only from the technical side. Customer service representatives must be aware of and educated in KM processes to gain a better appreciation of them. After accomplishing this task, IT can play a major role in an organization's KM efforts. From a theoretical viewpoint, this concept means that IT provides the foundational hardware, software, and communications that make information management possible in Bartczak's (2002) hierarchy of knowledge transfer components.

Summary of the Literature Review

The theoretical framework of Bartczak (2002) proposed that knowledge transfer requires knowledge capture, knowledge management, and knowledge distribution through IT.

The components of this framework need separate approaches and technologies that organizations must integrate correctly to achieve the knowledge transfer necessary in a military organization (Bartczak, 2002). In the field of customer support, knowledge transfer is the successful resolution of support requests, preferably by a customer support call center rather than by a field technician. The relevant literature demonstrated how the domains of KM and IT, when properly integrated, influenced the third domain of customer support, specifically in improving knowledge transfer at the call center. The literature also showed that the intent of the Navy DS program was to integrate KM and IT to create efficiencies in customer support. The literature revealed that KM is not just the application of IT. Rather, KM is primarily about people, processes, organization, and culture (Ward, 2005). Information technology is a vitally important enabler that opens the way for development of improved processes, more efficient organizations, and empowered people. In customer support, IT can provide KM with an organizational culture of learning and knowledge sharing to solve customer problems. The current study used variables that can quantitatively demonstrate the successful integration of KM and IT to achieve knowledge transfer in a military customer support environment.


CHAPTER 3. METHODOLOGY

This chapter presents the study methodology. The research design and methodology are the first subjects addressed, followed by the population and sample used in the study. The chapter also contains discussion of data collection procedures and analysis of the data. The purpose of the study was to assess the intervention of KM and IT on competitive advantage within a military customer support program. To accomplish this, the research analyses determined the effectiveness of the Navy's DS program regarding the support resolution method and the support resolution time, and the relationship of the program to level of service, if any. The intent of the analysis was to determine if a relationship existed between the number of years since program implementation and the percentage of trouble tickets resolved using remote assistance methods, or between the number of years since program implementation and trouble ticket resolution time. An additional intent of the study was to determine the correlation of the DS program with customer satisfaction by quantifying a relationship between customer satisfaction and the method used to deliver the support. The research processes also investigated the correlation between customer satisfaction and resolution time. Finally, the research established the DS program level of service to the Navy-Marine Corps customers by analyzing any correlation between the number of years since program implementation and customer satisfaction, as measured by trouble ticket satisfaction survey results.

The following null hypotheses applied in the study:

H1₀: There is no significant relationship between the program year and resolution method. The program year is the independent variable and the dependent variable is resolution method (distant/local).
H2₀: There is no significant relationship between the program year and trouble ticket resolution time. For this hypothesis, the independent variable is the program year and the dependent variable is resolution time.
H3₀: There is no relationship between the program year and customer satisfaction. The independent variable is again the program year and the dependent variable is customer satisfaction.
H4₀: There is no significant relationship between the resolution method and resolution time. The resolution method is the independent variable and the dependent variable is resolution time.
H5₀: There is no significant relationship between the resolution method and customer satisfaction. The resolution method is again the independent variable and the dependent variable is customer satisfaction.
H6₀: There is no significant relationship between the trouble ticket resolution time and customer satisfaction. The resolution time is the independent variable and the dependent variable is customer satisfaction.

From these null hypotheses, the following alternative hypotheses were formed:


H1ₐ: There is a relationship between the program year and resolution method.
H2ₐ: There is a relationship between the program year and trouble ticket resolution time.
H3ₐ: There is a relationship between the program year and customer satisfaction.
H4ₐ: There is a relationship between the resolution method and resolution time.
H5ₐ: There is a relationship between the resolution method and customer satisfaction.
H6ₐ: There is a relationship between the trouble ticket resolution time and customer satisfaction.

Research Design

The design of the study was a postpositivist paradigm of inquiry. Postpositivism holds a deterministic philosophy in which causes probably determine effects or outcomes (Creswell, 2009). The understanding of a phenomenon emerges from observation and measurement of an objective reality. In this paradigm, the data, evidence, and rational considerations shape knowledge (Phillips & Burbules, 2000). The intent of such a research design is to develop relevant, true statements to explain the situation of concern, while the researcher advances the relationship among variables and poses them in terms of questions or hypotheses (Phillips & Burbules, 2000). Two U.S. Navy DS program enterprise databases were valuable in conducting the research. To statistically test the hypotheses, the study design was a quantitative research approach based on secondary analysis of data gathered from the U.S. Navy's customer support trouble ticket database and the customer satisfaction database. The secondary data analysis involved combining information from both databases to examine the research questions.

The analysis of the trouble tickets used a non-experimental randomized selection design to organize the data. The analysis of the customer satisfaction database was also non-experimental, but used all of the satisfaction surveys available due to the relatively low rate of return. One advantage of secondary analysis is that it is efficient (Trochim, 2006). Another design advantage was that the study used data collected over many years, which allowed extending the range of the study to a national level understood throughout the naval service. A disadvantage of secondary data analysis was that it used data collected by others, so potential problems in the original data collection are unknown (Trochim, 2006). In keeping with the postpositivist view, the intent of the research was to reduce the ideas into a small, discrete set to test, such as the variables that comprised the hypotheses and the research questions (Creswell, 2009). The key variables describing the ideas from the research questions were available from the two customer support databases under study; definitions appear below. The knowledge that develops using the postpositivist lens comes from careful observation and measurement of the objective reality (Creswell, 2009).

Key Variables

During the eight years of the program, technicians recorded over 2.5 million trouble tickets, and customers returned thousands of customer satisfaction surveys on those trouble tickets. Variables chosen from the Navy's trouble ticket database and customer satisfaction database assessed the intervention of KM and IT on competitive advantage within a military customer support program. In this study, the Chief of Naval Operations' direction to "more efficiently maintain the combat systems on ships and aircraft while maintaining or improving the level of service" (U.S. Fleet Forces Command, 2003, p. 2) defined the program goals.

Studies have demonstrated that both KM and IT can positively influence efficiency and level of service (Barney & Ketchen, 2001; Honeycutt, 2000; Laudon & Laudon, 2003; Schulte & Sample, 2006). Conceptually, the program goals of efficiency and level of service related to a successful intervention of KM and IT to achieve the theoretical view of knowledge capture and knowledge transfer in customer support. Supporting this research design, the major goal of the DS program has been to remotely assist customers by providing the information to maintain, troubleshoot, and repair systems, which should reduce the number of required physical visits to ships and other military sites (SPAWAR, 2005). Additionally, industry has long demonstrated that the use of centralized customer support providers reduces dependency on costly local and organic support while maintaining the same level of service. Whether resolution of the problem was possible without escalating the trouble ticket to a site's field technician was a primary focus of the study. Relevant data in the Customer Support Request Form, or trouble ticket, included the caller's name, date, site, system, problem, problem resolution, and closure time, as shown in Appendix A. The record also indicated whether the problem reached resolution without sending a technical representative to the ship or shore facility. The variable was Type of Assist on the trouble ticket and denoted whether remote support was necessary. Operationally defined, the variable had three possible values, which the CSR selected from a dropdown list on the trouble ticket form: on site, transition, or distance support.

On site means the trouble ticket reached resolution by an on-site technician in the local area. Transition means the trouble ticket reached resolution by a technician who traveled to the ship or site from outside the local area. Distance support means the trouble ticket reached resolution by remote support resources (telephone, e-mail, web).

Table 1 Variables Exported from the CRM Database (Significant Trouble Ticket Variables)
Variable                              Type     Measure
Ticket Tracking No.                   String   Nominal
DateTime Open                         Date     Scale
DateTime Closed                       Date     Scale
Subject I.D.                          String   Nominal
Customer Site                         String   Nominal
Source of Support (CSR site)          String   Nominal
Resolution Method                     String   Nominal

Conceptually, the resolution method variable best defined whether the theoretical concept of knowledge transfer, based on KM and supported with IT, had been effective in migrating Navy customer support to centralized call centers. Knowledge management provided the framework through which the Navy's DS program modified its processes to encourage knowledge creation and sharing with centralized support. Applying KM to the core business of customer support, resolution method was best at showing whether the practice of developing shared solutions was effective, or whether physical visits to ships and other sites remained as common as in the past.


Another key variable in the study was customer satisfaction. Based on the stated goals of the DS program, the level of service was an important measure (U.S. Fleet Forces Command, 2003). In this study, the definition of the concept of service level was overall customer satisfaction. Studies have shown that customer satisfaction surveys are a viable measurement instrument to provide reliable and valid statistical data to describe level of service (Berman, 2005; Berry, 1991; Khatibi et al., 2002; Zeithaml, 1988). The customer best defined level of service, since the customer was the recipient of the service interaction and was in the most appropriate position to judge the quality or quantity of service from the support technician who rendered assistance. In the DS program, the great majority of customers were the sailors and Marines who initiated trouble tickets by asking for assistance. These customers returned customer satisfaction surveys (see Appendix B) via e-mail to Navy customer support centers. The surveys questioned the customer on service quality, including completeness, timeliness, professionalism, and overall satisfaction. Operationally defined, the overall customer satisfaction variable had five possible values: strongly agree, agree, neutral, disagree, or strongly disagree. Conceptually, the variable recorded the quality of service level, and the quality of an organization's service level, as perceived by its users, is a key indicator of an organization's success (Pitt et al., 1995). From a theoretical perspective, the variable determined if a relationship was present between KM/IT advancements and perceived level of service. In the theoretical framework of Bartczak (2002), the value of the overall customer satisfaction variable demonstrated an association with the intervention of a KM component hierarchy.

If the value was high, then the intervention was successful. If the overall satisfaction was low, then the KM/IT intervention of the DS program was weak.

Table 2 Variables Exported from the Customer Satisfaction Database


Variable Name                          Type     Measure
Ticket Tracking No.                    String   Nominal
Create Date (Survey)                   Date     Scale
DateTime Closed (Trouble Ticket)       Date     Scale
Customer Site                          String   Nominal
Knowledge Question Score               Numeric  Ordinal
Professional Question Score            Numeric  Ordinal
Completeness Question Score            Numeric  Ordinal
Timeliness Question Score              Numeric  Ordinal
Overall Satisfaction Question Score    Numeric  Ordinal

Surveys are among the instruments most widely used by information systems researchers for a variety of reasons. These reasons include simplicity of administration and scoring, ability to generalize responses to other members of the population, and ease of reusability (Newsted et al., 1998). Survey methodology is a valid and reliable measurement of social phenomena, and surveys are one of the primary methods on which organizational researchers can usually rely (Eaton & Struthers, 2002). However, researchers have identified various limitations of survey-based research. For example, although survey-based research may be the most appropriate method of data collection to describe a population, survey research is generally strong on reliability but weak on validity (Babbie, 1995).


The third key variable was the trouble ticket resolution time. The resolution time, or time required to solve a support request, was calculated from the date and time the trouble ticket was opened and the date and time it was closed; resolution time was simply the time elapsed between the two recorded times. Conceptually, resolution time was both a measure of efficiency and a measure of service level. As a measure of efficiency, resolution time demonstrated the labor a support technician or a help desk devoted to solving a customer's problem. The CSR labor cost is the single largest component of the cost of providing customer support (Das, 2003). In this sense, the variable is organizationally related, and researchers can use it to demonstrate cost savings and other use of resources. As a measure of service level, resolution time demonstrates how long a customer had to await repair after making a request. The resolution time of reported problems strongly affects a customer's assessment of customer support service (Zeithaml et al., 1990). In this sense, resolution time is service related, and the customer knows and understands it as a quantifier of service level. Customers can reasonably expect expedient service, and studies have shown that customer satisfaction is much easier to accomplish when problem resolution is as expedient as possible (Byrnes, 2005; Meltzer, 2001; Newell, 2000; Read, 2002). Operationally, the resolution time variable was expressed in hours.
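As a minimal sketch of this derivation (in Python rather than the study's PASW, with hypothetical column names standing in for the DateTime Open and DateTime Closed fields of Table 1), resolution time in hours follows directly from the two recorded timestamps:

    import pandas as pd

    # Illustrative ticket extract; the column names are assumptions for
    # illustration, not the actual DS CRM field names.
    tickets = pd.DataFrame({
        "ticket_tracking_no": ["T-001", "T-002"],
        "datetime_open": pd.to_datetime(["2006-03-01 08:00", "2006-03-02 14:30"]),
        "datetime_closed": pd.to_datetime(["2006-03-01 11:15", "2006-03-07 09:00"]),
    })

    # Resolution time is the elapsed time between opening and closing,
    # expressed in hours.
    elapsed = tickets["datetime_closed"] - tickets["datetime_open"]
    tickets["resolution_hours"] = elapsed.dt.total_seconds() / 3600

    print(tickets[["ticket_tracking_no", "resolution_hours"]])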

The fourth key variable was the year of the program, taken from the closing date of the trouble ticket. Conceptually, the year of the program defined the stage of intervention from KM and IT tools in customer support. At program initiation, KM and IT tools were under development for the DS program and had not yet influenced the rate of trouble calls resolved using remote methods or the level of service (customer satisfaction). In the latter years of the program, the intervention of KM and IT through the DS program may have demonstrated a relationship with the other variables. Operationally, the year of the program included the years 2002 through 2009. Table 3 provides a summary of the key variables in the study and their relationships to the hypotheses.

Table 3 Key Variable Relationships


Hypothesis   Independent Variable    Dependent Variable
H1₀          Program Year            Resolution Method
H2₀          Program Year            Resolution Time
H3₀          Program Year            Customer Satisfaction
H4₀          Resolution Method       Resolution Time
H5₀          Resolution Method       Customer Satisfaction
H6₀          Resolution Time         Customer Satisfaction

Setting

The research study procedures neither controlled nor influenced the database of customer support trouble tickets, customer satisfaction surveys, or other data targeted for study. The research setting was primarily a DoN computer terminal, analyzing only the historical project data of closed customer support trouble tickets of U.S. Navy combat systems and related customer satisfaction surveys during an eight-year period.

The setting allowed observations of the quantifiable trends of customer support trouble tickets and satisfaction surveys during this period using documented report histories. Effectiveness of the DS program depends on global, Navy-wide access to current trouble ticket information, and knowledge management and IT drive the program toward that goal. The customer support call centers in the DS network use standard CRM software to share trouble ticket databases, which creates a common operating perspective across all support organizations throughout the Navy. The following scenario depicts the method of knowledge capture and knowledge transfer used in the DS program. A sailor or Marine uses every local resource available to repair a combat system on a ship or shore site. When the sailor or Marine exhausts all troubleshooting procedures, adequate repair skills are not available, or repair parts are not onboard, the sailor or Marine typically calls or e-mails the Global Distance Support Center (GDSC). Support requests may be submitted in a variety of ways, including e-mail, secure and non-secure telephone, fax, the DS website, chat sessions, Naval message, and customer walk-in (NAVSEA, 2009). When the call center receives the support request, the technician creates a trouble ticket in the DS customer relationship management (DS CRM) database. The data on the trouble ticket then become accessible by all the customer support centers in the DS network. A support request may go to a networked customer support call center or directly to the GDSC call center. Regardless of whom the customer contacts, the customer support center initially contacted creates the trouble ticket in the DS CRM system. The system automatically generates a tracking number when saving the transaction, and the tracking number remains with the trouble ticket as a way of identifying an individual issue.

The sailor or Marine receives assistance from one or a combination of four tiers of Navy DS service (NAVSEA, 2009): Tier 1 (call center), Tier 2 (help desk), Tier 3 (subject matter expert), and Tier 4 (shipyard or original equipment manufacturer). A new trouble ticket includes data field entries for the name of the customer and the customer's location, which is typically a ship, squadron, Navy base, or other shore site. The ticket also includes important contact information, the system name, a problem description, and an assigned priority for the request, ranging from Priority 1 (critical) to Priority 4 (quality of service). If the customer support call center cannot resolve the issue, the call center immediately transmits the trouble ticket to the appropriate help desk for repair. All help desks are within the global network as part of the distributed DS CRM system. If the assigned help desk determines the issue requires action from another help desk, the technician can easily assign the trouble ticket to the appropriate source of support. If the DS help desk cannot resolve the issue using remote resources such as telephone or e-mail, the help desk advises the ship/customer of the limitation and the planned corrective course of action, which is usually to deploy an assistance team or to await the ship or squadron's return to homeport. When a trouble ticket problem reaches resolution, the help desk that solved the issue forwards the resolution to the customer via the most appropriate means, including telephone, e-mail, chat session, or Naval message. The help desk documents the trouble ticket with details of the assistance rendered.

In the case of a problem that required a physical visit to the site, the technician completes the trouble ticket resolution method, or type of assist field, using on site or transition assistance. Otherwise, the resolution method is noted on the trouble ticket as distance support. The resolution method variable was a critical selection to determine if the DS program had been effective in using remote resources to solve problems, and was a critical variable for this research to determine if KM and IT tools have made a difference. A sufficient level of detail is available for the sailor or Marine customer to determine whether the solution effectively satisfied the initial support request. When all the actions are complete, the help desk places the trouble ticket in a closed status. A closed trouble ticket indicates completion of efforts to address the issue, and the trouble ticket moves to permanent storage in the DS CRM database. The top 25 support providers in the DS program appear in Appendix C, listed in order of most trouble tickets closed. When closing a trouble ticket, the DS CRM database automatically generates a customer satisfaction survey to the originating customer's e-mail address recorded when opening the ticket. The survey displays the trouble ticket number to which it refers and asks five questions about CSR knowledge, CSR professionalism, request completion, timeliness, and overall satisfaction. The responses to each question are selected from a 5-point Likert-type scale, ranging from strongly disagree to strongly agree. The customer has the option of either disregarding the survey request message or completing the web-based survey and returning it to the customer support center by e-mail. At the customer support centers, the returned satisfaction surveys reside permanently on a centralized web server for analysis, and technicians can reference the original support request by the trouble ticket number on the survey.
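A compact sketch of the ticket record implied by this workflow appears below; the field names and types are assumptions for illustration, not the actual DS CRM schema.

    from dataclasses import dataclass
    from datetime import datetime
    from enum import Enum
    from typing import Optional

    class TypeOfAssist(Enum):
        ON_SITE = 1           # resolved by a technician in the local area
        TRANSITION = 2        # resolved by a technician dispatched from outside the area
        DISTANCE_SUPPORT = 3  # resolved remotely (telephone, e-mail, web)

    @dataclass
    class TroubleTicket:
        tracking_no: str                 # generated automatically when the ticket is saved
        customer_site: str               # ship, squadron, Navy base, or other shore site
        system_name: str
        problem_description: str
        priority: int                    # 1 (critical) through 4 (quality of service)
        opened: datetime
        closed: Optional[datetime] = None              # set when the ticket is closed
        type_of_assist: Optional[TypeOfAssist] = None  # recorded at resolution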

Sample

The target population was the records in two Navy customer support databases. The first was the database of combat system trouble tickets recorded from 2002 until 2009, containing over 2.5 million cases. A simple random sampling method was used to select between 7,000 and 10,000 trouble ticket cases from each year since program inception to ensure a 95% confidence level (±1% confidence interval). Proportionate allocation enabled sampling a fraction of tickets from each year, proportional to that year's share of the total population. Because the population of trouble tickets increased during each year of the DS program, the sample size increased proportionally. The merged stratified random sample file of all years contained approximately 70,000 trouble ticket cases. The U.S. Navy's designated Naval Sea Logistics Center representative authorized permission to use the data and retained review authority over the data and analysis. Although the data were unclassified, they could have been sensitive, since much of the data could have concerned classified programs. The actual names of systems, nature of problems, and other sensitive data did not enter or apply to the study. Collecting the data required access to sensitive and proprietary information that included ships' repair and maintenance information. Sampling followed a defined procedure. An electronic (MS Excel) spreadsheet was the repository for the entire population from each year after downloading from the customer support web server. The download included only the specific variables pertaining to the study: the name of the system, the opening date of the trouble ticket, the closing date of the trouble ticket, the customer's site, the general nature of the problem, and most importantly, the type of assistance used to resolve the problem (remote support or local support).

Excluded from access was any personal customer contact information. All personal data fields on trouble tickets were removed prior to data transfer for analysis, a requirement under the Privacy Act of 1974 for using military personnel information. All relevant data were on the electronic spreadsheet. The PASW statistics software imported the MS Excel files for analysis, creating one PASW file for each year of the DS program. Individual files for each year allowed for valid and reliable sampling for each year of the program, since the population size increased each year. A web-based sample size calculator available as a public service determined sample sizes (Creative Research Systems, 2009). The random sampling functions in PASW extracted a simple random sample of between 7,000 and 10,000 cases, proportional to the population of cases for each year. The aim of the sample was to achieve a 95% confidence level with a ±1% confidence interval. The samples from each year merged into a single PASW file of approximately 70,000 cases, creating a stratified random sample by year with proportional allocation. The estimated population and sample sizes from each year are in Table 4.
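A sketch of this procedure in Python rather than PASW (the file names are hypothetical; the per-year sample sizes follow Table 4 below):

    import pandas as pd

    # Target simple random sample size for each program year (see Table 4).
    sample_sizes = {2002: 7275, 2003: 8678, 2004: 9164, 2005: 9164,
                    2006: 9379, 2007: 9379, 2008: 9423, 2009: 9474}

    yearly_samples = []
    for year, n in sample_sizes.items():
        # One spreadsheet per year, downloaded from the customer support web server.
        population = pd.read_excel(f"tickets_{year}.xlsx")
        # Simple random sample within the year; each year forms one stratum.
        yearly_samples.append(population.sample(n=n, random_state=0))

    # Merging the per-year samples yields a stratified random sample of roughly
    # 70,000 cases with allocation proportional to each year's population.
    stratified_sample = pd.concat(yearly_samples, ignore_index=True)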

Table 4 Estimated Population of Cases and Corresponding Sample Sizes for Each Year
Year          2002     2003     2004     2005     2006     2007     2008     2009
Population  30,000   90,000  200,000  200,000  400,000  400,000  500,000  700,000
Sample       7,275    8,678    9,164    9,164    9,379    9,379    9,423    9,474
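The study cites a web-based calculator; the standard formula for estimating a proportion with finite population correction (a reasonable assumption about what such calculators compute) reproduces the sample sizes in Table 4:

    def sample_size(population: int, z: float = 1.96, e: float = 0.01, p: float = 0.5) -> int:
        """Required sample size at confidence z and margin of error e,
        corrected for a finite population."""
        n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population size, about 9,604
        return round(n0 / (1 + (n0 - 1) / population))

    for year, pop in [(2002, 30_000), (2003, 90_000), (2006, 400_000), (2009, 700_000)]:
        print(year, sample_size(pop))   # 7275, 8678, 9379, 9474 -- matching Table 4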


The second database contained records of customer satisfaction surveys related to the trouble tickets. All the satisfaction surveys available underwent analysis. The survey information was downloaded into MS Excel spreadsheets and then imported into PASW statistics software for analysis. The download included only the specific variables pertaining to the study, such as satisfaction scores. Access to any personal customer contact information was unavailable, since personal data fields on satisfaction surveys were removed prior to data transfer.

Instrumentation and Measures

The DS program depends on collaboration using IT and KM to connect sailors to shore-based support engineers when needed. Since the DS program began, the enterprise had collected volumes of help desk metrics that were available for investigation and research. The metrics included those for resolution times, resolution satisfaction, most frequent type of Fleet issue, ship class analysis, and the source of issues. The Naval Sea Logistics Center provided customer support data and satisfaction surveys for the study from the U.S. Navy's GDSC customer support database and customer satisfaction survey database. The GDSC serves as a central repository for nearly all Navy and Marine Corps customer support requests about computer-based combat systems. The GDSC was the recommended primary entry point for Navy sailors and Marines seeking technical assistance. Regardless of whether the support request went to the GDSC or to a separate networked support center, the details were within the DS CRM system and accessible within the shared data environment (SDE) on a trouble ticket form, also referred to as a case. From the point of inception through completion and closure, the ticket remains within the DS CRM system for processing and management.

Under the SDE concept, the customer support call centers within the Navy's DS network use standard CRM software to share contact center databases, creating a common operating perspective across all support organizations and functional disciplines (NAVSEA, 2009). After documentation of a support request, the associated data are accessible by all the organizations in the DS network. If a help desk is not yet part of the DS enterprise integration, the site periodically forwards a file of common data elements to the GDSC so the specific support request data can be included in the Navy-wide DS metrics.

Data Collection

An electronic, web-based form tracks each message or phone call received from military customers in a customer support database. The forms are stored within a single, integrated DS CRM database that employs commercial off-the-shelf software. The forms allow recording of unique information about the incident, such as name and phone number, as well as selection from a list for standard data, such as the customer's site, system, and system components for failures. The trouble ticket form is the vehicle for documenting the customer requirement, forwarding the requirement to the appropriate source of support, reporting completed action, and generating metrics to evaluate both the timeliness in processing and the effectiveness in successfully resolving support requests (NAVSEA, 2009).


Trouble ticket volume increased from under 20,000 tickets per year in 2003 to over 700,000 tickets per year in 2009. A stratified sampling of the trouble ticket population, therefore, maintained a 95% confidence level with a ±1% confidence interval for each year. In the later years analyzed within the study, sample sizes were over 9,000 cases. The trouble ticket form allowed tracking of other data, such as response times, number of site accesses, frequency of use, and number of users, in addition to the number of help desk calls. Data collection originated entirely from the customer relationship management (CRM) database of trouble ticket information and the customer satisfaction survey database. Generally, the procedure exported specific data fields into multiple electronic spreadsheets, imported those data into multiple files using the PASW Statistics for Windows (version 18) software tool set, and then drew a simple random sample of cases from each year. Merging the random sample files from each year produced a single PASW file of approximately 70,000 cases on which to perform the quantitative analysis. The system automatically e-mails a 5-question customer satisfaction survey to every customer when closing a trouble ticket, and the e-mail includes the closed trouble ticket number. Due to response rates, satisfaction surveys are not available for every trouble ticket. The study analyses included all available customer satisfaction surveys from the DS CRM database servers. The process included downloading the survey data into MS Excel spreadsheets, importing the data into separate PASW files, and then merging them into a single PASW file for analysis. Unlike the trouble ticket database, a sample was not necessary for the satisfaction surveys. The trouble ticket number was the identifier used to cross-reference the survey to the trouble ticket from which it originated.

The PASW statistics application then held the merged file of cases, consisting of trouble ticket values and satisfaction survey values matched on the common trouble ticket number.
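A sketch of this cross-referencing step (hypothetical file and column names), joining each returned survey to its originating ticket on the shared tracking number:

    import pandas as pd

    tickets = pd.read_excel("ticket_sample.xlsx")   # stratified trouble ticket sample
    surveys = pd.read_excel("surveys.xlsx")         # all available satisfaction surveys

    # An inner join keeps only the tickets that have a returned survey; the
    # tracking number is the identifier common to both records.
    merged = tickets.merge(surveys, on="ticket_tracking_no", how="inner")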

Data Analysis

A statistical model generated the expected values of the study using existing data and research. The goal was to detect statistically significant shifts in rates rapidly and reliably by creating numerical criteria for objectively, rather than subjectively, evaluating and making decisions about the percentages (Lohr, 1998). The research design was a quantitative methodology using descriptive statistical methods for analysis because of the large data population. Descriptive statistics are preferable when presenting quantitative descriptions in a manageable form, especially when simplifying large amounts of data in a sensible way (Trochim, 2006). A strong summary of the data enabled comparisons across years. Parametric statistics were preferable whenever possible because parametric tests are more powerful than nonparametric tests for distributions that are normal or close to normal (Zimmerman, 1998). Testing of the dependent variables, however, revealed they did not have a normal distribution; therefore, nonparametric tests were used throughout the study. For all the tests in the study, the level of significance was set at p < .05, or a 95% confidence level. The research also incorporated inferential statistics, which draw conclusions and make inferences and generalizations regarding differences among variables. Inferential statistics were appropriate because the research results reached conclusions that extended beyond the immediate data alone (Trochim, 2006).

For example, the study used inferential statistics to make judgments of the probability that an observed difference between years was a dependable one or one that might have occurred due to factors not considered in this study. Using a simple random sample, the analysis collected data from a broad section of projects that used various computer-based combat systems. Due to the distribution of the dependent variables in each hypothesis, a nonparametric chi-square test of independence served most often as the statistical test for the analysis. For example, the null hypothesis in the first test investigated for differences, or independence, between the years in the study. The Pearson correlation coefficient functioned as the measurement of the strength and direction of association between the variables (Cooper & Schindler, 2008), and effect size measured the magnitude of differences between the means of groups. The population of interest was the customer support records of computer-based combat systems under the umbrella of the DS program. The samples were from a population of over 2.5 million trouble ticket cases concerning Fleet problems that had the option of using either DS or on-site technical assistance for resolution. Because the population of trouble tickets had increased with each year of the DS program, proportionate allocation enabled sampling a fraction of tickets from each year that was representative of the entire population. Every closed trouble ticket in the DS CRM system automatically generates a customer satisfaction survey sent to the originating customer via e-mail. The completed satisfaction surveys return to the customer support centers via e-mail. Based on previous research in customer support, analyzing customer satisfaction is an appropriate measure of service level (Berman, 2005; Berry, 1991; Khatibi et al., 2002).

The dependent variable chosen to determine level of service was the overall customer satisfaction. A customer satisfaction survey is associated with each closed trouble ticket, but due to response rates, satisfaction surveys were not available for every trouble ticket. A sufficient population, however, was available for valid and reliable analysis. This particular variable was chosen because its five possible values, from strongly agree to strongly disagree, most clearly demonstrated whether level of service was stable or improving.

Hypothesis 1 Analysis

Tests of hypothesis 1 compared the customer support method used in trouble ticket resolution, converting the variables to numeric values in PASW and testing the two variables. Coding of the independent variable, the program year, used the four-digit year as the value. The dependent variable, resolution method, had three possible values, coded as on site = 1, transition = 2, and distance support = 3. Because the test compared categorical variables lacking mean averages, a nonparametric procedure, the chi-square test, was suitable, with a suggested .05 level of significance (Pyrczak, 1995). A chi-square test of independence was appropriate because the dependent variable was a nominal data value and the grouping of the independent variable was categorical. The variables used to analyze the first hypothesis were the year since inception of the program and the resolution method used when closing the customer support issue; specifically, distance support, on site, or transition. Both variables were categorical for purposes of the hypothesis. While the independent variable, the trouble ticket closing date, could also be considered numerically as a scale variable by using exact values for the date closed, it was more informative to categorize the trouble ticket closing dates into a relatively small number of year groups.

The null hypothesis tested for differences between the years. Nonparametric tests were the only tests applicable with categorical data (Cooper & Schindler, 2008). Chi-square is one of the most widely used statistical tests for categorical data, and it applies to a wide range of issues and problems using frequency data (Pyrczak, 1995). One of the key requirements of the chi-square test is that the data categories are independent and mutually exclusive. On the trouble ticket, resolution method data are nominal, in that a particular resolution method may include three subpopulations; specifically, distance support, on-site support, or transition. Resolution method data are also independent and mutually exclusive. For example, if 20% of trouble tickets in a given year had resolution using on-site support, and 15% had resolution by transition, then the remaining 65% had resolution using DS methods. The chi-square test was appropriate for analysis by observing remote support rates during one year and determining if they were significantly different from the rates expected (Lohr, 1998). The chi-square value is the sum, over all categories, of the squared difference between the observed and expected values divided by the expected value: χ² = Σ (O − E)² / E. If the observed values are equal to the expected values, the chi-square value is zero, indicating no difference between the observation and the expectation, based on the chi-square distribution probabilities (Pyrczak, 1995). One of the important assumptions of chi-square is a sufficiently large sample (Lohr, 1998). Applying chi-square to small samples increases the risk of Type II errors to an unacceptable level. Sample size was not an issue with trouble ticket data, as the population sizes from each year were very large, with most of the years having over 300,000 cases.

Customer support data can thus become a chi-square statistic able to determine whether the average support data during the later period of the DS program differed from the early support data. The analysis results showed the significance of an increase or reduction in a specific support method. The analysis also demonstrated whether the increase or reduction was due to sampling error or chance variation rather than a real shift in resolution method. The null hypothesis was that the DS program intervention of KM and IT did not significantly affect yearly rates of remote trouble ticket assistance. Using percentages, calculation of the chi-square value determined if the remote support rates between the expected and observed data were significantly different. If the null hypothesis was true, the expected percentages for each of the categories should have been similar. Comparisons could not use counts because the number of cases was not the same for all samples.
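As a minimal sketch of the test (using scipy in place of PASW, with invented counts), a chi-square test of independence on a program-year-by-resolution-method contingency table looks like this:

    import pandas as pd
    from scipy.stats import chi2_contingency

    # Invented contingency table: rows are program years, columns are the
    # three resolution methods coded on the trouble ticket.
    observed = pd.DataFrame(
        [[2900, 1100, 3275],
         [2400,  900, 5378]],
        index=[2002, 2003],
        columns=["on_site", "transition", "distance_support"],
    )

    chi2, p, dof, expected = chi2_contingency(observed)
    # A p-value below .05 rejects the null hypothesis that resolution method
    # is independent of program year.
    print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.4g}")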

Hypothesis 2 Analysis

Tests of hypothesis 2 compared program year to trouble ticket resolution time. The program year, the independent variable, required different treatment than in the testing of hypothesis 1. The trouble ticket closing date was treated as a numeric scale variable in anticipation of using a stronger parametric test; specifically, the program year used values for the year closed. The dependent variable, resolution time, was a scale variable defined in hours. The test to use when comparing an independent scale variable to a dependent scale variable is regression, which computes regression coefficients that characterize the strength of the relationship. The appropriate statistical test for a significant linear relationship is the F test, also computed by regression. Linear regression analysis can quantify the strength of the relationship between an independent variable and a dependent variable, and determine if there is any relationship between the two. The dependent variable, however, did not meet the assumptions of a normal distribution, so nonparametric tests were necessary to test the null hypothesis. For this hypothesis, the Kruskal-Wallis test was appropriate to test multiple independent samples. The Kruskal-Wallis test is a one-way analysis of variance (ANOVA) by ranks. It tests the null hypothesis that multiple independent samples come from the same population. Unlike a standard ANOVA, it does not assume normality, and researchers can use it to test ordinal variables. To test the null hypothesis that all population means are equal, the data must consist of independent samples from populations with the same shape (Norusis, 2009). This is a less stringent assumption than having to assume that the data are from normal populations, although the assumption of equal variances remains to ensure validity of the test.
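A minimal sketch of the Kruskal-Wallis test across program years (scipy instead of PASW; the resolution times are invented):

    from scipy.stats import kruskal

    # Invented resolution times (hours) grouped by program year.
    times_2002 = [4.0, 30.5, 120.0, 2.1, 88.0]
    times_2005 = [3.5, 12.0, 96.0, 1.0, 45.5]
    times_2009 = [0.8, 9.5, 48.0, 0.5, 20.0]

    # The H statistic tests whether the yearly samples come from the same
    # population, without assuming normality.
    h_stat, p_value = kruskal(times_2002, times_2005, times_2009)
    print(f"H = {h_stat:.2f}, p = {p_value:.4f}")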


Hypothesis 3 Analysis

Testing in hypothesis 3 compared customer satisfaction over the life of the program. The null hypothesis was that there is no relationship between the age of the DS program and customer satisfaction. The variables underwent conversion to numeric values. The independent variable was program year, categorized as it was in the tests of hypothesis 1. The dependent variable, customer satisfaction, had five possible values arranged on a Likert-type scale: strongly agree, agree, neutral, disagree, or strongly disagree. These values were coded in PASW from 1 to 5, with strongly disagree = 1 and strongly agree = 5. The distribution of the customer satisfaction surveys underwent testing for normality. Because the dependent variable was ordinal, however, the mean was not the optimal estimate of centrality because the distances between the values were arbitrary. A nonparametric procedure designed to test for the significance of the difference between multiple groups was more appropriate. The tests were nonparametric because they made no assumptions about the parameters, such as the mean and variance of a distribution, and did not assume the use of any particular distribution (Norusis, 2009). Testing of hypothesis 3 again used a nonparametric test for multiple independent samples. The Kruskal-Wallis test was suitable to determine if the satisfaction scores were significantly different among the years of the study.

Hypothesis 4 Analysis

The test for hypothesis 4 examined the relationship between the resolution method and resolution time. The resolution method was a nominal independent variable and the dependent variable, resolution time, was a scale variable. Because the test compared a categorical variable with an assumed normally distributed variable with mean averages, a parametric procedure, the one-way ANOVA test, would have been appropriate (Corder & Foreman, 2009). The independent variable was categorical with more than two categories; specifically, distance support, on site, or transition. A one-way ANOVA would have been appropriate if the population of the dependent variable, the resolution time, had a normal shape (Gardner & Altman, 1989).

The purpose of ANOVA is to determine whether the comparison groups differ significantly among themselves on the variables studied (Corder & Foreman, 2009). The assumptions of the standard ANOVA, however, were invalid, so nonparametric procedures were more appropriate to test for the significance of the difference between the groups. As nonparametric alternatives to the one-way ANOVA, the tests used were the Kruskal-Wallis test and Spearman's correlation coefficient. No assumptions about the mean and variance were necessary for the tests, and the tests did not assume the dependent variable had any particular distribution.

Hypothesis 5 Analysis

Testing of hypothesis 5 determined the significance of the relationship between the resolution method and customer satisfaction. The resolution method was a nominal independent variable and customer satisfaction was an ordinal dependent variable. Coding followed previous descriptions. Because the independent variable had more than two categories and the dependent variable was ordinal, testing included the Kruskal-Wallis test and the Spearman correlation coefficient to determine if satisfaction scores differed by the method of resolution.
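A minimal sketch of the Spearman rank correlation between the coded resolution method and the overall satisfaction score (scipy; values invented):

    from scipy.stats import spearmanr

    # Resolution method codes (1 = on site, 2 = transition, 3 = distance support)
    # paired with overall satisfaction scores (1 = strongly disagree ... 5 = strongly agree).
    method = [1, 3, 3, 2, 3, 1, 3, 2]
    satisfaction = [3, 5, 4, 3, 5, 2, 4, 4]

    rho, p_value = spearmanr(method, satisfaction)
    print(f"rho = {rho:.2f}, p = {p_value:.4f}")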

Hypothesis 6 Analysis

Hypothesis 6 underwent testing to compare trouble ticket resolution time to customer satisfaction. For this test, resolution times were grouped into eight categories with an approximately equal number of trouble tickets in each category; categorizing the resolution times into a relatively small number of time periods was more informative and useful for testing purposes. The eight categories of resolution time were approximately: 1 = 0.8 hours or less; 2 = 0.8 to 3.5 hours; 3 = 3.5 to 25 hours; 4 = 25 to 117 hours; 5 = 117 to 240 hours; 6 = 240 to 356 hours; 7 = 356 to 716 hours; and 8 = 716 hours or more. The resolution time thus underwent transformation into a categorical independent variable, and customer satisfaction remained an ordinal dependent variable. Coding of customer satisfaction survey scores within PASW took place as previously described.
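The eight categories can be reproduced by cutting the resolution times at the stated boundaries; roughly equal-frequency boundaries of this kind are what pandas derives with qcut. A sketch with invented times:

    import pandas as pd

    # Invented resolution times in hours.
    resolution_hours = pd.Series([0.5, 2.0, 10.0, 60.0, 150.0, 300.0, 500.0, 900.0])

    # Bin edges taken from the study's eight categories (hours).
    edges = [0, 0.8, 3.5, 25, 117, 240, 356, 716, float("inf")]
    category = pd.cut(resolution_hours, bins=edges, labels=list(range(1, 9)))

    print(category.value_counts().sort_index())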

Validity and Reliability

Validity is "the best available approximation to the truth of a given proposition, inference, or conclusion" (Trochim, 2006, p. 2). The validity of the data and findings in naturalistic inquiry depends on issues such as trustworthiness, authenticity, prolonged engagement, and continued and correct observation of the research. Validity has four basic types: external validity, internal validity, construct validity, and conclusion validity (Trochim, 2006). External validity means generalizing findings to various events, settings, and times. In the context of this study, the sample from the population of trouble tickets had to be large enough to prevent error. With a small sample, external validity can be an issue. For example, analyzing trouble tickets from only three projects might provide results indicating that on-site technical representatives were still solving most of the customer support calls. These results, however, would be inconclusive because hundreds of projects would remain unexamined. The best way to improve external validity in hypothesis testing is to increase the size of the study sample to minimize the error in estimating population parameters (Lohr, 1998).


Internal validity is the ability of the research to measure what it is supposed to measure (Pyrczak, 1995). A threat to internal validity in the context of this study was the ability to measure whether the trouble tickets had the option of using either organic support or remote support. If the study included trouble tickets for a problem that could find resolution only using organic support, for example, then the measure would not be a valid measure of the remote support option. A project's trouble ticket resolution method must have both options available for the determination to be valid. Another limitation to validity in the context of this study was the assumption that all problem resolutions using DS had trouble tickets. Analyzing trouble tickets is not a valid measure if technicians did not complete trouble tickets for certain projects in the study, or did not enter the ticket in the customer support database. Construct validity is "the degree to which inferences can be made from the study to the theoretical constructs on which the study is based" (Trochim, 2006, p. 4). Like external validity, construct validity deals with generalizations of the research. In the context of the current study, a threat to construct validity would be an improper theoretical view of the evidence collected. For example, construct validity is threatened if the study results cannot support the inference that increasing use of remote support methods and steady overall satisfaction demonstrate effectiveness of the DS program. Conclusion validity means conclusions about relationships in the research are reasonable (Trochim, 2006). In the context of this study, a threat to conclusion validity would be a Navy mandate that prohibited use of local or organic support, which would naturally increase the use of remote support methods. Research might then show an increase in the use of remote support, but a conclusion about the relationship of the DS program and its effectiveness would not be reasonable.

Since the Navy cannot mandate DS as the only means to repair a problem, conclusion validity is not threatened. A measure is reliable if it generates consistent results (Trochim, 2006). Reliability is necessary, but not sufficient, for validity because even measures with high reliability may not be valid in measuring the construct of importance (Hair, Anderson, Tatham, & Black, 2006). If reliable indicators measure the same construct, then a limitation to reliability in this study would be the ability to measure whether customer support trouble tickets reflected the correct source of support, either organic/local support or remote support from a centralized call center. The ability to measure this information correctly depended on whether the CSR completed the trouble ticket accurately. If the CSRs only sometimes completed the trouble tickets correctly, the measure would not be reliable. Too many errors in completing the ticket would cast doubt on the ability to measure the variable in the same way consistently. Limitations of the reliability would also include the many variables that may affect the source of customer support not considered in the study. Such variables include individual differences of the location, the availability of the technician, the nature of the problem, and the different needs of the sailor and Marine Corps customer. The study assured validity and reliability by reviewing the database for extraneous cases. Only cases within the years of study underwent analysis. For the resolution method variable, which related to either local support, dispatched support, or remote support from a centralized call center, data review for possible errors included examining trouble tickets in detail on a frequent, random basis before creating the stratified random sample for each year. Customer satisfaction surveys underwent similar examination for errors and content prior to analysis.

Only cases with the overall satisfaction rating completed in the survey became part of the data.

Bias

The study of the U.S. Navy's DS program was conducted within the researcher's organization. Conducting research within one's own organization can create many issues. One issue is the risk of not being able to release the research. For example, studying the effectiveness of the U.S. Navy's DS program might not have yielded positive results or other results the U.S. Navy would embrace and wish to advertise. The DoN could have created obstacles to publishing the study to maintain a positive public profile. Another issue when conducting the research is significant bias. For example, if the research were at risk of a publication ban due to unfavorable results, the researcher might have changed the results. The risk of rejection could have compelled the researcher to act unethically by skewing the results into a favorable outcome to ensure release of the study. The Academy of Management code of ethics states it is the duty of members to minimize the possibility that results are misleading and, when possible, to consult with experts or authoritative bodies on the ethics of research if a practice is unclear. The code of ethics contains statements for academicians, researchers, and managers, and it provides guidance on issues such people might encounter in professional work (Academy of Management, n.d.).


Ethical Considerations

Ethical considerations of this study are few, because the research involved data collection from a customer support database that did not involve human subjects or the use of personally identifiable information. According to Creswell (2009), the principles of research ethics are respect for persons, respect for beneficence, and respect for justice. In terms of respect for persons, the study did not involve participants, so no concern existed regarding participants acting on their own free will. Creswell (2009) also stated that researchers must inform participants of the intent of the study, never place them at risk, and never marginalize or disempower them. In this study, authorities in the U.S. Navy granted permission, and the only data used concerned military combat systems. The potential risks to a researcher conducting a study of the Navy's DS program can be significant. Ethical considerations include severe damage to a reputation that affects a career, or damage to the good standing of an organization. A more serious ethical consideration for a researcher when dealing with the data of this study was compromising classified operational capabilities. Paramount consideration was given to the goal of protecting the data and using only the information needed to test the hypotheses of the study. The researcher also might disclose to U.S. Navy colleagues personal biases, prejudices, technical shortcomings, or other constraints that could hinder the researcher's ability to discharge professional responsibilities in the future. The potential benefits, however, outweighed the risks sufficiently to justify conducting the study of the DS program, because the researcher was able to conduct the research in a timely, respectful, and conscientious way that supported the role of the U.S. Navy in the research.

Also, the researcher could justify the study by advising the U.S. Navy in a timely way about real or potential adverse outcomes or adverse situations discovered in the research that could hamper the Navy's ability to discharge responsibilities to technical support personnel and other stakeholders. Damage to national security is a concern, because many of the systems studied were classified computer systems. While the research required access to sensitive and proprietary information that included ships' repair and maintenance data, all data used were unclassified trouble ticket information that did not reveal current operational warfighting capabilities. Unauthorized disclosure, use, or negligent handling of the information could have caused irreparable injury to the U.S. government or to a contractor of the government. The injury could involve source-sensitive procurement information of the U.S. government or proprietary information of the Navy, a military unit, or a government contractor. The study did not divulge the names of any customers in the customer support database. The prior written approval of the designated Naval Sea Logistics Center representative allowed the disclosure of all data. In terms of beneficence, the research must be meaningful to both the researcher and others; reciprocity must be present wherein both the participants and the researcher benefit from the study, and the research must ensure the protection of the participants (Creswell, 2009). Research justice involves questions as to the ownership, falsification, suppression, and credibility of the data. The researcher verified the data as accurately depicted in the customer support trouble tickets. The study has tremendous opportunity to benefit the support methods of military combat systems. Because the study showed an increase in the use of distance support methods, the study validated the Navy's continued efforts.

The researcher is in the employ of the Department of the Navy, which released the data for the study. Research within one's own organization presents ethical considerations, such as any potential conflict of interest that may compromise the integrity of the study. Because the data were not in the public domain, ethical considerations included the possibility that the data or data analysis might have been modified, skewed, or interpreted differently to obtain favor or release from the employer. In this case, the DoN was in a position to exploit the researcher in some way for organizational benefit. During the study, the researcher had open access to all applicable data, and the DoN in no way attempted to modify, falsify, or otherwise distort the data. To stress the appearance of objectivity and research justice for this study, the researcher did not serve as an officer, director, trustee, or partner for the program under study. The researcher also did not receive any direct financial benefit in relation to the research, or any financial benefit from the DoN or any other organization as a result of the study or its findings.


CHAPTER 4. RESULTS

Introduction

The goal of this study was to assess the intervention of KM and IT on competitive advantage within a military customer support program. In doing so, the study analyzed the theory that knowledge transfer is in a hierarchical relationship with KM and IT in the customer support call center environment. The research investigated the U.S. Navy's DS program, which had its origins in the Chief of Naval Operations' direction to more efficiently maintain the combat systems on ships and aircraft while maintaining or improving the level of service (U.S. Fleet Forces Command, 2003, p. 2). The study procedures therefore used variables to measure efficiency and level of service. Using a quantitative analysis, the procedures targeted variables that could best numerically measure efficiency and level of service over an eight-year period. To study the variables, the process used quantitative research methodology to analyze a customer support database of over 2.8 million trouble tickets and 3,506 customer satisfaction surveys recorded from 2002 through 2009. The research questions and hypotheses used to support the study provided the basis necessary to conduct exploratory research, acquire the necessary data, and perform an analysis on the data from a sample population (Cooper & Schindler, 2008). The statistical tests in the study required a 95% confidence level, or p < .05 level of significance, to reject the null

hypothesis. Most often, the tests revealed significance at the 0.01 level, an outcome noted in the analysis. A summary of the supported and non-supported hypotheses is in Table 5.

Table 5
Support for Alternative Hypotheses

Hypothesis                                                                         Supported  Not Supported
H1a: There is a relationship between the program year and resolution method.      X**
H2a: There is a relationship between the program year and trouble ticket
     resolution time.                                                              X**
H3a: There is a relationship between the program year and customer satisfaction.  X**
H4a: There is a relationship between the resolution method and resolution time.   X**
H5a: There is a relationship between the resolution method and customer
     satisfaction.                                                                 X**
H6a: There is a relationship between the trouble ticket resolution time and
     customer satisfaction.                                                        X*

* Significant at the 0.05 level (2-tailed). ** Significant at the 0.01 level (2-tailed).

The entire population of 2,874,097 trouble ticket cases over an eight-year period was available for examination by exporting specific data fields from a U.S. Navy server database into 73 MS Excel spreadsheets. Importing the 73 Excel spreadsheets into an individual PASW statistical software file for each year of the study created eight PASW files. The distribution of cases by year for the entire database of customer support trouble tickets appears in Figure 3. Selection of a random sample of cases from each year's file, using PASW functionality for a 95% confidence level and 1% confidence interval, provided the data for the study. Merging the eight random sample PASW files into one PASW file created a single random sample PASW file of over 72,000 cases, stratified by year. A simple

random sample was suitable due to the computer processing limitations expected from 2.8 million trouble ticket cases in one file. The final 72,610 valid trouble ticket cases provided a 95% confidence level with a 0.36% confidence interval for a population of 2,874,024 cases.
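For readers who want to verify the sampling arithmetic, the following minimal Python sketch applies Cochran's sample-size formula with a finite-population correction. The study drew its samples with PASW's built-in functions, so this is an independent check rather than the author's procedure; the 1% margin of error corresponds to the study's "1% confidence interval."

```python
import math

def required_sample_size(population: int, z: float = 1.96,
                         margin: float = 0.01, p: float = 0.5) -> int:
    """Cochran's formula with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size (9,604)
    n = n0 / (1 + (n0 - 1) / population)       # correct for the finite population
    return math.ceil(n)

# One program year with roughly 515,000 tickets (2006 in Table 6):
print(required_sample_size(515456))  # ~9429, consistent with the 9,428 cases sampled
```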


Figure 3. Trouble ticket frequency distribution by year

Because the entire population was available, a test to determine sampling error took place. Comparison of the percentage breakdown of the resolution method between the population and the random sample for each of the eight years produced the results shown in Table 6. For the resolution method variable, the results show that the sampling error between the population and the random sample was well within the confidence interval.


Table 6
Resolution Method Frequency Distribution by Year for the Population and Random Sample

        Population (N = 2874024)                Sample (n = 72610)
Year    n        OnSite%  Transition%  DS%      n      OnSite%  Transition%  DS%
2002    36105    1.2      0.3          98.3     7585   1.2      0.4          98.2
2003    98579    5.5      9.4          84.7     8751   5.5      9.8          84.2
2004    230488   2.2      4.2          93.0     9220   2.3      4.1          93.0
2005    303577   0.4      1.1          90.8     9310   0.4      0.8          91.2
2006    515456   0.1      0.1          99.8     9428   0.1      0.1          99.8
2007    483410   0.2      0.1          99.7     9417   0.2      0.1          99.7
2008    505776   0.3      0.1          99.6     9425   0.3      0.1          99.6
2009    700633   0.4      0.1          99.5     9474   0.3      0.1          99.6

The customer satisfaction survey database consisted of thousands of completed satisfaction surveys from the military customers of the Navy's regional combat system help desks. The survey population over an eight-year period became available by exporting specific data fields from U.S. Navy servers at different locations into MS Excel spreadsheets. These Excel spreadsheets were first imported into an individual PASW file for each location and then merged into one PASW file. No sampling was necessary. The survey responses included 5-point Likert-type scale scores from a set of standardized questions about help desk performance. Cross-referencing the surveys to the trouble tickets using the survey's trouble ticket reference number created usable data. Merging the customer satisfaction survey database PASW file with the trouble ticket database PASW file on the trouble ticket reference number related each customer satisfaction survey to the trouble ticket to which it referred.
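The same linkage can be expressed outside of PASW. The sketch below shows the equivalent join in Python with pandas; the file and column names (ticket_ref in particular) are illustrative assumptions, since the actual field labels in the Navy's exports are not given in the text.

```python
import pandas as pd

tickets = pd.read_excel("trouble_tickets.xlsx")       # merged ticket export
surveys = pd.read_excel("satisfaction_surveys.xlsx")  # merged survey export

# Relate each survey to the ticket it references, as the study's PASW
# merge did; "ticket_ref" stands in for the survey's reference number field.
merged = surveys.merge(
    tickets,
    on="ticket_ref",
    how="left",              # keep every survey, matched to a ticket or not
    validate="many_to_one",  # each survey points to at most one ticket
)
```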


Valid Trouble Ticket Cases

Not all customer support trouble tickets met the validity requirements for the study. In the context of this study, trouble ticket cases must have had the option of using either organic support or remote support to maintain internal validity, especially for tests involving the resolution method variable. Valid cases, as previously discussed in chapter 3, were the trouble tickets with a resolution method having all three options available: on-site (resolved at the site by a field technician), transition (technician travels to the site), or distance support (resolved remotely from a help desk or call center). For example, if the study had included trouble tickets for a problem that could be resolved using only on-site technical support, the measure would not have been a valid measure of resolution method, so such tickets were removed from analysis.

The data for cases that did not have all three resolution options available underwent examination. The subject identification in the trouble ticket was a clear method to determine whether all three resolution options were available. All three options were available if the problem subject identified the problem as one that could reach resolution by remote means, by sending someone to the site, or by an on-site field technician. Analysis determined that trouble tickets with the problem subject identification of directory assistance could reach resolution only by using DS. Directory assistance tickets were inquiries to find a specific telephone number or address of personnel or a unit. Because these tickets did not involve traditional troubleshooting or repair of a system, they also unrealistically skewed the resolution times of a customer support program much lower, due to the quick nature of resolving the problem. This category included 2,610 cases, and analyses excluded all of them.

Trouble tickets remaining unresolved for an extended period also presented a threat to internal validity, especially to tests involving the resolution time variable. These cases were typically trouble tickets the operators had disregarded, but then discovered and closed after a long period of time had elapsed. These cases were outliers, and they could have skewed the results of the study by unrealistically increasing mean resolution times. In this situation, the method of resolution recorded (or defaulted) was nearly always DS. These cases were not eligible as valid cases, since all three resolution methods were not available. Cases with trouble ticket resolution times over a year in length were extremely long-term trouble ticket cases. This category included 277 cases, and analyses excluded all of them. After exclusion of all invalid cases, 69,725 cases remained for analysis of the six hypotheses.

Missing values also presented a threat to validity. Missing data may reduce the precision of calculated statistics because there is less information than originally planned. The assumptions behind many statistical procedures rest on a foundation of complete cases, and missing values can complicate the theory required. Investigation revealed that missing values occurred on a random basis, with no discernible pattern. The statistical software handled missing data on an analysis-by-analysis basis. When an analyzed value was missing, the software excluded the case from the test and continued the analysis. Because of the large sample, excluded cases presented little change to the confidence interval of the statistical tests.
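In code, the two exclusion rules reduce to simple row filters. The following pandas sketch illustrates them; the column names problem_subject and resolution_hours are assumptions, and the one-year cutoff uses the 8,760-hour maximum later reported in Table 13.

```python
import pandas as pd

df = pd.read_csv("random_sample_tickets.csv")  # hypothetical export of the sample

HOURS_PER_YEAR = 8760  # the study's one-year resolution time cutoff

valid = df[
    (df["problem_subject"] != "directory assistance")  # DS-only lookups (2,610 cases)
    & (df["resolution_hours"] <= HOURS_PER_YEAR)       # long-dormant outliers (277 cases)
]
# Remaining missing values are left in place and excluded test by test,
# mirroring the PASW analyses.
```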


Data Analysis and Results

The following sections report the data analysis methods and results. After the descriptive data, the presentation and analysis of the data related to each hypothesis test follow. The methodology used in the analyses of the correlation of the variables tested appears in Table 7. Non-parametric procedures were appropriate in this study because the assumption of normality of the distribution was not satisfied for any of the three dependent variables.

Table 7
Relationship of the Research Questions, Variables, and Statistical Methods Used to Determine Statistical Significance of Correlation

Hypothesis  Independent Variable         Dependent Variable               Statistical Test                              Rejection Criteria
H10         Program Year (Nominal)       Resolution Method (Nominal)      Chi-square, Lambda, Goodman and Kruskal tau,  p < .05
                                                                          Phi, Cramer's V
H20         Program Year (Scale)         Resolution Time (Scale)          Kruskal-Wallis, Spearman's rho                p < .05
H30         Program Year (Ordinal)       Customer Satisfaction (Ordinal)  Kruskal-Wallis, Somers' d, Spearman's rho     p < .05
H40         Resolution Method (Nominal)  Resolution Time (Scale)          Kruskal-Wallis, Spearman's rho                p < .05
H50         Resolution Method (Nominal)  Customer Satisfaction (Ordinal)  Kruskal-Wallis, Spearman's rho                p < .05
H60         Resolution Time (Ordinal)    Customer Satisfaction (Ordinal)  Kruskal-Wallis, Somers' d, Spearman's rho     p < .05

Program Year vs. Resolution Method

The first hypothesis tested the relationship between the DS program year and resolution method. The hypotheses follow:

H10: There is no significant relationship between the program year and resolution method (distance/transition/local).

H1a: There is a relationship between the program year and resolution method. Figure 4 and Table 8 depict the frequency of the dependent variable, resolution method, created from the random sample of all eight years of the study. As shown, 96% of the trouble tickets were resolved using DS from a remote help desk. Increasing the percentage of trouble tickets resolved using distance support methods is a major aim of the DS program.

Figure 4. Resolution method frequency distribution

Table 8
Resolution Method Frequency Table

                      f        %       Valid %   Cumulative %
Valid    OnSite       897      1.3     1.3       1.3
         Transition   1374     2.0     2.0       3.3
         DS           66930    96.0    96.7      100.0
         Total        69201    99.2    100.0
Missing               524      .8
Total                 69725    100.0


Analysis of the categorical data in the first research question took place with data tables. A crosstabulation table presents categorical data by counting the number of observations that fall into each group for two variables, one divided into rows and the other divided into columns. For statistical inference, the tables provide a foundation for statistical tests that question the relationship between the variables based on the observed data. Crosstabulation tables are suitable when analysis of categorical data concerns more than one variable, because they display the relationship between two or more categorical (nominal or ordinal) variables. The number of distinct values for each variable determines the size of the table, with each cell in the table representing a unique combination of values.

The customer support database of trouble tickets from the years 2002 through 2009 had groups determined by the resolution method used to close the call. The resolution method coding follows: on-site resolution = 1, transition = 2, and distance support (DS) resolution = 3. A crosstabulation table depicting the tickets by year and by resolution method is in Table 9. To investigate possible differences among the resolution methods by year, analysis included computing the column percentages for each resolution method. From this table, column percentages show a variation in resolution method across the eight years.


Table 9
Program Year vs. Resolution Method Frequency Distribution

               DS                         Transition                OnSite                   Total
Year    Count   Expected  Year %    Count  Expected  Year %   Count  Expected  Year %   Count
2002    7217    7089      98.5%     27     145       0.4%     86     95        1.2%     7330
2003    7107    8165      84.2%     853    167       10.1%    482    109       5.7%     8442
2004    8447    8741      93.5%     380    179       4.2%     211    117       2.3%     9038
2005    8017    7869      98.5%     79     161       1.0%     40     105       0.5%     8136
2006    8969    8692      99.8%     12     178       0.1%     6      116       0.1%     8987
2007    8991    8722      99.7%     9      179       0.1%     18     116       0.2%     9018
2008    8997    8739      99.6%     11     179       0.1%     28     117       0.3%     9036
2009    9185    8911      99.7%     3      182       0.0%     26     119       0.3%     9214

Note. Expected counts truncated to the nearest whole integer. Percentages do not equal 100 due to rounding.

The chi-square test provides a quantitative measure of the relationship of categorical variables. It tests the association between the row and column variables in a two-way table. First, the test determines what the distribution of observations (frequencies) would look like if no relationship existed, and then it quantifies the extent to which the observed distribution differs from that determined in the first step. The null hypothesis H0 assumes there is no association between the variables, or that one variable does not vary according to the other variable: any observed pattern is due solely to chance. The null hypothesis is the assumption, and an objective of statistical testing is to examine whether the test can reject the null hypothesis.

The foundation of the chi-square test is a test statistic that measures the divergence of the observed data from the values expected under the null hypothesis of no association. This statistic requires calculation of the expected values based on the data. The expected value for each cell in a two-way table is equal to (row total × column total) / n, where n is the total number of observations included in the table. The p-value, or significance value, for the chi-square test is P(χ² ≥ X²), the probability of observing a value at least as extreme as the test statistic for a chi-square distribution with (r − 1)(c − 1) degrees of freedom. The lower the p-value, the less likely it is that the two variables are independent or unrelated. A p-value larger than 0.05 would indicate no association between the resolution method and the year of the help desk trouble ticket; the differences between observed and expected values would be negligible, and the result would be a failure to reject the null hypothesis. The test result was significant at the .01 level (two-tailed), so the null hypothesis is rejected. The conclusion from Table 10 is that an association is present between program year and resolution method.

Table 10
Program Year vs. Resolution Method Chi-Square Test Results

                               Value       df    Asymp. Sig. (2-sided)
Pearson Chi-Square             5716.918a   14    .000
Likelihood Ratio               4449.097    14    .000
Linear-by-Linear Association   1690.307    1     .000
N of Valid Cases               69201

a. 0 cells (.0%) have expected count less than 5. The minimum expected count is 95.01.
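Because Table 9 reports the full observed counts, the omnibus test is easy to reproduce. The sketch below does so with SciPy; this is an independent check in Python, not the PASW routine the study ran.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Observed counts from Table 9 (rows: DS, Transition, OnSite; columns: 2002-2009).
observed = np.array([
    [7217, 7107, 8447, 8017, 8969, 8991, 8997, 9185],  # DS
    [  27,  853,  380,   79,   12,    9,   11,    3],  # Transition
    [  86,  482,  211,   40,    6,   18,   28,   26],  # OnSite
])

chi2, p, dof, expected = chi2_contingency(observed)
print(round(chi2, 3), dof, p)  # ~5716.918, 14 df, p < .001, as in Table 10
```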

The lambda statistic is the proportion by which the error is reduced in predicting the dependent variable when using the independent variable (Norusis, 2009). Table 11

shows the directional measures for program year vs. resolution method. The lambda value of 0 means that the independent variable, program year, does not assist in predicting the dependent variable, resolution method. Symmetric lambda, however, shows a significant measure of association. The symmetric lambda is based on predicting the row and column variables in turn (Norusis, 2009). Also, Goodman and Kruskal tau, a modification of lambda, shows a significant but weak association between program year and resolution method. Directional measures are in Table 11.

Table 11
Program Year vs. Resolution Method Directional Measures

                                                           Value   Asymp. Std.   Approx.   Approx.
                                                                   Error(a)      T(b)      Sig.
Nominal by   Lambda        Symmetric                       .021    .001          35.686    .000
Nominal                    Resolution Method Dependent     .000    .000          .(c)      .(c)
             Goodman and   Resolution Method Dependent     .063    .002                    .000(d)
             Kruskal tau
             Uncertainty   Symmetric                       .029    .001          33.492    .000(e)
             Coefficient   Resolution Method Dependent     .193    .004          33.492    .000(e)

a. Not assuming the null hypothesis.
b. Using the asymptotic standard error assuming the null hypothesis.
c. Cannot be computed because the asymptotic standard error equals zero.
d. Based on chi-square approximation.
e. Likelihood ratio chi-square probability.

The next step was to calculate symmetric measures of correlation to determine the strength of the relationship between program year and resolution method. Table 12 shows the results. Values for phi, Cramer's V, and the contingency coefficient show a

positive, but weak, correlation between program year and resolution method. Chi-square-based measures of the two variables are in Table 12.

Table 12
Program Year vs. Resolution Method Symmetric Measures

Nominal by Nominal (N = 69201)    Value   Approx. Sig.
Phi                               .287    .000
Cramer's V                        .203    .000
Contingency Coefficient           .276    .000
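Both phi and Cramér's V are simple functions of the chi-square statistic, so they can be checked directly from the Table 9 counts. A minimal sketch, again in Python rather than PASW:

```python
import numpy as np
from scipy.stats import chi2_contingency

def phi_and_cramers_v(table: np.ndarray) -> tuple[float, float]:
    """phi = sqrt(chi2 / n); Cramér's V = sqrt(chi2 / (n * (min(r, c) - 1)))."""
    chi2 = chi2_contingency(table)[0]
    n = table.sum()
    k = min(table.shape) - 1
    return np.sqrt(chi2 / n), np.sqrt(chi2 / (n * k))

# With the Table 9 counts: chi2 = 5716.918, n = 69201, and k = 2, giving
# phi ≈ .287 and V ≈ .203 -- the Table 12 values.
```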

Program Year vs. Resolution Time

The test of hypothesis 2 compared the DS program year to trouble ticket resolution time. The hypotheses are

H20: There is no significant relationship between the program year and trouble ticket resolution time.

H2a: There is a relationship between the program year and trouble ticket resolution time.

The program year, the independent variable, was the trouble ticket closing date considered numerically as a scale variable by using values for the year closed. The dependent variable, resolution time, was a scale variable defined in hours. A histogram of the dependent variable is in Figure 5, with an overlay of the normal probability plot. The distribution of times differed significantly from a normal probability distribution, with times skewed strongly to the right. Table 13 shows the overall description of resolution times, including mean, median, and mode.


Figure 5. Resolution time frequency distribution

Table 13
Resolution Time Descriptive Data

N           Valid      67042
            Missing    2683
M                      232.641
SE M                   2.645
Mdn                    .095
Mode                   .000
SD                     684.745
Variance               468875.407
Range                  8760.000
Minimum                .000
Maximum                8760.000

Figure 6 shows the mean resolution time in hours for the eight-year period of the study. The mean resolution times of the study are in Table 14.

Figure 6. Program year vs. mean resolution time

Table 14
Program Year vs. Mean Resolution Time Detail

                          Resolution Time
Program Year    N         M          SD
2002            6998      536.507    1163.041
2003            8351      448.733    944.486
2004            8968      217.749    662.730
2005            8424      192.333    629.508
2006            8913      118.939    399.432
2007            8969      123.169    437.250
2008            8988      139.624    455.147
2009            7431      148.314    336.448
Total           67042     232.641    684.745

Regression testing is the method usually chosen to compare an independent scale variable to a dependent scale variable, with an F-test to determine a significant linear relationship. Regression analysis, however, assumes a normal distribution of the

dependent variable and independence of the observations. Figure 7 plots the observed value of resolution time against the value expected if the data are from a normal distribution. The points should cluster around a straight line if the data are from a normal distribution. The normal probability plot diverges significantly from the straight line, meaning the assumption of normality is unreasonable.

Figure 7. Resolution time normal probability plot

Because the sample was large, the Kolmogorov-Smirnov test procedure was an appropriate test to use to compare the observed cumulative distribution function for resolution time with a specified theoretical normal distribution. This test compares the set of scores to a normally distributed set of scores with the same mean and standard deviation (Field, 2000). Computation of the one-sample Kolmogorov-Smirnov Z is from

the largest difference in absolute value between the observed and theoretical cumulative normal distribution functions. This goodness-of-fit test determines if the observations could reasonably have come from the normal distribution. If the test is non-significant, the assumption is that the distribution is normal. If the test is significant, then the distribution is not normal. Results of the test are in Table 15.

Table 15
Resolution Time Normal Distribution Test

                                        Resolution Time
N                                       67042
Normal Parameters(a,b)      M           232.641
                            SD          684.745
Most Extreme Differences    Absolute    .367
                            Positive    .279
                            Negative    -.367
Kolmogorov-Smirnov Z                    95.031
Asymp. Sig. (2-tailed)                  .000

a. Test distribution is normal.
b. Calculated from data.
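An equivalent one-sample test is available in SciPy. The sketch below uses synthetic exponential data as a stand-in, since the 67,042 real resolution times are not public; note that SciPy reports the raw statistic D, whereas PASW's Z is D·√n (here .367 × √67042 ≈ 95.0, matching Table 15).

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
times = rng.exponential(scale=232.6, size=67042)  # placeholder for the real data

# One-sample K-S test against a normal distribution whose mean and SD
# are estimated from the data, as the PASW procedure does.
d, p = kstest(times, "norm", args=(times.mean(), times.std()))
print(d, d * np.sqrt(len(times)), p)  # large D and Z, p ~ 0: normality rejected
```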

The Kolmogorov-Smirnov Z statistic had a value of 95.0, and the probability of the Z statistic was below 0.05. The significant result meant the normal distribution with a parameter of 232.6 was not a good fit for the resolution time within the eight years of the sample. Because the dependent variable did not meet the assumptions of a normal distribution, a non-parametric test was suitable to compare program year to resolution time. To test the null hypothesis, nonparametric tests for multiple independent samples were appropriate: the Kruskal-Wallis test and the Spearman correlation coefficient. First, the Kruskal-Wallis test is a one-way analysis of variance (ANOVA) by ranks. It tests the

null hypothesis that multiple independent samples come from the same population. Unlike standard ANOVA, it does not assume normality, and its use is appropriate to test ordinal variables. To determine if the null hypothesis that all population means are equal is true, the data must be independent samples from populations with the same shape (Norusis, 2009). This is a less stringent assumption than having to assume that the data are from normal populations. The assumption of equal variances, however, remains. The first step of the test is to rank all the data from all the groups together. This step tests the equality of population medians among the groups, which in this case are the program years. The ranks replace the data used in the standard ANOVA. Results are in Table 16.

Table 16
Program Year vs. Resolution Time Mean Rank

Program Year    N        Mean Rank
2002            6998     47167.04
2003            8351     43226.36
2004            8968     31707.97
2005            8424     31584.47
2006            8913     27502.14
2007            8969     28107.34
2008            8988     29503.97
2009            7431     32763.09
Total           67042

The Kruskal-Wallis test resulted in a chi-square test statistic of 8056 and a significance level less than .01. Results of the test are in Table 17. Based on the result, the null hypothesis that there is no relationship between the program year and resolution time was rejected.

Table 17
Program Year vs. Resolution Time Kruskal-Wallis Test Results

Test Statistics(a)    Resolution Time
Chi-square            8056.854
df                    7
Asymp. Sig.           .000

a. Grouping Variable: Program Year
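For reference, the same test in SciPy takes one sample per group. The sketch assumes a pandas DataFrame with hypothetical year and resolution_hours columns; it is not the study's PASW procedure.

```python
import pandas as pd
from scipy.stats import kruskal

df = pd.read_csv("random_sample_tickets.csv")  # hypothetical export

# One array of resolution times per program year, 2002-2009.
groups = [g["resolution_hours"].dropna() for _, g in df.groupby("year")]

h, p = kruskal(*groups)
print(h, p)  # the study reports a chi-square (H) of 8056.854, p < .001
```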

Because the null hypothesis was rejected, a test to determine the degree of association between the two variables was necessary. Spearman's rho correlation coefficient is a measure of correlation for non-parametric variables. As shown in Table 18, program year had a weak negative correlation with resolution time, meaning that as the DS program year increased, the trouble ticket resolution times decreased. The correlation was significant at the .01 level (two-tailed).

Table 18
Program Year vs. Resolution Time Correlation Test Results

                                          Resolution Time
Program Year    Correlation Coefficient   -.252**
                Sig. (2-tailed)           .000
                N                         67042

**. Correlation is significant at the 0.01 level (2-tailed).
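The rank correlation itself is a one-liner in SciPy; the column names are the same assumptions as above.

```python
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("random_sample_tickets.csv")  # hypothetical export

rho, p = spearmanr(df["year"], df["resolution_hours"], nan_policy="omit")
print(rho, p)  # the study reports rho = -.252, p < .001
```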

Program Year vs. Customer Satisfaction

Hypothesis 3 was tested to compare customer satisfaction over the life of the program. The hypotheses are

H30: There is no relationship between the program year and customer satisfaction.

H3a: There is a relationship between the program year and customer satisfaction.

A histogram of the dependent variable is in Figure 8, with an overlay of the normal probability plot. The distribution of customer satisfaction scores differs significantly from a normal probability distribution, with the plot skewed strongly to the left.

Figure 8. Customer satisfaction survey score frequency distribution

For more detail of the scores, a customer satisfaction score frequency table is in Table 19. Analysis of the customer satisfaction surveys determined the values of common measurements of the distribution's central tendencies. Results are in Table 20.


Table 19
Customer Satisfaction Score Frequency Table

                 f       %       Valid %   Cumulative %
Valid    1       95      2.7     2.9       2.9
         2       84      2.4     2.5       5.4
         3       360     10.3    10.9      16.3
         4       1309    37.3    39.7      56.0
         5       1451    41.4    44.0      100.0
         Total   3299    94.1    100.0
Missing          206     5.9
Total            3505    100.0

Table 20
Customer Satisfaction Survey Score Central Tendencies

N                 Valid      3299
                  Missing    206
M                            4.19
Mdn                          4.00
Mode                         5
Variance                     .873
Skewness                     -1.429
SE of Skewness               .043
Kurtosis                     2.248
SE of Kurtosis               .085
Range                        4

The null hypothesis suggested there is no relationship between the age of the DS program and customer service level. For this hypothesis, the independent variable was program year and the dependent variable was customer satisfaction. Figure 9 shows the trend of the mean of overall customer satisfaction scores for each year of the program.


Figure 9. Program year vs. customer satisfaction mean scores

To better determine if the distribution of the dependent variable was normal, a normal probability plot appears in Figure 10. The lower customer satisfaction scores diverge sharply from the straight line, making the assumption of normality suspect. Parametric tests were desirable, but were not appropriate because the dependent variable was not normally distributed. A one-sample Kolmogorov-Smirnov test tests the null hypothesis that the data are a sample from a normal distribution. Results of the test are in Table 21. Because the observed significance levels for the tests were less than .01, the assumption of normality is not reasonable.


Figure 10. Customer satisfaction normal probability plot

Table 21
Customer Satisfaction Survey Score Normal Distribution Test

N                                       3299
Normal Parameters           M           4.19
                            SD          .934
Most Extreme Differences    Absolute    .255
                            Positive    .194
                            Negative    -.255
Kolmogorov-Smirnov Z                    14.625
Asymp. Sig. (2-tailed)                  .000

Based on the test results, the assumption was that the distributions of the customer satisfaction surveys were non-normal. Also, because the dependent variable, customer satisfaction, was ordinal, the mean was not the optimal estimate of centrality because the distances between the values were arbitrary. A nonparametric procedure designed to test

for the significance of the difference between multiple groups was more appropriate. To test the null hypothesis, nonparametric tests for multiple independent samples were suitable: the Kruskal-Wallis test and the Spearman correlation coefficient. The Kruskal-Wallis ranks are in Table 22.

Table 22
Program Year vs. Customer Satisfaction Survey Score Mean Rank

Program Year    N       Mean Rank
2004            104     1722.48
2005            82      1406.46
2006            508     1005.13
2007            836     1276.66
2008            523     2075.04
2009            1231    1979.18
Total           3284

The Kruskal-Wallis test resulted in a chi-square test statistic of 732.6 and a significance level less than .01. Results of the test are in Table 23. The null hypothesis that no relationship is present between the program year and customer satisfaction was rejected. The 3,284 valid cases of customer satisfaction surveys in the test provided a 95% confidence level with a 1.71% confidence interval for a population of over 2.8 million trouble tickets.

Table 23
Program Year vs. Customer Satisfaction Kruskal-Wallis Test Results

Test Statistics(a)    Customer Satisfaction
Chi-square            732.607
df                    5
Asymp. Sig.           .000

a. Grouping Variable: Program Year


Because the null hypothesis was rejected, a test to determine the degree of association between the two variables was conducted. The Somers' d test was appropriate with ordinal data to quantify the strength and direction of the association when one of the variables was dependent and the other independent (Norusis, 2009). The results of the test in Table 24 showed a moderate positive correlation between program year and customer satisfaction, with a significance value less than .01.

Table 24
Program Year vs. Customer Satisfaction Somers' d Test Results

                                                   Value   Asymp. S.E.(a)   Approx. T(b)   Approx. Sig.
Ordinal by   Symmetric                             .341    .014             24.832         .000
Ordinal      Customer Satisfaction Dependent       .316    .013             24.832         .000

a. Not assuming the null hypothesis.
b. Using the asymptotic standard error assuming the null hypothesis.
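SciPy (1.7 and later) exposes the same directional statistic as scipy.stats.somersd, with the second argument treated as the dependent variable. A minimal sketch on placeholder data, since the survey records themselves are not public:

```python
from scipy.stats import somersd

# Placeholder observations: program year (independent) and 1-5 score (dependent).
year  = [2004, 2004, 2005, 2006, 2007, 2008, 2009, 2009]
score = [3, 4, 3, 3, 4, 5, 5, 4]

res = somersd(year, score)  # Somers' d with the score as the dependent variable
print(res.statistic, res.pvalue)
```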

The Spearman correlation test is appropriate when one or both of the variables are not assumed to be normally distributed and interval, but are assumed to be ordinal, as in this case. Spearman's correlation coefficient measures the relationship of variables or rank orders. The results of the test are in Table 25. With a value of .395, the test also showed a moderate positive correlation between DS program year and customer satisfaction. Like the Somers' d test results, the correlation was significant at the .01 level (2-tailed).


Table 25
Program Year vs. Customer Satisfaction Score Correlation Test Results

                                          Customer Satisfaction
Program Year    Correlation Coefficient   .395**
                Sig. (2-tailed)           .000
                N                         3297

**. Correlation is significant at the 0.01 level (2-tailed).

Resolution Method vs. Resolution Time

The test for hypothesis 4 examined the relationship between the resolution method and resolution time. The hypotheses are

H40: There is no significant relationship between the resolution method and resolution time.

H4a: There is a relationship between the resolution method and resolution time.

The resolution method was a nominal independent variable, and the dependent variable, resolution time, was a scale variable. Because the test compared a categorical variable with an assumed normally distributed variable with mean averages, a parametric procedure, the one-way ANOVA test, would ordinarily be appropriate (Corder & Foreman, 2009). The ANOVA statistical technique, however, requires that the dependent variable be normally distributed (McLaughlin, 2009). As previously discussed, resolution time was not normally distributed, so any test that assumed a normal distribution was suspect. Because the assumptions of the standard ANOVA were invalid, nonparametric procedures designed to test for the significance of the difference between multiple groups were more appropriate. The tests were nonparametric because no assumptions about parameters, such as the mean and variance of the distribution, were made. Also, the tests did not assume the dependent variable had any particular

distribution. The Kruskal-Wallis test and Spearman's correlation coefficient are nonparametric tests used for multiple independent samples as an alternative to the one-way ANOVA. The mean ranks of the Kruskal-Wallis test are in Table 26.

Table 26
Resolution Method vs. Resolution Time Mean Rank

Resolution Method    N        Mean Rank
OnSite               884      46844.47
Transition           1337     44999.35
DS                   64327    32844.32
Total                66548

Like the F test in standard ANOVA, the Kruskal-Wallis test did not test how the groups differed, only if they were different in some way. The significance value of the chi-square statistic in Table 27 was less than .05, resulting in rejection of the null hypothesis that the mean resolution times were from similar populations.

Table 27
Resolution Method vs. Resolution Time Kruskal-Wallis Test Results

Test Statistics(a)    Resolution Time
Chi-square            1016.721
df                    2
Asymp. Sig.           .000

a. Grouping Variable: Resolution Method

Researchers determine the value in the Kruskal-Wallis test by squaring each group's distance from the average of all ranks, weighting by its sample size, summing


across groups, and multiplying by a constant. The values in the current test demonstrated that the resolution times differed significantly based on the problem resolution method. Because the null hypothesis was rejected, a test to determine the degree of association between the two variables was necessary. Calculating a Spearman's correlation coefficient determined the correlation between the variables. The results of Spearman's correlation coefficient in Table 28 flag significant non-parametric correlation with asterisks. Resolution method vs. resolution time was significant, with a weak inverse correlation of -.123, showing that increased use of distance support resolution corresponded with a slight decrease in trouble ticket resolution times. Correlation was significant at the .01 level (2-tailed).

Table 28
Resolution Method vs. Resolution Time Correlation Test Results

                                               Resolution Time
Resolution Method    Correlation Coefficient   -.123**
                     Sig. (2-tailed)           .000
                     N                         66548

**. Correlation is significant at the 0.01 level (2-tailed).

Resolution Method vs. Customer Satisfaction

The test of hypothesis 5 determined if there is a significant relationship between the resolution method and customer satisfaction. The hypotheses are

H50: There is no significant relationship between the resolution method and customer satisfaction.

H5a: There is a relationship between the resolution method and customer satisfaction.

The resolution method was a nominal independent variable and customer satisfaction was an ordinal dependent variable. Coding followed previous descriptions. The independent variable had more than two categories and the dependent variable was ordinal; therefore, testing used the Kruskal-Wallis test to determine if satisfaction scores differed by the method of resolution. Normality tests determined that the distribution of customer satisfaction scores was non-normal, and any test that assumed normality was suspect. Also, because customer satisfaction scores were ordinal, the mean was not the optimal estimate of centrality. The tests used did not assume the dependent variable had any particular distribution. The Kruskal-Wallis test is a nonparametric test for multiple independent samples used as an alternative to the one-way ANOVA. The results of the mean ranking for the Kruskal-Wallis test are in Table 29.

Table 29
Resolution Method vs. Customer Satisfaction Score Mean Ranks

Resolution Method    N       Mean Rank
OnSite               98      1865.26
Transition           40      1358.69
DS                   2415    1251.78
Total                2553

The Kruskal-Wallis test determines if the groups are different in some way, similar to the F test in standard ANOVA. The chi-square value from the Kruskal-Wallis test in Table 30 was 76.5, with a significance value less than .005. The results reject the null hypothesis that the customer satisfaction scores of the resolution method groups are equal.

Table 30
Resolution Method vs. Customer Satisfaction Kruskal-Wallis Test Results

Test Statistics(a,b)    Customer Satisfaction
Chi-square              76.460
df                      2
Asymp. Sig.             .000

a. Kruskal Wallis Test
b. Grouping Variable: Resolution Method

The values demonstrated that the customer satisfaction ratings of the service level differed significantly by the resolution method used to resolve the problem. Because the null hypothesis was rejected, a test to determine the degree of association between the two variables was necessary. Calculation of Spearman's correlation coefficient determined the correlation. Spearman's correlation coefficient in Table 31 flags significant correlation with asterisks. Resolution method vs. customer satisfaction was significant, with a weak inverse correlation of -.156, showing that the use of distance support resolution corresponded with a slight decrease in customer satisfaction. Correlation was significant at the .01 level (2-tailed).

Table 31
Resolution Method vs. Customer Satisfaction Survey Score Correlation Test Results

                                               Customer Satisfaction
Resolution Method    Correlation Coefficient   -.156**
                     Sig. (2-tailed)           .000
                     N                         2553

**. Correlation is significant at the 0.01 level (2-tailed).


Resolution Time vs. Customer Satisfaction

The tests of hypothesis 6 compared trouble ticket resolution time to customer satisfaction. The hypotheses are

H60: There is no significant relationship between the trouble ticket resolution time and customer satisfaction.

H6a: There is a relationship between the trouble ticket resolution time and customer satisfaction.

Figure 11. Resolution time vs. customer satisfaction survey score mean distribution

To test the means of the dependent variable, the independent variable, resolution time, was categorized into eight time periods of lengths that equally distributed the cases into percentiles. Categorizing the trouble ticket resolution times into a relatively small number of time periods was more informative and useful for testing purposes. The eight

categories of resolution times ranged from approximately 1 = 0.8 hours or less to 8 = 716.3 hours or more. The resolution time underwent transformation into a categorical independent variable, and customer satisfaction was an ordinal dependent variable. Figure 11 shows the average customer satisfaction scores as a function of the eight groups of resolution times. To test the null hypothesis that all population means were equal, the Kruskal-Wallis test was the appropriate test. The mean ranks of the Kruskal-Wallis test are in Table 32.
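Equal-frequency binning of this kind is what pandas' qcut performs. A minimal sketch, with the same assumed column names as earlier; the 0.8- and 716.3-hour cut points come from the study's own octiles and would fall out of the data rather than be hard-coded.

```python
import pandas as pd

df = pd.read_csv("random_sample_tickets.csv")  # hypothetical export

# Eight equal-frequency (percentile) bins, labeled 1-8 as in the study:
# bin 1 holds the fastest ~12.5% of tickets (about 0.8 hours or less),
# bin 8 the slowest (about 716.3 hours or more).
df["rt_bin"] = pd.qcut(df["resolution_hours"], q=8, labels=list(range(1, 9)))
```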

Table 32
Resolution Time vs. Customer Satisfaction Mean Ranks

Binned Resolution Time    N       Mean Rank
1                         277     1421.19
2                         377     1223.69
3                         316     1257.21
4                         302     1267.35
5                         299     1420.11
6                         328     1137.85
7                         324     1221.73
8                         316     1252.67
Total                     2539

Results of the chi-square test are in Table 33. The observed significance level of the chi-square was less than .005, meaning that the null hypothesis was rejected. The test revealed that the eight groups of resolution times did not have the same mean customer satisfaction scores. The two variables may have some correlation, so further tests were appropriate.


Table 33
Resolution Time vs. Customer Satisfaction Kruskal-Wallis Test Results

Test Statistics(a)    Customer Satisfaction
Chi-square            44.474
df                    7
Asymp. Sig.           .000

a. Grouping Variable: Resolution Time (Binned)

Because both resolution time and customer satisfaction are ordinal, the appropriate statistical tests were the Somers' d test and Spearman's correlation coefficient. The Somers' d test is an extension of the gamma test when one of the variables is considered dependent. It differs from gamma only in that the denominator is the sum of all pairs of cases that are not tied on the independent variable (Norusis, 2009). The Somers' d directional measure in Table 34 shows a very weak inverse relationship when customer satisfaction is the dependent variable, meaning that as resolution time goes up, customer satisfaction goes down. The correlation is significant at the .05 level.

Table 34
Resolution Time vs. Customer Satisfaction Directional Measures

                                                Value   Asymp. S.E.(a)   Approx. T(b)   Approx. Sig.
Somers' d   Symmetric                           -.041   .017             -2.467         .014
            Customer Satisfaction Dependent     -.036   .015             -2.467         .014

a. Not assuming the null hypothesis.
b. Using the asymptotic standard error assuming the null hypothesis.


The Spearman's correlation coefficient in Table 35 shows that resolution time vs. customer satisfaction had a very weak inverse correlation of -.050, indicating that as resolution times increased, a slight decrease in customer satisfaction resulted. Correlation was significant at the .05 level (2-tailed).

Table 35
Resolution Time vs. Customer Satisfaction Score Correlation Test Results

                                                    Customer Satisfaction
Binned Resolution Time    Correlation Coefficient   -.050*
                          Sig. (2-tailed)           .011
                          N                         2539

*. Correlation is significant at the 0.05 level (2-tailed).


CHAPTER 5. DISCUSSION, IMPLICATIONS, RECOMMENDATIONS

Summary and Discussion of Results

The objective of this study was to test a theoretical framework for KM and IT that ultimately leads to knowledge transfer. To test the theory, the study used eight years of trouble ticket data and customer satisfaction survey data from the U.S. Navy's DS program. The Navy initiated the program to reduce excessive spending on labor for field technicians by using advancements in KM and IT at worldwide regional help desks and call centers. In industry, new tools have led to increased efficiency, lowered cost, and more satisfied customers, but leaders did not know if KM and IT initiatives could benefit customer support in an exceptionally demanding military environment.

The literature review included three domains. The domains of customer support, KM, and IT were concepts that provided a foundation for the DS program from the project's origins (Brandenburg, 2002). The relevant literature established how the domains of KM and IT, when integrated properly, can impact the third domain of customer support. Knowledge management and IT have their greatest influence in improving knowledge transfer from call centers and help desks to the customers. The literature indicated that the DS program attempted to integrate KM and IT throughout the span of the program to create efficiencies in customer support while maintaining level of service.


The methodology used was a quantitative research approach based on secondary analysis of data collected from a customer support trouble ticket database and a customer satisfaction database. The trouble ticket analysis used a non-experimental randomized selection design. The analysis of the customer satisfaction database was also non-experimental, but used all of the satisfaction surveys available due to the relatively low rate of return. The analysis included combining information from both of the databases to test six hypotheses relating to four different variables: program year, resolution method, resolution time, and customer satisfaction. The four variables were selected because they could best assess efficiency and service level through the life of the program.

The statistical analysis has shown that the introduction of KM and IT innovations into the customer support field can increase efficiency without negatively affecting the level of service. More remarkably, the results demonstrated that KM and IT could benefit a military customer support program where bandwidth and communication channels are limited. The program's trouble ticket resolution method has shown a continued trend toward lowering the number of required field technician visits to ships and shore sites, which is evidence of an increase in the efficiency of repairs. Trouble ticket resolution times have decreased and customer satisfaction survey scores have increased, which is evidence of a direct improvement in the level of service.

In the theoretical framework of this study, Bartczak (2002) advocated that researchers should recognize knowledge transfer as the higher order goal over KM and IT. Such recognition requires upper levels of military leadership to integrate KM and IT requirements and provide resources to ensure the success of KM initiatives. Bartczak

(2002) identified many barriers that impede KM efforts, including leadership education and commitment, stovepipe approaches to funding, lack of resources, and an inability to show value to customers. By using variables closely tied to KM and IT, results of this study demonstrated quantitatively that barriers to success could be overcome to achieve knowledge transfer in customer support. Values from the three dependent variables tested were all collected from the military customer support database over an eight-year period. Achievement of knowledge transfer in customer support is apparent when a call center or help desk resolves the problem of a support request, rather than an on-site field technician. The study focused on successful knowledge transfer from help desks and call centers, and chose quantitative variables subject to impact from the introduction of KM and IT initiatives.

Efficiency

To investigate efficiency, the quantitative unit of activity was the customer support trouble ticket, or case. A case begins when a user reports a problem and ends when the customer support personnel resolve the problem. Each case contains many quantifiable measures within it that researchers can further analyze for productivity and efficiency. In a related technical support study, Das (2003) used two measures of performance to study productivity: the case resolution time and the extent of the case escalation. These two performance measures were appropriate because they were both internal to the organization and process-oriented. Similarly, analysis in the current study used resolution time and case escalation to assess productivity of customer support work. This study, however, used the term resolution method rather than case escalation, and the variable had three possible levels of escalation. Also, the current

study used the term efficiency rather than productivity, but both terms are measurements of the labor and cost expended to close a trouble ticket case. The trouble ticket resolution method was an appropriate choice because it was a direct measurement of efficiency. Trouble tickets resolved remotely by a call center or help desk are less expensive than escalating the case to the more expensive option of sending a field technician to the ship or shore facility. Previous studies in customer support used the extent of trouble ticket escalation as a performance measure, because using field technicians to solve problems could quantifiably illustrate the rising cost of the resolution effort (Das, 2003).

As coded in the current study, the analysis showed a positive, weak correlation between program year and resolution method. That is, the correlation showed a trend of an increasing percentage of trouble tickets resolved using DS methods. More cases resolved using DS call centers translates to greater efficiency in repairs and resolutions. The results are evidence that the program has used KM and IT successfully to improve access to specific information and other support when necessary, through knowledge bases, content management, and document management. Knowledge management and IT provided more accurate and faster access to known errors, solutions, and recorded problems, and the customer support center achieved knowledge transfer. The findings suggest that CSRs became more adept at using the KM and IT tools as the years progressed.

Strategically, the findings demonstrated the proper intent of customer support according to the research by Robinson and Morley (2006). In their study, the authors found that organizations primarily used customer support call centers as a method to

reduce costs, with customer support delivery a secondary consideration. The current study similarly found that in the DS program, the correct focus was on reducing costs through the positive application of KM and IT tools, even at the expense of more beneficial customer support delivery to the warfighter from on-site field technicians. Efficiency, however, must always be a consideration in the greater context of how it affects service level, since service level will ultimately have an effect on overall costs to the organization. The trouble ticket resolution time is a direct measurement of efficiency and service level, as reduced labor to resolve the issue is a reduction of labor costs, and previous studies showed that lower resolution times improve customer satisfaction (Byrnes, 2005; Meltzer, 2001; Newell, 2000; Read, 2002; Zeithaml et al., 1990). The analysis in the current study showed a weak inverse correlation with the year of the program, meaning that as time has increased, resolution times have decreased. The analysis also demonstrated a weak inverse correlation between resolution method and resolution time, meaning that an increased use of distance support resolution corresponded with a slight decrease in trouble ticket resolution times. The finding agreed with Das (2003), who found that trouble ticket escalation, which would be the opposite of using distance support, was positively associated with resolution time. A critical lesson of industry applicable to this military study is that efficiency must remain in context. As more tools become available to customer support management to quantitatively measure efficiency, the desire to continually perform measurement tasks relating to trouble ticket cases should not distract management. Performance measures must include far more than just efficiency, because customer

service and creating value for the organization includes far more than just efficiency. Customer support centers, like other aspects of a business, must consider return on investment. Results of the Robinson and Morley (2006) study showed that as customer support centers become the primary customer interface channel for an organization, the mismatch of metric irrelevancy may have undesirable consequences. Management must understand and consider measurements beyond efficiency, productivity, and labor time.

Service Level

Research has long shown that customer satisfaction is a direct measurement of service level (Berman, 2005; Berry, 1991; Khatibi et al., 2002; Zeithaml, 1988). Some research, however, has advanced that service level is merely a single component of a broader picture of customer satisfaction (Zeithaml & Bitner, 2003). The customer wholly determined the satisfaction score in the Zeithaml and Bitner study, as the customer had to interpret the service level of the call center, help desk, or field technician. The Zeithaml and Bitner (2003) analysis used the customer satisfaction survey score as the single measurement of service level, but other studies have recommended that measurement of customer satisfaction should be separate from service level to understand how customers evaluate service performance (Dabholkar et al., 2000).

The analysis of satisfaction scores in the current study showed a moderate positive correlation between program year and customer satisfaction, meaning that customer satisfaction has tended to increase over time. The analysis also demonstrated a weak inverse correlation between resolution method and customer satisfaction, meaning the use of distance support resolution corresponded with a slight decrease in customer satisfaction. The correlation was very slight (-.156), and should not suggest that the more

expensive use of on-site field technicians would be preferable just to improve customer satisfaction. The most surprising finding of the current study was that trouble ticket resolution time hardly affected customer satisfaction at all. The results challenged previous studies, which have shown that resolution time is a significant factor in customer satisfaction (Blanding, 1991; Lele & Karmarker, 1983; Zeithaml et al., 1990). The correlation between resolution time and customer satisfaction was only -.050, the smallest correlation of all hypotheses tested. Although the correlation was significant, the small value suggested the time necessary to resolve an issue was not as important to the customer as previously understood. It seemed that as long as customer support centers were actively engaged to repair or restore operation of a system, military customers would understand, and resolution time did not affect perceived service level.

Although this finding was surprising, the result is not unfounded in previous studies. Feinberg et al. (2002), from a study of customer support centers within the financial services sector, found that none of the operational measures used in the research related significantly to customer satisfaction. The Feinberg et al. research suggested that most operational measures had no effect on customer satisfaction or perceived level of service. The lesson for management was that even measures thought of as significantly related to customer satisfaction have a weak influence on it.

Perceived service level, however, remains an extremely important aspect of assessing customer support. Robinson and Morley (2006) found that many call center managers devoted an inappropriate amount of time to functions of their support program that had little to do with customer service. As the current study has supported, the amount of

calls, calls per agent, the duration of calls, and other possible quantitative performance measures have little meaning if the customers are not pleased because of an unfavorable customer service experience. Efficiency and service level must undergo evaluation together to obtain an accurate assessment of customer support's value to an organization.

Awareness and Growth

One of the major goals of the Distance Support program is to increase the U.S. Navy's ability to collect and process relevant maintenance data required to implement cost-effective sustainability strategies (SPAWAR, 2005). To this end, the program dramatically achieved the goal. The most significant result of this study was evidence of a remarkable expansion of documented naval maintenance actions from 2002 to 2009. In 2002, records indicated U.S. Navy and Marine Corps customers on board ships and military installations contacted the centralized support centers fewer than 37,000 times. By December of 2009, the annual volume of trouble tickets had grown to over 700,000 cases.

Although investigating the program's expansion was not a purpose of this study, conclusions must address the large increase in awareness of the program. The massive increase in volume is clearly not attributable to increased maintenance actions, increased failures with equipment, or an increase in the size of the naval force, but rather to an increase in the centralized documentation and accounting of maintenance actions and support requests Navy-wide. The centralized accounting was due in large part to raised awareness and focus from the very highest levels of the Navy's leadership, directed at the sailors and Marines maintaining ships, aircraft, and submarines on a daily basis. Prior to the DS program, regional field technicians often completed repairs and support requiring technical expertise beyond organic capabilities

without accounting for the effort outside of the local unit. The program successfully created awareness for documenting maintenance with a standard that had never existed before, resulting in the ability to measure and interpret maintenance effort and the associated costs for the U.S. Navy.

Knowledge Transfer from KM and IT Support

The premise of this study was that supportive use of KM and IT could lead to successful knowledge transfer. In the military customer support call center, resolved trouble ticket cases can measure successful knowledge transfer. Measures of efficiency and service level underwent analysis to determine the impact of KM and IT on knowledge transfer in the Navy's DS program. The findings of the research into the DS program may be due to cultural influences and the organizational climate of the Navy. The climate of the DS program is rooted in an IT infrastructure, and studies support that a robust IT capability is a necessary, although not exclusive, condition for modern organizational effectiveness (Davenport, 1994). Information technology on its own cannot create organizational efficiencies and savings, mainly because organizational personnel decide the architecture and procedures for the system to achieve the intended results (Nakata et al., 2008). The key to achieving business value with KM and IT may be to understand that computers and technology do some things well, but humans do other things well (Mohamed et al., 2006). Research shows that many of the failures of KM and IT are the result of continued attempts to force one paradigm to operate within the realm of the other. When group factors such as organizational climate and personnel matters are positive, leaders acknowledge, encourage, and admire the advantages of a strong IT

structure in an organization. When the climate is negative, leaders are more likely to disregard IT resources and not use them to their greatest potential (Brown & Starkey, 2000). Although the greatest evidence of the success of the Navy's DS program is the remarkable improvement in accountability and reporting of maintenance actions, the increase in trouble ticket volume may suggest a threat to the validity of the study. An assumption of the study is that users completed trouble tickets for all problem resolutions, but the increase over the eight-year period implied that users were not completing trouble tickets for maintenance actions during the early years of the program. Therefore, some may argue that analyzing trouble tickets is not a valid measure if technicians did not enter maintenance actions in the customer support database when the program started. Examination of the data, however, showed that even if technicians had entered tickets for all maintenance actions, having initial DS resolution rates match the 99%-plus rates of the later years seems unlikely. Indeed, a far more likely explanation is that the undocumented resolutions in the early years came from on-site assistance. On-site field technicians were available at that time, but attempting remote resolution from a call center was not yet an established practice. These represent exactly the undocumented repairs for which the Navy was trying to account to better understand maintenance costs. Over an eight-year period, the volume of maintenance actions tracked in the centralized customer support database increased from 36,105 per year to 700,633 per year, more than 19 times as many cases. The notable expansion alone is a testament to the program's success in ensuring naval personnel use the centralized support resources available to them and report maintenance and support needs when necessary.
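As a rough arithmetic check on that expansion (a back-of-the-envelope sketch using only the two endpoint figures reported above, with no year-by-year data assumed), the growth works out as follows:

```python
# Back-of-the-envelope check on the reported expansion in documented
# maintenance actions, using only the two endpoint figures cited above.
tickets_2002 = 36_105
tickets_2009 = 700_633
intervals = 2009 - 2002  # seven annual intervals between the endpoints

growth_factor = tickets_2009 / tickets_2002           # ~19.4x overall
annual_growth = growth_factor ** (1 / intervals) - 1  # ~53% per year

print(f"Overall growth factor: {growth_factor:.1f}x")
print(f"Implied compound annual growth: {annual_growth:.1%}")
```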

One of the desired objectives of this particular customer support program has been to provide every user with an equivalent experience, regardless of geographic location (Chief of Naval Operations, 2007). Using the program's advanced KM and IT tools, military customers had the ability to connect to customer support call centers despite the environmental constraints of remote locations overseas or underway. The current findings challenged Schulte and Sample's (2006) study, which proposed that major obstacles may prevent the implementation of KM solutions in a military environment and that these barriers may affect a customer support program that relies on KM and IT for success. Whereas such KM barriers rarely exist in the commercial environment, the findings of this study demonstrated that leaders could successfully implement KM solutions in the military, given proper support and the right reasons underpinning the implementation. Davenport and Klahr (1998) observed that with the many technologies available, and a growing number of commercial tools, interest in establishing standards for representing knowledge in the customer support industry would increase. The current study showed that a worldwide customer support enterprise requires standard KM tools to survive and succeed. Implementing standardization in the tools used for customer support has been a continuing challenge for the Navy. Since the program's initiation, the DS program has published a succession of documents to standardize tools and procedures, including overarching business rules to manage support requests for every participating entity in the network. A configuration control board


ensures that all proposed changes to the program's business rules are viable and will not inadvertently impact existing operations within the service (NAVSEA, 2009). The Navy continues to stress efficiency in military operations, and initiatives that can prove their value are a welcome addition in an era of budget deficits. The DS program is part of a range of efforts that target affordability and attempt to control cost growth. With recognizable incentives, the program made innovation an objective from the start and reduced non-productive processes and bureaucracy. An efficiency effort is ultimately about delivering more capability to the military programs that need it, especially front-line personnel and projects in the war on terror. The study supports the findings of Ray (2008), which recognize the need to collect and retain corporate information through the cooperation of knowledge workers in customer support. Centralized storage and retrieval of information is critical to knowledge management, especially in the support of military customers operating in a time-sensitive environment. The DS program insisted on documentation and retention of verifiable solutions to improve the timely application of the knowledge base when needed. The imperative of military operations may have contributed to the demonstrable success, an advantage a typical industry call center or help desk lacks without the motivation or collective purpose of a cooperative, worldwide unity of effort. In many ways, the Navy's DS program advanced over time through two of the three levels of Hirschheim, Schwarz, and Todd's (2006) IT maturity model: from competency to credibility, with commitment still being pursued. The lowest level involved establishing competency in providing basic systems and services, evidenced by an enterprise-wide network of worldwide customer support centers. The second level

established credibility, referring to the program's ability to deliver the right systems to the right sites, both ship- and shore-based, on time and within budget, satisfying a specific business need. The highest level of the Hirschheim et al. model relates to commitment, which the program's management is striving to achieve. At this level, IT serves as a strategic partner, capable of solving strategic business problems in the organization. The DS program is well on the way to achieving this level with continued support from the top leadership of the Navy. Other authors have researched the value of KM and IT initiatives in customer support, either singly or together, to determine business value through competitive advantage (Beatty et al., 2001; Omar et al., 1997; Ribiere et al., 2004). The current study, however, filled a knowledge gap in the literature by focusing on and assessing the relationships among three KM and IT variables simultaneously: resolution time, resolution method, and customer satisfaction. Just as importantly, the study focused on customer support in the military, an increasingly relevant field for the United States because of the growing need to maintain aging military systems and platforms (Mathaisel, Cathcart, & Comm, 2004).

Conclusions

This study supported Bartczak's (2002) theory that military organizations present a unique context in which to deploy KM and IT successfully, and that successful use of KM, built on an IT foundation, can improve customer satisfaction in a customer support role (Omar et al., 1997). The research used a military customer support database to develop performance metrics and trend analysis products, which can further help

customer support program managers monitor product reliability, availability, maintainability, and supportability. This study made the following contributions to the academic literature:

- Provided a method to assess KM and IT initiatives together.
- Developed the first method to comprehensively assess customer support in a military environment.
- Found evidence to support the theory that knowledge transfer is in a hierarchical relationship with KM and IT.
- Uncovered significant customer satisfaction survey collection issues in military customer support.
- Demonstrated that customer support efficiency does not negatively impact customer satisfaction in the military environment.
- Found that the introduction of KM and IT tools had a positive impact on resolving customer support requests remotely, shortening resolution times, and improving service level.
- Demonstrated that the customer support resolution method has a statistically significant but minimal impact on both the time necessary to resolve the problem and customer satisfaction in this unique environment (illustrated in the sketch following this list).
- Discovered evidence that resolution time has very minimal impact on customer satisfaction scores in a military customer support program.
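To make the "statistically significant but minimal" pattern concrete, the sketch below uses fabricated contingency counts (not the study's data) to show how a chi-square test on a very large sample can be highly significant while the effect size, measured by Cramér's V, remains trivial:

```python
# Illustration with synthetic counts (NOT the study's data): with very
# large samples, chi-square tests flag even trivial associations.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical resolution-method x satisfaction table, ~300,000 cases,
# with only tiny differences in dissatisfaction rates across methods.
table = np.array([
    [99_000, 1_000],   # resolved remotely:    satisfied, dissatisfied
    [98_700, 1_300],   # dispatched technician
    [98_900, 1_100],   # resolved on site
])

chi2, p, dof, _ = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

print(f"chi-square = {chi2:.1f}, p = {p:.1e}")  # p far below .05
print(f"Cramer's V = {cramers_v:.3f}")          # yet the effect is tiny (~0.01)
```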

The results of this study are consistent with previous research finding a positive relationship between KM/IT initiatives and customer support (Beatty et al., 2001; Nakata et al., 2008; Omar et al., 1997). The minimal interaction between customer satisfaction and customer support resolution time was a new finding of this study. The results challenge the traditional assumption that customer support managers should be as focused on service level as they are on efficiency. Management should nonetheless continue to scrutinize and, when necessary, act upon customer satisfaction metrics.

Measuring customer support must not depend on operational measures that focus solely on calls, but instead on the call outcome as experienced by customers (Robinson & Morley, 2006). The steady customer satisfaction scores despite increasing trouble ticket resolution times are likely the result of an engaged and attentive naval customer support network attempting to resolve each issue, no matter how long it takes. A lack of attention to service level from support providers will surely result in dissatisfied military customers and increased costs to repair critical combat systems. The study results indicated the crucial need for an organization, whether commercial or military, to recognize the tremendous resource of knowledge in its personnel and to research ways to cooperate with others for a competitive advantage. Smith (2001) supported these conclusions and recommended that organizations develop interactive, sharing environments for new initiatives. Smith suggested that a reward system to encourage employee participation would help build the trusting relationship and openness that foster knowledge sharing throughout the organization. An explicit reward system for CSRs was not apparent in the DS program, but the findings of the current study support implementing such a system to ensure the continued success of this initiative. One of the main outcomes of implementing KM is that it turns experience and information into results (Honeycutt, 2000). The current findings support previous research showing that KM provides a competitive advantage by using institutional knowledge, reducing the time staff spend searching for information, duplicating less work, delivering customer service more efficiently, and allowing more time to improve overall services (Stoll, 2004). Although the focus of this study

was on the military, all of these factors can contribute to any organization's competitiveness in the marketplace.

Limitations

The data from trouble tickets and customer satisfaction surveys were sufficient for analysis of the research questions and statistical testing of the hypotheses. Limitations in the study, however, were unavoidable. The research focused on a U.S. Navy customer support program that had implemented KM and IT initiatives to increase efficiency. Because the Navy represents only one of the three major components of the military, the findings may not generalize or transfer to the Department of the Army or the Department of the Air Force. The trouble ticket random sample was very large. While the large sample size increased external validity, it reduced statistical conclusion validity. Because the tests were based on the chi-square statistic, even small differences were statistically significant when testing a large number of cases, meaning that statistically significant differences may have had little practical importance. Other limitations included the relatively low survey response rate within the Distance Support program. Only 3,505 surveys were available for review, due mainly to low response rates from the military customers. Another factor in the low number was the delay by regional customer support centers in implementing and executing an online customer feedback system able to forward responses to a central control point. The number of surveys still allowed a confidence interval (margin of error) of ±1.71 at the 95% confidence level for a population of over 2.8 million trouble tickets. The analysis might have benefited from a larger response rate and greater participation from the military customers.
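For readers who wish to reproduce the precision figure, a minimal sketch follows using the conventional normal-approximation formula with the most conservative proportion (p = 0.5); the study's reported value of 1.71 came from the Creative Research Systems calculator, so small differences from this sketch reflect that tool's exact inputs and rounding:

```python
# Margin of error for 3,505 surveys drawn from ~2.8 million tickets,
# using the standard normal approximation with conservative p = 0.5.
# The study's 1.71 figure came from the Creative Research Systems
# calculator; minor differences reflect that tool's inputs and rounding.
import math

n = 3_505          # completed surveys
N = 2_800_000      # approximate trouble ticket population
z = 1.96           # z-score for 95% confidence
p = 0.5            # most conservative assumed proportion

moe = z * math.sqrt(p * (1 - p) / n)  # ~0.0166, about +/-1.7 points
fpc = math.sqrt((N - n) / (N - 1))    # ~0.9994: negligible at this N
print(f"Margin of error: +/-{moe * fpc:.2%} at 95% confidence")
```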


A significant limitation was that the analysis used only quantitative data. Qualitative, in-depth research methods may offer better insight into the impact of KM and IT in the military customer support environment. For example, interviews of CSRs and military personnel might produce clearer results about the application of computer systems and how well customers receive support. The military customers offered qualitative data in the comments within customer satisfaction surveys, but such data were not part of the evaluation for this study.

Recommendations for Practice

Based on the research, a number of recommendations emerged from this study. Customer support centers must resist the temptation to rely on the typical, easily attained operational measures, such as average speed to answer, resolution time, and average number of calls, as management tools. These measurements relate to CSR performance and can be determined without much difficulty, compared to direct measurement of customer perceptions (Jaiswal, 2008). Measuring operational variables is so common in industry that researchers assume these variables are vital indicators for a customer support center. In reality, the primary reason for measuring these values is the presence of the many automated tools available to perform the task (Feinberg et al., 2000). These operational variables, however, rarely reflect customers' perceptions of service level. Customer support centers must seek out other variables to determine customer satisfaction and service level.
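One way to act on this recommendation is to report perceptual measures alongside, rather than instead of, the easy operational ones. The sketch below illustrates the idea with a hypothetical ticket table; the column names are assumptions, not the DS program's actual database fields:

```python
# Sketch: pair an operational measure with a perceptual one per
# resolution method. The schema is hypothetical, not the DS program's.
import pandas as pd

tickets = pd.DataFrame({
    "method":           ["remote", "remote", "dispatch", "on_site", "remote"],
    "resolution_hours": [2.5, 4.0, 48.0, 30.0, 1.0],
    "satisfaction":     [5, 4, 4, 5, 5],   # 1-5 survey score, when returned
})

report = tickets.groupby("method").agg(
    cases=("resolution_hours", "size"),
    avg_hours=("resolution_hours", "mean"),      # operational measure
    avg_satisfaction=("satisfaction", "mean"),   # perceptual measure
)
print(report)
```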

A balanced scorecard approach might best serve the management of the military's regional customer support call centers. The balanced scorecard provides multiple measures of achievement across many different dimensions (Kaplan & Norton, 1996). A balanced scorecard approach would ensure that customer service, financial, staffing, and productivity dimensions are all considered in the management of customer support. For example, a study conducted by a customer call center certification service showed that 93% of surveyed companies collected customer opinions (Benchmark Portal, 2006). Within that group, 67% failed to use the collected information to influence internal change. A proper balanced scorecard would use customer opinion to a greater extent and would also create a balance between short-term and long-term objectives. Research has shown that high-performing customer support centers are more likely to share the results of customer satisfaction surveys with the organization's management outside the customer support center (Davis, 2007). By sharing the results of surveys with other management groups, the customer support team makes acting on customer input more probable. The balanced scorecard also strikes a balance between internal and external viewpoints, between lagging and leading indicators of performance, and between financial and nonfinancial measures (Kaplan & Norton, 1996). The military should maintain metrics as a major indicator of management performance while expanding the focus of KM and IT to become more effective. A balanced scorecard framework can make the metrics, the metrics criteria, and their relative importance clear and transparent. Most importantly, the balanced scorecard is built on clear linkages of the performance measures to business strategy (Kaplan & Norton, 2001).
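As a purely illustrative rendering of what such a scorecard might contain for a support center, the structure below follows Kaplan and Norton's four-perspective layout; the specific metrics listed are assumptions, not the DS program's actual measures:

```python
# Illustrative balanced scorecard skeleton for a support call center.
# Perspectives follow Kaplan and Norton; the metrics are examples only.
scorecard = {
    "customer":          ["satisfaction score", "first-call resolution rate"],
    "financial":         ["cost per resolved ticket", "cost avoided vs. on-site repair"],
    "internal process":  ["mean resolution time", "knowledge-base reuse rate"],
    "learning & growth": ["CSR training hours", "CSR retention rate"],
}

for perspective, metrics in scorecard.items():
    print(f"{perspective}: {', '.join(metrics)}")
```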

In industry, a lack of focus on strategy is a key weakness in the management of customer support call centers. Management must understand the major contributors to success, as well as how customer support contributes to the overall organization, and include them in measures of customer support performance and management. The military customer support centers that supplied data for this study continue to work to find a balance between customer service and efficiency. A valuable lesson from industry is that one of the predictors of long-term profitability is a high level of customer service, but not necessarily high levels of efficiency (Wallace et al., 2000). Military leadership in general would do well to adopt a commitment to continued quantitative and conceptual research in knowledge management and information technology, to improve not only customer support but also other fields such as engineering, logistics, and project management. Military customer support must improve its methods of capturing and evaluating the actions resulting from customer feedback. A customer support center loses credibility when it solicits feedback but fails to act on it. To achieve success, military customer support centers should view customer satisfaction surveys as more than simply another measurement to report. Reports often demonstrate high customer satisfaction but do little to act on the customers' identified ideas for improvement. Too often, customer support centers view customer satisfaction surveys as a management tool rather than the voice of the customer. Military leaders must continue to first consider the kind of value they want to create and then consider how IT can help achieve those goals. Satisfaction surveys offer tremendous value to any organization, especially when combined with the tools of IT.

In the military, as in industry, leaders frequently treat IT as an end in itself that can create efficiencies alone, rather than using IT as a tool that can benefit labor and effort. To achieve the efficiencies the military needs, leaders should look across the entire range of products that systems commands and program offices produce. With holistic decisions, military leaders can make the best use of IT resources to keep warfighters equipped and capable. Military leaders must also examine satisfaction internally to ensure that the DS program and other military customer support programs maintain their high value to the organizations they serve. As Zemke (2002) found, customer support personnel are valuable and reliable resources with the ability to effectively gauge the performance of the organization within their particular environment. Leaders should poll CSRs regularly to ensure they are getting quality support themselves, and if results show signs of CSR distress, then leaders should take action to remove barriers to effective performance.

Recommendations for Future Research

The research, analysis, and findings have revealed many additional areas of concern that researchers can investigate in future studies. Future studies could fill the knowledge gap within the field of customer support by using the current research as a broad foundation or starting point for another area. One area of concern that arose from this study was the cost of customer support labor. Determining labor costs could help military leaders make difficult decisions about fielding new technologies or maintaining aging systems and platforms. Future studies could compare average labor costs to resolve tickets at yearly intervals to yield cost analysis information. Researchers could assign a unit cost to each of the three resolution methods and then determine the average costs for on-site repairs by a local field technician, repairs requiring a dispatched field technician, and repairs resolved through a call center or help desk. Applying regression analysis could also reveal trends that predict future costs.
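A minimal sketch of the suggested trend regression appears below; the yearly cost figures are fabricated placeholders, not Navy data:

```python
# Sketch of the suggested regression on yearly support costs.
# The cost values are fabricated placeholders, not Navy data.
import numpy as np

years = np.array([2005, 2006, 2007, 2008, 2009])
avg_cost_per_ticket = np.array([410.0, 395.0, 370.0, 355.0, 340.0])  # dollars

slope, intercept = np.polyfit(years, avg_cost_per_ticket, deg=1)
print(f"Trend: {slope:.1f} dollars per ticket per year")
for future_year in (2010, 2011):
    print(f"Predicted {future_year}: {slope * future_year + intercept:.0f} dollars")
```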

Another possibility for future research is to add a layer variable to the statistical tests. A layer variable, or control variable, used in the procedures and the associated statistics could measure each value of the layer factor, or a combination of values for two or more control variables. For example, in an analysis of resolution method over time, percentages may differ according to regional support site or problem category, so future research could further cross-classify to determine whether the site or the problem category had a significant impact on the support request resolution method. Such an analysis could reveal factors that should concern managers if more obvious reasons did not explain the differences. Future research could also use multiple measures besides customer satisfaction to determine service level. Service levels in customer support can mean many things to many different organizations, but generally they include the services offered and the products supported, a definition of response times for each service, the stated levels of quality, and the speed of services. For example, a future study could measure and analyze the speed to answer calls, the call abandonment rate, first-call resolution, and the number of complaints in a month. The future researcher would need access to more sophisticated data tracking mechanisms, but the resulting analysis could paint a broader picture of service level than a simple analysis investigating only overall customer satisfaction ratings.
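The service-level measures named above could be computed directly from call-center logs, as in this sketch; the log schema and values are hypothetical:

```python
# Sketch: computing the suggested service-level measures from a
# hypothetical call log. Column names and values are assumptions.
import pandas as pd

calls = pd.DataFrame({
    "answer_seconds":   [12, 45, None, 30, 8, None, 22],  # None = abandoned call
    "first_call_fixed": [1, 0, None, 1, 1, None, 0],      # 1 = fixed on first call
    "complaint":        [0, 0, 0, 1, 0, 0, 0],
})

answered = calls["answer_seconds"].notna()
print("avg speed to answer (s):", calls["answer_seconds"].mean())  # NaN skipped
print("abandonment rate:", 1 - answered.mean())
print("first-call resolution rate:", calls["first_call_fixed"].mean())
print("complaints this month:", int(calls["complaint"].sum()))
```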


Another recommendation is to investigate the activities of customer support work to determine where efficiencies can be found. The tasks that typically take the longest in customer support are synthesis and diagnosis of the problem (Das, 2003). Quantifying these tasks is a step in the right direction for further research. New tools might be available to improve the methods used to solve technical issues, but leaders must first recognize and address the gaps in timely, proficient support. One of the limitations of the current study was that it used only a quantitative methodology to observe and measure numerical data. A future study could use a qualitative methodology, or even mixed methods involving in-depth interviews of customers, CSRs, and senior customer support managers. The interviews could use a questionnaire based on insights obtained from a review of the literature on customer support, customer satisfaction, or KM in the military. Qualitative research could be especially useful in this environment because this type of investigation brings researchers closer to the situation under study. The different context of understanding could provide more flexibility to the research and a wealth of information that quantitative methods cannot discover.


REFERENCES

Academy of Management. (n.d.). Code of ethics. Retrieved from http://www.aomonline.org/

Ahmed, P. K., Lim, K. K., & Zairi, M. (1999). Measurement practice for knowledge management. Journal of Workplace Learning, 11(8), 304-311. doi:10.1108/13665629910300478

Air, Land, Sea Application Center. (2003, September). JTF IM multi-service tactics, techniques, and procedures for Joint Task Force information management. Langley Air Force Base, VA: Air, Land, Sea Application Center. Retrieved from http://www.stormingmedia.us/86/8664/A866404.html

Al-Hawamdeh, S., & TanWoei, J. (2001). Knowledge management in the public sector: Principles and practices in police work. Journal of Information Science, 27(5), 311-318. Retrieved from ABI/INFORM Global database.

Armistead, C. G., & Clark, G. (1991). A framework for formulating after-sales support strategy. International Journal of Operations & Production Management, 11(3), 111-124. doi:10.1108/EUM0000000001269

Armistead, C. G., & Clark, G. (1992). Customer service and support. London, England: Pitman.

Athaide, G. A., Meyers, P. W., & Wilemon, D. L. (1996). Seller-buyer interactions during the commercialization of technological process innovations. Journal of Product Innovation Management, 13, 406-421. doi:10.1016/0737-6782(96)00038-0

Babbie, E. (1995). The practice of social research (7th ed.). Belmont, CA: Wadsworth.

Barley, S. R. (1996). Technicians in the workplace: Ethnographic evidence for bringing work into organization studies. Administrative Science Quarterly, 41, 404-441. doi:10.2307/2393937

Barney, J. B., & Ketchen, M. W. (2001). Resource-based theories of competitive advantage: A ten-year retrospective on the resource-based view. Journal of Management, 27, 625-642. doi:10.1016/S0149-2063(01)00115-5

Bartczak, S. E. (2002). Identifying barriers to knowledge management in the United States military (Doctoral dissertation, Auburn University). Retrieved from ProQuest Dissertations and Theses database. (AAT 3071350)

Bassi, L. (1997). Harnessing the power of intellectual capital. Training & Development, 51(12), 25-28. Retrieved from ABI/INFORM Global database.

Beatty, R. C., Shim, J. P., & Jones, M. C. (2001). Factors influencing corporate website adoption: A time-based assessment. Information and Management, 38(6), 337-354. doi:10.1016/S0378-7206(00)00064-1

Benchmark Portal. (2006). The ultimate service improvement solution. Retrieved from http://www.benchmarkportal.com/pressrelease/2006/01/12/

Bennett, A. (Ed.). (2000). DON KCO toolkit, Department of the Navy, Office of the Chief Information Officer. Washington, DC: U.S. Department of Defense.

Berg, J., & Loeb, J. (1990). The role of field service in new product development and introduction. AFSM International: The Professional Journal, 14(9), 25-30. Retrieved from ABI/INFORM Global database.

Berman, B. (2005). How to delight your customers. California Management Review, 48(1), 129-151. Retrieved from ABI/INFORM Global database.

Berry, L. L. (1991). Mistakes that service companies make in quality improvement. Bank Marketing, 23(4), 68. Retrieved from ABI/INFORM Global database.

Bitner, M., Booms, B., & Mohr, L. (1994). Critical service encounters: The employee's viewpoint. Journal of Marketing, 58(4), 95-106.

Bitner, M., Booms, B., & Tetreault, M. (1990). The service encounter: Diagnosing favorable and unfavorable incidents. Journal of Marketing, 54(1), 71-84. doi:10.2307/1252174

Bixler, C. (2002, May). Knowledge management: A practical solution for emerging global security requirements. KM World, 11(5), 18-28. Retrieved from http://www.kmworld.com/Issues/293-May-2002-5bVolume-112c-Issue-55d.htm

Blanding, W. (1991). Customer service operations: The complete guide. New York, NY: AMACOM.

Brandenburg, C. F. (2002, November). Distance support and readiness [PowerPoint]. Presentation to NDIA conference. Retrieved from http://www.dtic.mil/ndia/2001systems/brandenburg.pdf

Brandenburg, C. F. (2010). Distance support North Star Vision & DS transformation [PowerPoint]. Retrieved from https://secure.anchordesk.navy.mil/anchordesk/discuss.nsf

Broetzmann, S. M., & Grainer, M. (2005, November). First results of the 2005 National Customer Rage study. Alexandria, VA: Customer Care Alliance. Retrieved from http://www.ccareall.org/read.html#white

Brown, J. P. (2003). Evaluating a technical support knowledge base: A case study in strategy, methods, and organizational change (Doctoral dissertation, Indiana University). Retrieved from ProQuest Dissertations & Theses database. (AAT 3097375)

Brown, A. D., & Starkey, K. (2000). Organizational identity and learning: A psychodynamic perspective. The Academy of Management Review, 25(1), 102-120. doi:10.2307/259265

Bureau of Labor Statistics, U.S. Department of Labor. (2009). Customer service representatives. Occupational outlook handbook (2010-11 ed.). Retrieved from http://www.bls.gov/oco/ocos280.htm

Byrnes, J. (2005, January 10). Nail customer service. Harvard Business School Working Knowledge. Retrieved from http://hbswk.hbs.edu/

Carr, N. (2003, May). IT doesn't matter. Harvard Business Review, 81(5), 41-49. doi:10.1109/EMR.2004.25006

Cenfetelli, R. T., & Benbasat, I. (2002, June). Measuring the e-commerce life cycle. Proceedings of the Tenth European Conference on Information Systems. Gdansk, Poland.

Cespedes, F. V. (1995). Concurrent marketing. Boston, MA: Harvard Business School Press, 243-266.

Chatzkel, J. (2002). Conversation with Alex Bennet, former deputy CIO for enterprise integration at the U.S. Department of the Navy. Journal of Knowledge Management, 6(5), 434. doi:10.1108/13673270210450397

Chief of Naval Operations. (2007). Memorandum for distribution: Navy distance support policy. Retrieved from http://www.anchordesk.navy.mil/training/DistanceSupportPolicy20032207.pdf

Christopher, M., Payne, A., & Ballantyne, D. (1991). Relationship marketing. Oxford, UK: Butterworth-Heinemann.

Clark, G. (1988, November). Chairman's address. Proceedings of the First International Conference on After-Sales Success, London, 29-30, 3-10. Retrieved from ABI/INFORM Global database.

Commander, Naval Surface Forces. (2005). Distance support help desk policy memorandum of agreement. Washington, DC: U.S. Department of Defense.

Commander, Space and Naval Warfare Systems Command. (2005). Distance support help desk policy (SPAWARINST 4792.1). Washington, DC: U.S. Department of Defense.

Commander, Space and Naval Warfare Systems Command. (2009). Team SPAWAR strategic plan 2010-2015. Washington, DC: U.S. Department of Defense.

Cooper, C. R., & Schindler, P. S. (2008). Business research methods (10th ed.). Boston, MA: McGraw-Hill.

Corder, J. L., & Foreman, T. C. (2009). Nonparametric statistics for non-statisticians: A step-by-step approach. New York, NY: Wiley.

Creative Research Systems. (2009). Sample size calculator. Retrieved from http://www.surveysystem.com/sscalc.htm

Creswell, J. W. (2009). Research design (3rd ed.). Thousand Oaks, CA: Sage.

Dabholkar, P. A., Shepherd, C. D., & Thorpe, D. I. (2000). A comprehensive framework for service quality: An investigation of critical conceptual and measurement issues through a longitudinal study. Journal of Retailing, 76(2), 139-173. doi:10.1016/S0022-4359(00)00029-4

Das, A. (2003). Knowledge and productivity in technical support work. Management Science, 49, 416-431. doi:10.1287/mnsc.49.4.416.14419

Davenport, T. H. (1994). Saving IT's soul: Human-centered information management. Harvard Business Review, 72(2), 119-131.

Davenport, T. H., & Klahr, P. (1998). Managing customer support knowledge. California Management Review, 40(3), 195-208. Retrieved from ABI/INFORM Global database.

Davidow, W. H. (1986). Marketing high technology: An insider's view. New York, NY: The Free Press.


Davis, W. A. (2007). Voice of the customer: Listening through the call center (Doctoral dissertation, Capella University). Retrieved from ProQuest Dissertations & Theses database. (AAT 3284056)

Doomun, R., & Jungum, N. V. (2008). Business process modeling, simulation, and reengineering: Call centres. Business Process Management Journal, 14, 838-848. doi:10.1108/14637150810916017

Duffy, J. (2001). The tools and technologies needed for knowledge management. Information Management Journal, 35(1), 64-68. Retrieved from ABI/INFORM Global database.

Eaton, J., & Struthers, C. (2002). Using the Internet for organizational research: A study of cynicism in the workplace. CyberPsychology & Behavior, 5, 305-313. doi:10.1089/109493102760275563

Ehrlich, K., & Cash, D. (1994). Turning information into knowledge: Information finding as a collaborative activity. Proceedings of the First Annual Conference on the Theory and Practice of Digital Libraries (pp. 1-8). College Station, TX: Texas A&M University.

Feeny, D. (2001). Making business sense of the e-opportunity. MIT Sloan Management Review, 42(2), 41-51. Retrieved from ABI/INFORM Global database.

Feinberg, R., Kim, I., Hokama, L., de Ruyter, K., & Keen, C. (2000). Operational determinants of caller satisfaction in the call center. International Journal of Service Industry Management, 11(2), 131-141. doi:10.1108/09564230010323633

Field, A. P. (2000). Research methods I: SPSS for Windows, part 2. Retrieved from http://www.sussex.ac.uk/Users/andyf/teaching/first/eda.pdf

Froehle, C. M. (2006). Service personnel, technology, and their interaction in influencing customer satisfaction. Decision Sciences, 37(1), 5-38. doi:10.1111/j.1540-5414.2006.00108.x

Gardner, M. J., & Altman, D. G. (Eds.). (1989). Statistics with confidence (pp. 103-105). London, UK: BMJ Publishing Group.

Goffin, K. (1998). Customer support and new product development: An exploratory study. Journal of Product Innovation Management, 15(1), 42-56. doi:10.1111/1540-5885.1510042

Goffin, K. (1999). Customer support: A cross-industry study of distribution channels and strategies. International Journal of Physical Distribution & Logistics Management, 29(6), 374-397. doi:10.1108/09600039910283604

Grebner, S., Semmer, K., Lo Faso, L., Gut, S., Kalin, W., & Elfering, A. (2003). Working conditions, well-being, and job-related attitudes among call centre agents. European Journal of Work and Organizational Psychology, 12, 341-365. doi:10.1080/13594320344000192

Hair, J. F., Anderson, R. E., Tatum, R. L., & Black, W. C. (1998). Multivariate data analysis (5th ed.). Upper Saddle River, NJ: Prentice Hall.

Han, F. (2001). Understanding knowledge management. Public Manager, 30(2), 34-35. Retrieved from ABI/INFORM Global database.

Hanley, S. (2001). Metrics guide for knowledge management initiatives, Chief Information Officer, Department of the Navy. Washington, DC: Department of the Navy.

Hillmer, S., Hillmer, B., & McRoberts, G. (2004). The real costs of turnover: Lessons from a call center. Human Resource Planning, 27(3), 34-41. Retrieved from ABI/INFORM Global database.

Hirschheim, R., Schwarz, A., & Todd, P. (2006). A marketing maturity model for IT: Building a customer-centric IT organization. IBM Systems Journal, 45(1), 181-184. doi:10.1147/sj.451.0181

Honeycutt, J. (2000). Knowledge management strategies. Redmond, WA: Microsoft.

Hughes, E. C. (1971). The sociological eye. Chicago, IL: Aldine-Atherton.

Hull, D. L., & Cox, J. F. (1994). The field service function in the electronics industry: Providing a link between customers and production/marketing. International Journal of Production Economics, 37(1), 115-126. Retrieved from ABI/INFORM Global database.

Ives, B., & Vitale, M. R. (1988). After the sale: Leveraging maintenance with information technology. MIS Quarterly, 12(1), 7-21. doi:10.2307/248797

Ives, B., & Mason, R. O. (1990). Can information technology revitalize your customer service? The Academy of Management Executive, 4(4), 52-69. Retrieved from ABI/INFORM Global database.

Jaiswal, A. K. (2008). Customer satisfaction and service quality measurement in Indian call centres. Managing Service Quality, 18(4), 405-416. doi:10.1108/09604520810885635


Jones, T. O., & Sasser, E. W., Jr. (1995). Why satisfied customers defect. Harvard Business Review, 73(6), 88-99. doi:10.1061/(ASCE)0742-597X(1996)12:6(11.2)

Kaplan, R. S., & Norton, D. P. (1996). The balanced scorecard. Boston, MA: Harvard Business School Press.

Keiningham, T. L., Goddard, M. K. M., Vavra, T. G., & Iaci, A. J. (1999). Customer delight and the bottom line. Marketing Management, 8(3), 57-63.

Khatibi, A. A., Ismail, H., & Thyagarajan, V. (2002). What drives customer loyalty: An analysis from the telecommunications industry. Journal of Targeting, Measurement, and Analysis for Marketing, 11(1), 34-44. doi:10.1057/palgrave.jt.5740065

Knapp, D. (1999). A guide to customer service skills for the help desk professional. Cambridge, MA: Course Technology Group.

Knecht, T., Lezinski, R., & Weber, F. A. (1993). Making profits after the sale. The McKinsey Quarterly, 4, 79-86. Retrieved from ABI/INFORM Global database.

Koenig, M. (1999). Education for knowledge management. Information Services & Use, 19(1), 17-31. doi:10.1108/00242530310456979

Laudon, K. C., & Laudon, J. P. (2003). Management information systems: Managing the digital firm (8th ed.). Upper Saddle River, NJ: Pearson Prentice-Hall.

Lele, M. M., & Karmarkar, U. S. (1983). Good product support is smart marketing. Harvard Business Review, 61(6), 124-132. Retrieved from ABI/INFORM Global database.

Lele, M. M., & Sheth, J. N. (1987). The customer is key. New York, NY: Wiley.

Lenz, V. (1999). The Saturn difference: Creating customer loyalty in your company. New York, NY: Wiley.

Lohr, S. (1998). Sampling: Design and analysis. Pacific Grove, CA: Duxbury.

Longoria, J. (1996). Focus on: Help desk applications and technologies. Telemarketing and Call Center Solutions, 148, 52-55. Retrieved from ABI/INFORM Global database.

Loomba, A. P. S. (1996). Linkages between product distribution and service support functions. International Journal of Physical Distribution & Logistics Management, 26, 4-22. doi:10.1108/09600039610116486

Mahesh, V. S., & Kasturi, A. (2006). Improving call centre agent performance: A UK-India study based on the agents' point of view. International Journal of Service Industry Management, 17(2), 136. doi:10.1108/09564230610656971

Maltz, A., & Maltz, E. (1998). Customer service in the distributor channel: Empirical findings. Journal of Business Logistics, 19, 103-129. Retrieved from ABI/INFORM Global database.

Mathaisel, D. F., Cathcart, T. P., & Comm, C. L. (2004). A framework for benchmarking, classifying, and implementing best sustainment practices. Benchmarking, 11(4), 403-417. doi:10.1108/14635770410546791

McCrea, E., & Betts, S. (2008). Failing to learn from failure: An exploratory study of corporate entrepreneurship outcomes. Academy of Strategic Management Journal, 7, 111-132. Retrieved from ABI/INFORM Global database.

McLaughlin, G. (2009). Introduction to multivariate statistical methods: Checking assumptions [PowerPoint]. Presentation to Applied Multivariate Modeling course. Retrieved from http://courseroom2.capella.edu

Meltzer, M. (2001). A customer relationship management approach: Integrating the call centre with customer information. Journal of Database Marketing, 8, 232-243. doi:10.1057/palgrave.jdm.3240039

Melville, N., Kramer, K., & Gurbaxani, V. (2004). Information technology and organizational performance: An integrative model of IT business value. MIS Quarterly, 28(2), 283-322. Retrieved from ABI/INFORM Global database.

Mitchell, K. D. (2002). Collaboration and information sharing: A ROI perspective. Public Manager, 31(1), 59-61. Retrieved from ABI/INFORM Global database.

Mohamed, M. S., Ribière, V. M., O'Sullivan, K. J., & Mohamed, M. A. (2008). The restructuring of the information technology infrastructure library (ITIL) implementation using knowledge management framework. VINE, 38, 315-333. doi:10.1108/03055720810904835

Monger, J., & Keen, C. (2004). Case study: Agent-level customer feedback and the impact on first contact resolution (White paper). Customer Relationship Metrics. Retrieved from http://www.metrics.net/pdf/Agent_level_feedback_and_FCR.pdf

Myburgh, S. (2002). Strategic information management: Understanding a new reality. The Information Management Journal, 36(1), 36-40. Retrieved from ABI/INFORM Global database.


Nakata, C., Zhu, Z., & Kraimer, M. (2008). The complex contribution of information technology capability to business performance. Journal of Managerial Issues, 20(4), 485-506. Retrieved from ABI/INFORM Global database.

Naval Sea Systems Command. (2009). Distance support source of support (SOS) business rules. Washington, DC: U.S. Department of Defense.

Negash, S. (2001). Web-based customer support interface: Determinants and indicators (Doctoral dissertation, The Claremont Graduate University). Retrieved from ProQuest Dissertations & Theses database. (AAT 3012307)

Nelson, K. M., Nadkarni, S., Narayanan, V. K., & Ghods, M. (2000). Understanding software operations expertise: A revealed causal mapping approach. MIS Quarterly, 24, 475-507. doi:10.2307/3250971

Newell, F. (2000). Loyalty.com: Customer relationship management in the new era of internet marketing. New York, NY: McGraw-Hill.

Newsted, P. R., Huff, S. L., & Munro, M. C. (1998). Survey instruments in information systems. MIS Quarterly, 22(4), 553-554. doi:10.2307/249555

Norusis, M. J. (2009). SPSS 16.0 statistical procedures companion (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Office of the President of the United States. (2002). The President's management agenda, fiscal year 2002 (p. 13). Washington, DC: Office of Management and Budget.

Oliver, R. L. (1980). A cognitive model of the antecedents and consequences of satisfaction decisions. Journal of Marketing Research, 17, 460-469. doi:10.2307/3150499

Omar, A., Sawy, E., & Bowles, G. (1997). Redesigning the customer support process for the electronic economy: Insights from storage dimensions. MIS Quarterly, 21(4), 457-483. Retrieved from ABI/INFORM Global database.

Pan, S. L., & Scarborough, H. (1999). Knowledge management in practice: An exploratory case study. Technology Analysis & Strategic Management, 11, 359-374. doi:10.1080/095373299107401

Parasuraman, A., Berry, L. L., & Zeithaml, V. A. (1991). Refinement and assessment of the SERVQUAL. Journal of Retailing, 67, 420-449. Retrieved from ABI/INFORM Global database.


Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1985). A conceptual model of service quality and its implications for future research. Journal of Marketing, 49(4), 41-50. doi:10.2307/1251430

Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1988). SERVQUAL: A multiple-item scale for measuring customer perceptions of service quality. Journal of Retailing, 64, 12-40. Retrieved from ABI/INFORM Global database.

Phillips, D. C., & Burbules, N. C. (2000). Postpositivism and educational research. Lanham, MD: Rowman & Littlefield.

Piccoli, G., Spalding, B. R., & Ives, B. (2001). The customer-service life cycle: A framework for improving customer service through information technology. Cornell Hotel and Restaurant Administration Quarterly, 42(3), 38-45. doi:10.1016/S0010-8804(01)81023-5

Pitt, L. F., Watson, R. T., & Kavan, C. B. (1995). Service quality: A measure of information systems effectiveness. MIS Quarterly, 19, 173. doi:10.2307/249687

Pyrczak, F. (1995). Making sense of statistics: A conceptual overview (2nd ed.). Los Angeles, CA: Pyrczak.

Read, B. (2002, May). The ultimate balancing act. Call Center Magazine, 15(5), 34-44.

Ribiere, V., Park, H., & Schulte, W. D. (2004). Critical attributes of organizational culture that promote knowledge management technology success. Journal of Knowledge Management, 8(3), 106-117. Retrieved from ABI/INFORM Global database.

Robinson, G., & Morley, C. (2006). Call centre management: Responsibilities and performance. International Journal of Service Industry Management, 17, 284-300. doi:10.1108/09564230610667122

Roos, I., & Edvardsson, B. (2008). Customer-support service in the relationship perspective. Managing Service Quality, 18(1), 87-107. doi:10.1108/09604520810842858

Ross, M. V., & Schulte, W. D. (2005). Knowledge management in a military enterprise: A pilot case study of the space and warfare systems command. In M. Stankosky (Ed.), Creating the discipline of knowledge management: The latest in university research (pp. 157-170). London, England: Elsevier/Butterworth-Heinemann.

Saxby, D. (2005, January/February). Invest in training your CSRs. Electric Light & Power, 83(1), 58.

Schulte, W. D., & Sample, T. (2006). Efficiencies from knowledge management technologies in a military enterprise. Journal of Knowledge Management, 10(6), 39-49. doi:10.1108/13673270610709206

Schwartz, L., Ruffins, L., & Petouhoff, N. (2007). Reinventing your contact center: A manager's guide to successful multichannel CRM. Upper Saddle River, NJ: Pearson Education.

Sherman, G. J. (2002). Transforming to e-gov and e-defense. The Armed Forces Comptroller, 47(3), 36-38. Retrieved from ABI/INFORM Global database.

Simon, H. A. (1981). The sciences of the artificial. Cambridge, MA: MIT Press.

Singh, J. (2000). Performance productivity and quality of frontline employees in service organizations. Journal of Marketing, 64(2), 15-34. doi:10.1509/jmkg.64.2.15.17998

Singh, V. (2008). Knowledge creation, sharing, and reuse in online technical support for open source software (Doctoral dissertation, University of Illinois at Urbana-Champaign). Retrieved from ProQuest Dissertations & Theses database. (AAT 3314895)

Smith, E. A. (2001). The role of tacit and explicit knowledge in the workplace. Journal of Knowledge Management, 5, 311-322. doi:10.1108/13673270110411733

Spencer-Matthews, S., & Lawley, M. (2006). Improving customer service: Issues in customer contact management. European Journal of Marketing, 40(1/2), 218-232. doi:10.1108/03090560610637392

Spreitzer, G. M., Cohen, S. G., & Ledford, G. E. (1999). Developing effective self-managing work teams in service organizations. Group and Organization Management, 24, 340-366. doi:10.1177/1059601199243005

Stahr, D. (2010). The story behind DS metrics: Metrics requests from people who want to know [PowerPoint slides]. Retrieved from https://secure.anchordesk.navy.mil/anchordesk/discuss.nsf

Stoll, C. (2004). Writing the book on knowledge management. Association Management, 56(4), 56-62. Retrieved from ABI/INFORM Global database.

Tefft, R. J. (2002). Army medical department knowledge management. Army AL&T, 2(1), 10-11.

Teresko, J. (1994, February). Service now a design element. Industry Week, 243(3), 51-52. Retrieved from ABI/INFORM Global database.

Tourniaire, F., & Farrell, R. (1997). The art of software support. Upper Saddle River, NJ: Prentice Hall.

Trepper, C. H. (2000). E-commerce strategies. Redmond, WA: Microsoft.

Trochim, W. M. (2006a). Descriptive statistics. Retrieved from http://www.socialresearchmethods.net/kb/statdesc.php

Trochim, W. M. (2006b). The qualitative debate. Retrieved from http://www.socialresearchmethods.net/kb/qualdeb.php

Trochim, W. M. (2006c). Unobtrusive measures. Retrieved from http://www.socialresearchmethods.net/kb/qualdeb.php

Tuckman, B. (1988). Conducting educational research (3rd ed.). New York, NY: Harcourt Brace Jovanovich.

U.S. Fleet Forces Command. (2003). Serial N05450, subject: Fleet technical assistance (FTA) policy, 29 December 2003 [Naval message].

U.S. Fleet Forces Command. (2008). Serial N01/026, subject: Distance support capabilities based assessment plan, 20 March 2008 [Naval message].

U.S. Navy. (n.d.). Navy organization: A look at the organization of the Navy. Retrieved January 3, 2010, from http://www.navy.mil/navydata/organization/org-top.asp

Vehovar, V., Manfreda, K., & Batagelj, Z. (2001). Sensitivity of electronic commerce measurement to the survey instrument. International Journal of Electronic Commerce, 6(1), 31-51.

Wallace, C. M., Eagleson, G., & Waldersee, R. (2000). The sacrificial HR strategy in call centers. International Journal of Service Industry Management, 11, 174-184. doi:10.1108/09564230010323741

Winter, D. C. (2008). Posture statement of Honorable Donald C. Winter, Secretary of the Navy. Retrieved from http://www.navy.mil/navydata/people/secnav/winter/2008_posture_statement2.pdf

Wolf, C. G., Alpert, S. R., Vergo, J. G., Kozakov, L., & Doganata, Y. (2004). Summarizing technical support documents for search: Expert and user studies. IBM Systems Journal, 43, 564-586. doi:10.1147/sj.433.0564


Zahra, S. (1991). Predictors and financial outcomes of corporate entrepreneurship: An exploratory study. Journal of Business Venturing, 6(4), 259-285. doi:10.1016/0883-9026(91)90019-A

Zeithaml, V. A. (1988). Consumer perceptions of price, quality, and value: A means-end model and synthesis of evidence. Journal of Marketing, 52(3), 2-22. doi:10.2307/1251446

Zeithaml, V. A., & Bitner, M. J. (2003). Services marketing (3rd ed.). Boston, MA: McGraw-Hill Irwin.

Zeithaml, V. A., Parasuraman, A., & Berry, L. L. (1990). Delivering quality service: Balancing customer perceptions and expectations. New York, NY: The Free Press.

Zemke, R. (2000). The best customer to have is the one you've already got. The Journal for Quality and Participation, 23(2), 33-35. Retrieved from ABI/INFORM Global database.

Zemke, R. (2002). Managing the employee connection. Managing Service Quality, 12(2), 73-76. doi:10.1108/09604520210421374

Zimmerman, D. (1998). Invalidation of parametric and nonparametric statistical tests by concurrent violation of two assumptions. Journal of Experimental Education, 67(1), 55-68. doi:10.1080/00220979809598344


APPENDIX A: CUSTOMER SUPPORT REQUEST FORM


APPENDIX B: CUSTOMER SATISFACTION SURVEY


APPENDIX C: MOST ACTIVE NAVAL REGIONAL SUPPORT CENTERS Top 25 Sources of Support for Navy/Marine Corps Customers SPAWAR ITC NEW ORLEANS LA SPAWARSYSCEN CHARLESTON SC SPAWARSYSCEN NORFOLK VA GLOBAL DISTANCE SUPPORT CENTER, LOGISTICS NAVICP LSC NAVSEALOGCEN MECHANICSBURG PA FTSCLANT NORFOLK VA DLA NAVSURFWARCENDIV CRANE IN SPAWARSYSCEN SAN DIEGO CA COMFISCS FTSCPAC SAN DIEGO CA NEMAIS NAVSURFWARCEN SHIPSYSENGSTA PHILADELPHIA PA FISC NORFOLK VA RRAM NAVSURFWARCENDIV INDIANHEAD MD FISC SAN DIEGO CA NAVSURFWARCENDIV PORT HUENEME CA NAVUNSEAWARCENDIV KEYPORT WA COMNAVAIRSYSCOM PATUXENT RIVER MD COMNAVSEASYSCOM WASHINGTON DC GDSC TECHNICAL NAVICP MECHANICSBURG PA

