
Demonstrating Service Desk Value Through More Meaningful Metrics

Written by

Daniel Wood, Head of Research, Service Desk Institute

In collaboration with

Cherwell Software

Table of Contents

Introduction

Key Findings

Part One – Demonstrating Value
  • 1. Which of the following performance metrics do you currently measure?
  • 2. What one metric is most important to the Service Desk?
  • 3. Who is responsible for producing/measuring Service Desk metrics and reporting on them?
  • 4. How often do you produce Service Desk metrics reports?
  • 5. Do you communicate/publish your metrics targets?
  • 6. To whom do you present your Service Desk metrics?
  • 7. Do customers/end users ask to see metrics reports?
  • 8. Sharing metrics with the business has helped to improve the Service Desk’s relationship with the business
  • 9. Does the business make decisions based on the metrics you produce?

Part Two – Metrics and the Business
  • 1. What metrics/information does your business ask you for?
  • 2. What one metric is most important to the business?
  • 3. What information does your business want to see you produce more of?
  • 4. Which of these cost based metrics do you currently measure?
  • 5. Which, if any, of these 5 business value metrics do you currently measure?

Part Three – Business Value Metrics
  Presenting Business Value Metrics
  Moving Beyond the Basics – Delivering True Business Value

Part Four – Metrics Best Practice
  The SDI 17 best practice metrics
  Displaying Metrics Information – SDC best practice guidelines

Why do Metrics Matter?

Conclusion

Part Five – Interviews


Introduction

Welcome to the SITS13 – The Service Desk & IT Support Show – white paper, produced in collaboration with SDI and Cherwell Software. The motivation for this white paper started with an understanding that metrics have always been a vital component of our Service Desks. Since their very genesis, Service Desks have sought ways to understand their performance and set targets. Today, the story of metrics is one of reporting on a wide range of performance measures, each designed to demonstrate that the Service Desk is delivering value and quality to its organisation. Metrics also provide management with the quantitative data they need to make accurate and reasoned decisions – there is widespread adherence to the mantra that ‘you can’t manage what you can’t measure’. However, in today’s IT world it is about more than just the ability to manage; it is increasingly about ‘value’ and providing metrics to substantiate this to your organisation.

This is a research report of two halves: the first is a state of the nation survey, which offers a view of metrics right here and now. What metrics are important to Service Desks? How are they using metrics in their everyday Service Desk life? Who are they sharing metrics information with and why? It also looks to the future: what should we be looking at now to ensure we stay relevant and meaningful in the years and decades to come? Which business related metrics are offering fresh insights and perspectives? What tools are available now to achieve this? This white paper will reveal the answers and aims to prepare you for the future.

The second half provides additional best practice metrics guidance from the Service Desk industry’s leading authority, the Service Desk Institute, accompanied by interviews with the respondents to our online survey.

Executive Summary

This white paper identifies the range and use of metrics today in the Service Desk industry. The results for this survey, which ran in January and February 2013, were obtained from an online survey sent to more than 5,000 ITSM professionals. Additional evidence and opinion was gleaned from personal interviews conducted with Service Desk professionals and consultants by the author - their insights provide valuable context to the quantitative data displayed in this white paper.

This research study reveals there is a widespread acceptance and adoption of metrics, and that quantitative information is playing a vital role in the decision making process. It is also shown that many Service Desks have adopted a variety of industry best practice metrics, but there is a wide range of opinion on which metric is the most important. This can be explained by the fact that every Service Desk is different and seeks a range of information to enable it to make important decisions surrounding resourcing, service delivery and customer satisfaction. The interviews reveal that Service Desk professionals are interested in enhancing and broadening the range of metrics they measure and report on and are keen to produce information of interest to the business: there is a clear desire to engage and work with the business to drive performance forward.

We also see that metrics are changing the way organisations work by making information available to a wider range of people than ever before. It is becoming clear that adoption of business value metrics will form a key component of the transition from traditional Service Desks towards business service centres.

Key Findings

  • The most important metric to both the Service Desk and the business: ‘resolved within SLA’
  • The business wants to see more information on performance against SLAs
  • The most common metric businesses ask for is ‘resolved within SLA’
  • For most Service Desks, the Service Desk Manager is responsible for producing metrics reports
  • Service Desk reports are most frequently produced on a monthly basis
  • Metrics reports are usually presented to senior management
  • Only 28 per cent of customers ask to see metrics reports
  • 53 per cent believe sharing metrics reports has improved the Service Desk’s relationship with the business
  • 50 per cent of businesses make decisions based on metrics
  • The most common cost based metric is ‘cost of IT operation’
  • 40 per cent of respondents do not currently measure any business value metrics


Part One

Demonstrating Value

1. Which of the following performance metrics do you currently measure? (Choose all that apply)

Metric – %
Number of incidents and service requests – 92
% of incidents resolved within SLA – 65
Incident resolution time – 63
Average time to respond – 54
Backlog/open incidents – 53
First contact resolution rate – 47
Abandon rate – 43
Comparison of SLA goals to actuals – 42
Average resolution time by priority – 25
Average resolution time by incident category – 20
Re-opened incident rate – 17
Remote control and self-help usage – 14
Hierarchical escalations – 13
Functional escalations – 10
Cost per incident or service request – 10
Total cost of ownership – 6
Relative cost per incident per channel – 4
None – 2

The above 17 measures are taken from SDI’s international best practice standard. The standard, created in collaboration with industry experts from across the globe, prescribes the metrics the industry should measure in order to deliver value to the business and drive performance. A comprehensive guide to these 17 measures is included in part four of this white paper.

The results to this question reveal that 92 per cent of Service Desks are currently measuring the number of incidents and service requests received. We would expect this metric to be the most ubiquitous as, without it, the Service Desk would be unable to resource effectively or ensure it was operating at its correct capacity. The number of incidents and service requests signifies, in very broad terms, the ‘work’ the Service Desk gets through on a daily basis – trended over time, this metric will reveal if the Service Desk is becoming busier and if more or less resource is needed to respond to customer interactions in a timely fashion.

Second on the list is percentage of incidents resolved within SLA. This metric is the measure of the Service Desk’s adherence to the contract that exists between the Service Desk and the business. It is this metric that demonstrates whether the Service Desk is delivering on its contractual agreements, and is of great interest to the Service Desk and the business as both have a vested interest. For many Service Desks, this metric – along with customer satisfaction – is their key performance indicator.

Rounding out the top three is incident resolution time with 63 per cent of respondents choosing this option. As with incidents resolved within SLA, this metric enables Service Desks to understand how long they are taking to resolve incidents and thus can make effective decisions surrounding areas such as resourcing, performance, and service improvement amongst others.

“In today’s competitive economy every Service Desk should know its cost base and seek to make itself more effective and productive.”

HOWARD KENDALL, FOUNDER, SDI

Interestingly, all the cost-based metrics feature in the bottom four, with only 10 per cent measuring the cost per incident or service request. This low percentage marries with SDI’s own historical research, which highlights that only a small percentage of Service Desks understand their cost per call or email. Many observers find it implausible that so few Service Desks measure these metrics given that spending and costs are constantly under the financial microscope. The explanation for this low figure, as supported by the evidence from our interviewees, is that Service Desks find it very difficult to get a firm handle on their costs, although many would like a better understanding. A simple formula for calculating cost-based metrics is included in part four of this white paper.
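As a rough illustration of the kind of simple formula referred to above – not necessarily the exact formula given in part four – cost per incident or service request is commonly derived by dividing the total cost of the support operation over a period by the number of contacts handled in that period. The figures below are entirely hypothetical:

```python
# Hypothetical figures - a sketch of the common cost-per-contact calculation,
# not the specific formula from part four of this paper.
monthly_operating_cost = 42_000.0   # total Service Desk cost for the month (assumed)
incidents_and_requests = 3_500      # incidents and service requests handled (assumed)

# Cost per incident = total operating cost / total contacts in the same period.
cost_per_incident = monthly_operating_cost / incidents_and_requests
print(f"Cost per incident/service request: {cost_per_incident:.2f}")
```

Even a rough figure like this gives the Service Desk a baseline to trend over time and compare against industry benchmarks.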


2. What one metric is most important to the Service Desk? (Select one only)

Metric – %
Resolved within SLA – 24
Customer satisfaction – 19
First Time Fix Rate/First Contact Resolution – 13
Average Time to Respond – 11
Number of incidents – 8
Number of open incidents – 7
Cost based metrics (cost of operation, cost per call etc.) – 6
Average resolution time – 5
Availability of services – 2
Number of incidents resolved – 2
Abandon rate – 1
Number of incidents per service – 1
Number of incidents escalated to problems – 1

As shown, resolved within SLA and customer satisfaction topped the list of the most important metrics for the Service Desk. Comments accompanying these two metrics noted that resolved within SLA is the ultimate assessment of Service Desk performance, as it demonstrates the ability to respond effectively and enables users to get back to work quickly and efficiently.

Customer satisfaction was considered the culmination of all other metrics – if performance is strong across a range of measures, customer satisfaction should be good as well. As one respondent noted, it is the customer who passes the overall judgement on the service provided.

Rounding out the top three is First Time Fix Rate/First Contact Resolution (incidents resolved whilst speaking to the user, without having to ask for additional help or assistance). One comment described this as the most important measure for the Service Desk because it represents the cheapest and most cost effective method of support for the business, whilst also ensuring maximum user productivity.


3. Who is responsible for producing/measuring Service Desk metrics and reporting on them? (Select one only)

Job title – %
Service Desk Manager – 29
Multiple individuals – 24
Other, please enter role – 10
Service Delivery Manager – 10
Service Desk Team Leader – 9
IT Manager – 9
Not one single person – 5
Service Desk Analyst – 3
Business Relationship Manager – 2

The most popular option was Service Desk Manager, a result we would expect given that, in many cases, it is they who have overall control, visibility and responsibility for the Service Desk operation. However, the results above also show there are often multiple individuals involved in creating metrics reports. This will mostly be where individuals have different areas of responsibility, or where the reporting function would be too time-consuming for one person alone. One of the biggest issues Service Desks have with their Service Desk solution is that it is difficult and laborious to extract the information and data required to produce metrics reports.

The results also show that responsibility for producing metrics reports can fall to people not included in the above categories. For those who selected the ‘other’ option, job titles included:

  • Customer Support Manager
  • Knowledge and Reporting Manager
  • Awareness and Service Improvement Team
  • Knowledge Manager
  • Customer Service Manager
  • Head of IT
  • Problem and Reporting Analyst
  • IT Contractors
  • Service Support Manager
  • Service Delivery Coordinator
  • Operations Manager


4. How often do you produce Service Desk metrics reports? (Select one only)

Frequency – %
On an ad-hoc basis – 10
Daily – 11
Weekly – 27
Fortnightly – 2
Monthly – 46
Quarterly – 3
Every 6 months – 1

SDI’s Service Desk standards recommend that metrics reports are produced on a monthly basis and disseminated as far and wide throughout the business as possible. The results above show that monthly is the most common frequency, although over a quarter of respondents produce metrics reports on a weekly basis.

Of course, the variety and range of information may differ between the weekly and monthly reports, and they may be shared with different groups. For example, the weekly report may be used just within the Service Desk to highlight that week’s performance and where targets were hit or missed. The monthly report may delve into more detail, include a broader range of metrics, and may be presented with a different audience in mind.

As with all metrics, the key is to consider how the reports are used and what decisions can be made on the strength of the available data. Many long-term strategic decisions can only be made with a credible amount of historical data trended over time.

5. Do you communicate/publish your metrics targets?

Every metric the Service Desk records should have a target or goal attached to it. Without a target, it is difficult to accurately gauge performance and trigger alerts if certain metrics are about to breach targets. Targets should be intelligent – reviewed and adjusted in line with Service Desk performance – and aspirational, but not to the point that they damage morale if they are consistently missed. Used correctly, targets can be a great motivator and provide something tangible for the whole team to aim for.

The results show that just two-thirds of Service Desks communicate and publish their targets. Targets provide an easy way to assess Service Desk performance and are helpful in making sense of data.

It is troubling that 32 per cent of Service Desks do not communicate metrics targets. The Service Desk needs to be open and visible if it is to become a trusted business partner.

(Chart: Yes – 68%; No – 32%)


6. To whom do you present your Service Desk metrics? (Choose all that apply)

Audience – %
IT Director – 46
General senior management – 44
Internal customers – 30
Just within IT – 30
Executive team – 28
Just within the Service Desk – 27
CIO – 21
Internal/external customers – 16
External customers – 6

It is interesting that just under a third of respondents (30 per cent) indicated that metrics reports are only shared within IT, and 27 per cent said just within the Service Desk. This raises interesting questions about who the data is produced for and the level of interest audiences have in what is presented. Many Service Desks would say that internal and/or external customers are not interested in seeing Service Desk metrics. This might be true, but Service Desks should consider what information the customer might be interested in and how to present that information.

Part three of this white paper looks to the future of Service Desk metrics and examines the metrics that will provide the business with a clear understanding of Service Desk performance and how the Service Desk adds value to the business. For metrics to evolve and for the Service Desk to form tighter business relationships, metrics need to offer information the business, its employees and its customers find genuinely interesting and useful. Sharing this information will help to drive engagement levels, and it will be increasingly important for all parties to have visibility of relevant and important Service Desk information. This will move the Service Desk forward and break down the barriers that currently confine it.

7. Do customers/end users ask to see metrics reports?

(Chart: Yes – 28%; No – 72%)

Following on from question six, the majority of customers/end users do not ask to see Service Desk reports. This could be due to a general lack of engagement with the user population or, as suggested above, because the Service Desk does not currently produce information that is useful or of interest to its customers.


8. Sharing metrics with the business has helped to improve the Service Desk’s relationship with the business…

(Chart: 53 per cent agree, 35 per cent are unsure, 12 per cent disagree)

In essence, this is the crux of why the Service Desk wants to share information with the business – it is keen to improve its relationship and become a trusted business partner. As shown, by sharing information, 53 per cent of Service Desks believe it has helped to improve the Service Desk’s relationship with the business. This is an encouraging result and demonstrates the benefits that can be realised through communicating metrics in the correct way. It also shows there is a level of demand and interest for metrics from the business and that the business is keen to obtain insight into the Service Desk’s performance. It is vital for Service Desks to constantly and consistently engage to get closer to the business and to create opportunities to share feedback and recommendations.

Also of significance is the 35 per cent who said they are unsure. These respondents are not certain that sharing information is having any discernible benefits. For Service Desks in this position, it is worth considering how information is shared and whether there is any support or guidance available to help the business make sense of it – is the business asking for certain information or is the Service Desk simply providing information it thinks the business will find useful? This is an important question to answer as real improvements in the business/Service Desk relationship will only come when there are clear communication channels and an inherent understanding of the demands from both parties.

9. Does the business make decisions based on the metrics you produce?

(Chart: Yes – 50%; No – 50%)

A clear divide, and an important result, as it demonstrates that only half of businesses use Service Desk metrics as a basis for decision making. However, it is important to note that the information provided by the Service Desk is helping to improve the decision-making process. One of the most important business decisions based on metrics is resourcing: many Service Desks rely on metrics to help make the business case for more staff or further investment.


Part Two

Metrics and the Business

Part one of this white paper focused on the Service Desk’s ability to demonstrate its value and share this information with the business – part two moves on to the next step: examining metrics from a business perspective. The business may be interested in the same metrics the Service Desk currently measures, or it may want measurements that help it increase its understanding and make key decisions.

1. What metrics/information does your business ask you for? (Please check all that apply)

[Chart: metrics/information the business asks for]

These results show there is a strong business demand for metrics, with only 14 per cent stating the business does not ask for any metrics. The chart above identifies that the business is primarily interested in the Service Desk’s performance, with 80 per cent of respondents selecting this option. Second on the list is customer satisfaction, a key performance measure for the Service Desk as it is a true litmus test of the service provided to customers. These results show that the business is interested in understanding what customers think of the service delivered by the Service Desk. Often, other KPIs will provide a useful accompaniment to customer satisfaction – if customer satisfaction is low one month, this may correlate with a corresponding increase in call volumes and/or a lower first contact resolution rate.


“This will become more and more important as technology is in almost all products and services. Service Desk data shows what outcomes are being implemented and can lead to service improvements.”

HOWARD KENDALL, FOUNDER, SDI


2. What one metric is most important to the business? (Select one only)

Metric – %
Resolved within SLA – 25
Customer satisfaction – 19
Cost based metrics (cost of support operation, cost per call etc.) – 18
Availability of services – 11
Average resolution time – 8
Average time to respond – 7
Number of incidents/service requests – 5
First Time Fix Rate – 3
Lost service hours – 2
Abandon rate – 1
Backlog data – 1

This question produced some interesting results. The top two choices are the same as those the Service Desk chose as most important to itself, so there is unity and a common understanding of the important measures. However, 18 per cent of respondents stated that cost based metrics were the most important measure to the business, whereas for the Service Desk this was only the seventh most popular option. Clearly, the business has different expectations and looks to different measures to understand the performance of the Service Desk. That said, it is also true that the business, like the Service Desk, looks at adherence to SLAs and the service delivered to customers as the key measures of Service Desk performance.

3. What information does your business want to see you produce more of? (Please choose the most important option only)

[Chart: information the business wants to see more of]

Clearly there is a demand from the business for performance based metrics, and it is keen to understand if the Service Desk is meeting its agreed SLAs. Following closely behind is customer satisfaction – the business wants to understand if it has a satisfied user population and if there is a good level of service delivered by the Service Desk.

Some comments include:

“Actually, the business is not particularly interested – it is us as a department that are interested in our own performance and define what we believe is good for the business.”

“The business is unsure what they want and what their requirements are.”

  • Marketing and sales
  • Balance of workload
  • User tips
  • Can’t get them to tell me
  • Breakdown of incidents


4. Which of these cost based metrics do you currently measure? (Please check all that apply)

[Chart: cost based metrics currently measured]

It is encouraging that 91 per cent of respondents measure cost based metrics in some form. The ‘cost of the IT operation’ is the most popular cost based metric as this is where the Service Desk budget is derived from and includes every cost associated with delivering service. For those that do not measure this cost, we can assume the responsibility for this function sits outside of IT, and it could be the case that the Service Desk does not have visibility of its operating budget.

Correlating closely with the results in part one of this white paper, only a small percentage of respondents measure cost per email, but many more measure cost per call. One result of this difference is that it becomes very difficult to assess the most cost effective channel for support. For example, by not measuring cost per channel, the Service Desk would have difficulty justifying investment in new technology such as live chat or social media, as it would not have a clear understanding of whether this offered a cheaper way of providing support. This is especially true for Service Desks that look to offer self-service and self-help – until you know the cost for each of your contact channels, it is very difficult to create a comprehensive business plan.
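As a sketch of the kind of per-channel comparison described above – using entirely hypothetical cost and volume figures – the calculation is simply each channel’s attributed cost divided by the contacts it handled:

```python
# Entirely hypothetical figures: cost attributed to each support channel
# and the number of contacts handled through it over the same period.
channels = {
    "phone": (18_000.0, 2_000),        # (attributed cost, contacts)
    "email": (9_000.0, 1_200),
    "self-service": (3_000.0, 900),
}

# Cost per contact per channel = channel cost / contacts via that channel.
for name, (cost, contacts) in channels.items():
    print(f"{name}: {cost / contacts:.2f} per contact")
```

A comparison like this is what lets a Service Desk argue, with numbers, that shifting contacts to a cheaper channel such as self-service would reduce the overall cost of support.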

5. Which, if any, of these 5 business value metrics do you currently measure? (Please check all that apply)

[Chart: business value metrics currently measured]

The metrics included above move beyond the realm of ‘traditional’ performance metrics – they look at the real value the Service Desk delivers to the business and the cost when IT fails. 37 per cent measure the business impact of IT failure; along with lost IT service hours and lost business hours, these metrics provide a true depiction of the value of the Service Desk operation. It is encouraging that 60 per cent of Service Desks are measuring business value metrics, as in the long term this will help to bridge the gap between the Service Desk and the business as they reach a mutual understanding of the metrics that are important and useful to both parties. More discussion of business value metrics is included in part three of this white paper.


Part Three

Business Value Metrics

Traditional metrics have focused on telling the business how good the Service Desk is by detailing the number of calls answered, first time fix rate, incidents resolved within SLA, and a diverse array of other measures. Business value metrics should comprise any measures that are beneficial to the business and provide a clear idea of performance and value.

Introducing business value metrics can be a huge leap of faith, as they move beyond measures of how well the IT department is doing. Business value metrics place IT’s business performance front and centre. Statistics such as how many business hours have been lost due to IT faults can be disconcerting and are perhaps something most Service Desks are not comfortable sharing. However, it is expected that more and more businesses will want access and visibility to these types of metrics, as they provide a crucial way to ascertain the value of the Service Desk and its place within the organisation.

Business value metrics also provide real benefit for the Service Desk, as they offer tangible data to support and augment business decisions. Justifications for extra budget or resource will be much more robust if supported by business value metrics. The idea is not to use the data to hide or disguise, but to use these measures as a platform for future improvements and advancements. It is clear there is a need to move beyond ‘us and them’: IT acts as a partner to, and enabler for, the business. Communicating value and metrics in a way that the business understands is a critical step in improving this relationship. Understand what the business needs in terms of information; ask what information would be useful to it. Establishing answers to these questions is a crucial step in building a bond between the business and the Service Desk.

What are business value metrics?

The metrics below offer some indication of the type of measures that should be considered when exploring business value metrics.

Lost IT service hours

This metric is important because it provides the business with a clear indication of how long IT services were unavailable to the business. It is an example of a metric that provides real value and insight to the business and gives clear indications of performance. It will also provoke debate and discussion about why hours were lost and what actions can be taken to prevent lost hours in the future.

Lost business hours

This provides a fuller picture of the impact of IT failure. Of course, not all businesses are entirely dependent on IT to function and operate. Like lost service hours, this metric provides the business with a clear understanding of the importance IT plays in its organisation. Lost business hours can then be further scrutinised to ascertain exactly how much revenue was lost due to IT failures.


Business Impact

This goes beyond the traditional metrics of saying how good the service provided actually is. By understanding that different areas of the business have different levels of importance and can be affected to a greater or lesser degree by IT failure, the Service Desk starts to create a mature and business focused view on the value it provides to the business. The example table below offers one such way to calculate the relative importance of each area of the business, and in turn, calculates the impact of lost IT availability.

Example – each IT service is weighted according to its business importance/value.

IT Services – Weighting – Lost minutes – Impact Rating
Website (external) – 20% – 300 – 6000
Server availability – 50% – 10 – 500
Email – 15% – 200 – 3000
Intranet – 10% – 30 – 300
Telephony – 5% – 350 – 1750

As the table shows, the biggest business impact came from the website being unavailable. However, even though the server was only down for 10 minutes across the month, by virtue of having the highest weighting it still accounted for an impact of 500. Looking at the business impact of the failure of each service allows you to understand that not all IT services are created equal – some have a much more marked and noticeable impact than others.
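The impact ratings in the example table follow a simple rule – the weighting (expressed as a whole-number percentage) multiplied by the lost minutes. A minimal sketch reproducing the example figures:

```python
# Reproduces the impact ratings from the example table above:
# impact rating = weighting (as a whole-number percentage) * lost minutes.
services = {
    "Website (external)": (20, 300),   # (weighting %, lost minutes)
    "Server availability": (50, 10),
    "Email": (15, 200),
    "Intranet": (10, 30),
    "Telephony": (5, 350),
}

for name, (weighting, lost_minutes) in services.items():
    impact = weighting * lost_minutes
    print(f"{name}: impact rating {impact}")
```

The weightings themselves are a business decision – they should be agreed with the business so that the resulting impact figures reflect what the organisation actually values.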

Risk of missing SLA targets

One of the traditional metrics included in SDI’s Service Desk Certification performance measures criteria is percentage of incidents fixed within SLA. This is a reactive metric as it looks at past events, and while this hindsight can be useful in planning future improvements, being proactive allows visibility before the event. The risk of missing SLA targets allows the business to prepare for the potential of missed targets and plan accordingly. If SLAs are going to be missed because of change, this can be explained to the business. Some of these changes will be unavoidable, but to strengthen the business and Service Desk relationship, it’s important to show that IT is making the business aware.

Some targets might be missed because of lack of resource. In this instance, this metric becomes not just about sharing information with the business but actually provides a critical opportunity to ask the business for extra resource to try to prevent targets from being missed.

Presenting Business Value Metrics

Presenting metrics in isolation can be misleading and can distort the realities of the service being delivered. The example graph below shows how a different story emerges when different aspects of service are compared, enabling the Service Desk to better review service quality and amend processes and procedures based on statistical data, which will contribute to an organisation's drive for continual service improvement.

[Figure: example graph comparing different aspects of service]


Moving Beyond the Basics – Delivering True Business Value

The very first table of this white paper clearly shows the preoccupation of Service Desks with providing information that says 'look how good we are'! The focus on providing data to demonstrate the business value delivered by IT via the Service Desk is negligible. It is obviously important to show that IT and the Service Desk are effective and consistent, but ultimately, business management and stakeholders are increasingly interested in the business value provided. If the services provided do not help the business to be more efficient and productive, they have no value, and the business will look to alternatives.

Current Service Desk metrics are almost exclusively focused on how IT and the Service Desk are run, and on showing that they are efficient and productive. There is an increasing requirement to provide metrics that identify information that can contribute to positive business outcomes, as this is what senior business executives care about, not how quickly the phone was answered.

For example, a leading insurance company has started the journey to deliver business value reporting via the Service Desk, whereby the information they provide clearly shows the monetary implications to specific business units due to systems downtime. Based on agreed business criteria, reports show the impact of lost systems availability and financial implications to the business. With this focus, the company can better analyse how a service is designed and compare how a service is actually operating versus how the business needs it to operate in order for it to meet its business plans.

Each organisation will have individual requirements for business value metrics and reporting, but increasingly, more information is becoming available from which to create a framework. A key issue will be how easily organisations can collate, extract and present this information. However, the base recommendation is to identify three to five core metrics that measure results relating to services, financials and people across the business, and to deliver that information via business value dashboards and reports that can be easily accessed and consumed by business stakeholders and users.

This initiative is about ensuring that through the Service Desk, IT is better able to present more balanced information about the value of the IT services provided.


Part Four

Metrics Best Practice

This section of the white paper contains a description of SDI's 17 best practice metrics. These metrics form part of the performance measures criteria for SDI's international Service Desk Certification (SDC) standard. This standard, revised every three years, is created by industry professionals from across the globe. It aims to provide Service Desks with a set of standards that will raise their performance: the performance measures component of the standard highlights the key metrics that should be measured to create a comprehensive analysis of Service Desk activities.

The SDI 17 best practice metrics

1. Number of incidents and service requests
2. Average time to respond
3. Abandon Rate
4. Incident resolution time
5. First Contact Resolution Rate (FCR)
6. Percentage of incidents resolved within Service Level Agreement
7. Re-opened incident rate
8. Backlog Management
9. Hierarchic escalations (management)
10. Functional escalations (re-assignment)
11. Average resolution time by priority
12. Average resolution time by incident category
13. Comparison of SLA goals to actual targets
14. Remote control and self-help monitoring measured against goals
15. Total cost of ownership
16. Relative cost per incident by channel
17. Cost per incident or service request

What do these metrics mean?

1. Number of incidents and service requests

This measures how many incidents or service requests the Service Desk receives. This can also be broken down by channel, e.g. phone, email, live chat, in-person etc.

 

Why it’s important

Measuring the volume of calls enables you to create an effective and robust staffing model; lets you see when your busy periods are by highlighting peaks and troughs; helps to ensure you have enough resources; and shows you through which channels your calls are coming in.
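As a minimal sketch of this kind of volume analysis (the ticket structure and data here are invented for illustration, not taken from any tool), contacts can be counted by channel and by hour of day to expose peaks and troughs:

```python
# Hypothetical sketch of a contact-volume breakdown; the ticket records
# (channel + logged timestamp) are invented for illustration.
from collections import Counter
from datetime import datetime

tickets = [
    {"channel": "phone", "logged": datetime(2013, 5, 1, 9, 15)},
    {"channel": "phone", "logged": datetime(2013, 5, 1, 9, 40)},
    {"channel": "live chat", "logged": datetime(2013, 5, 1, 9, 55)},
    {"channel": "email", "logged": datetime(2013, 5, 1, 11, 5)},
]

# Volume by channel: shows which contact routes need resourcing.
by_channel = Counter(t["channel"] for t in tickets)

# Volume by hour of day: highlights peaks and troughs for staffing.
by_hour = Counter(t["logged"].hour for t in tickets)
```

With real data exported from an ITSM tool, the same two counters give the peak periods and channel mix the staffing model needs.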

2. Average time to respond

The standard says: “The Service Desk routinely and consistently collects and analyses the average time it takes to acknowledge an incident or service request by channel or method (phone, e-mail, user-logged, live chat, SMS, fax, etc.)”

Why it’s important

Knowing how long it takes to respond is a key indicator of how well your Service Desk is performing. Working with this metric and breaking down time to respond by analyst or channel will enable you to make improvements and identify training needs.

16

DEMONSTRATING SERVICE DESK VALUE THROUGH MORE MEANINGFUL METRICS

Part Four: Metrics Best Practice

3. Abandon Rate

“The Service Desk routinely and consistently collects and analyses data about the percentage of user telephone calls that are terminated prior to establishing contact with an analyst.”

 

Why it’s important

This is one of the most important metrics because this informs you as to the availability of your Service Desk to respond to customers. Understanding the abandon rate will help to inform staffing and resource management, and will allow you to better plan for peaks and troughs.

4. Incident resolution time

“The Service Desk routinely and consistently collects data about the average time taken to resolve incidents and service requests and compares it to the goals/objectives detailed in the Service Level Agreement(s) (SLAs).”

This metric looks at how quickly you resolve incidents and compares these resolution figures to the goals in the SLA.

 

Why it’s important

Understanding the number or percentage of incidents resolved within each priority category offers a clear indication of how your Service Desk is performing against the obligations and agreements you have with your customers.

5. First Contact Resolution Rate (FCR)

“The Service Desk routinely and consistently collects and analyses the percentage of incidents and service requests that are resolved to the customer’s satisfaction during the initial call or electronic exchange between end-users and the Service Desk, excluding the entitlement procedure.”

This metric is fundamentally different from first level (or line) fix rate, which concerns incidents resolved at first level (the Service Desk) without being escalated to a resolver team (2nd and 3rd line).

 

Why it’s important

Knowing the first time fix rate is important as this will give you an understanding of the competency level of your Analysts and the type and difficulty of the incidents they grapple with.

6. Percentage of incidents resolved within Service Level Agreement

“The Service Desk routinely and consistently collects data about the percentage of incidents and service requests resolved within the timeframes specified in formal service level agreements.”

This measure allows you to understand how you are performing against the agreements you have with your customers. Service Levels are often classed by priority (P1, P2, P3 etc.) with P1 being the highest priority with the lowest agreed time to fix.

 

Why it’s important

Measuring this metric allows you to ascertain whether the priority levels are correct or if they’re unobtainable. For example, if you are consistently breaching priority levels, it could be the case that the agreed resolution times need to be changed, or that you require more resources to make them achievable.

7. Re-opened incident rate

“The Service Desk routinely and consistently collects data about the percentage of closed incidents and service requests subsequently re-opened for additional follow-up.”

This will benefit from some analysis on what incidents have been re-opened to gain a better understanding of why the re-open has occurred.

Why it’s important

Understanding why incidents have been re-opened is important because it identifies whether there is a training need and explains why incidents were not closed in a satisfactory way. Examining re-opened incidents also helps to inform the process for closing incidents. If lots of incidents are being re-opened, it suggests they are not being closed correctly. If the reverse is true, it suggests either that the fixes provided are satisfactory, that incidents are not being re-opened when they should be (the issue is logged as a new incident instead), or that customers do not have a large enough window to offer their opinion on whether the fix was adequate.

DEMONSTRATING SERVICE DESK VALUE THROUGH MORE MEANINGFUL METRICS

17

Part Four: Metrics Best Practice

8. Backlog Management

“The Service Desk routinely and consistently collects data about the total number of open incidents or service requests compared to their age.”

It’s worth considering assigning someone to monitor the backlog data to see why calls are still outstanding and how they will be resolved; this review is often called a triage process.

 

Why it’s important

Understanding what calls are still open and why is an incredibly useful process as it allows you to identify if calls are being closed correctly; if calls are being escalated correctly; what action needs to be taken to resolve the open incidents; and why these incidents have not been resolved thus far. Backlog data can also identify if there is a lack of resource on the Service Desk.
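One possible sketch of such a backlog review (the age bands here are illustrative assumptions, not part of the SDC standard):

```python
# Illustrative backlog-ageing profile for the triage review described
# above. The age-band boundaries are assumptions, not part of the
# SDC standard.

def age_band(age_days):
    """Map an open ticket's age in days to a review band."""
    if age_days <= 2:
        return "0-2 days"
    if age_days <= 7:
        return "3-7 days"
    if age_days <= 30:
        return "8-30 days"
    return "over 30 days"

def backlog_profile(open_ticket_ages):
    """Count open incidents/service requests per age band."""
    profile = {}
    for age in open_ticket_ages:
        band = age_band(age)
        profile[band] = profile.get(band, 0) + 1
    return profile

# backlog_profile([1, 1, 5, 12, 45])
# -> {'0-2 days': 2, '3-7 days': 1, '8-30 days': 1, 'over 30 days': 1}
```

A growing 'over 30 days' band is exactly the signal the text describes: either calls are not being closed or escalated correctly, or the desk lacks resource.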

9. Hierarchic escalations (management)

“The Service Desk routinely and consistently collects data about the percentage of incidents or service requests escalated to management in order to avoid a developing SLA breach.”

Why it’s important

It’s important to measure how many incidents are escalated to management as this will help to identify if there are any training issues. It will also allow you to see how much resource is being taken by management fixing incidents and handling customer complaints and feedback.

10. Functional escalations (re-assignment)

“The Service Desk routinely and consistently collects data about the percentage of incidents, and service or change requests transferred to a technical team with a higher level of expertise in order to avoid an SLA breach developing.”

Functional escalations are distinctly different from hierarchic escalations in that this type of escalation is to another team, and not management. Functional escalations will be incidents that are passed to resolver teams (2nd and 3rd line).

 

Why it’s important

Much like hierarchic escalation, functional escalation enables you to understand the number of incidents passed to the resolver teams, and trends during a period of time. It will enable you to see if training courses have had an effect on the number of escalations to resolver teams, and can be useful in identifying future training needs. By manually looking through some of the incidents escalated, you will begin to understand what incidents most commonly require external assistance and whether training for the 1st line team will be beneficial.

11. Average resolution time by priority

“The Service Desk routinely and consistently collects data about the average length of time taken to resolve incidents analysed by their priority.”

 

Why it’s important

This metric enables you to see if the priority categorisations are correct and if you are meeting your targets on a regular basis. It is important to look carefully at the exceptions to understand why they have breached and what can be done in the future to prevent them from breaching again.

12. Average resolution time by incident category

“The Service Desk routinely and consistently collects data about the average time required to process/resolve a user incident or service request based on incident or service request type.”

This metric looks specifically at incidents resolved within the categories the Service Desk has defined. These might typically include password resets, e-mail problems, hardware errors, etc. This metric is distinctly different from resolution time by priority.

Why it’s important

Measuring the resolution time by incident category allows you to identify the most common incidents and how quickly they are resolved. It’s important to look at the exceptions to see what incidents have exceeded their goal or target time for resolution. Recording incidents by category also allows you to build a list of the most common incidents your Service Desk attends to. You’ll also be able to see what type of incidents take the most time to resolve and which ones are quick fixes.


13. Comparison of SLA goals to actual targets

“The Service Desk routinely and consistently collects data about its service level commitments and compares it to its actual performance results.”

 

Why it’s important

Many Service Desks live or die by their performance to SLA targets, and if the wider business is interested in metrics, this is the one it tends to hone in on. From a management perspective, comparing performance against SLA can be invaluable in helping to identify areas for improvement and understanding strengths and weaknesses.

14. Remote control and self-help monitoring measured against goals

“The Service Desk routinely and consistently collects data about the frequency that remote control tools are used and the number of times that self-help tools assist in incident and service request resolution compared against goals.”

This can be a difficult metric to record, and there are two ways to tackle the problem. For remote control, do some development work to include a ‘flag’ that analysts can tick if they have used remote support. For self-help, you could add a tick box for users to indicate whether the article/guide/FAQ was useful or not. Developing this further, a rating system could be adopted, allowing users to comment on the quality and accuracy of the information.

 

Why it’s important

Measuring remote control usage is vital as it provides a real insight into the abilities of your analysts. Also, who is using remote support? What incidents is it most successful at fixing? Which customers accept remote support – are there some that refuse to allow analysts to connect to their machines in this way?

Measuring who uses remote support identifies any nascent training needs – if it’s not being used, why not? Do analysts know how to use it? These are revealing findings and will help to train and educate your Service Desk.

For self-help, it’s important to understand the effectiveness and quality of the information you have made available as this will help to refine and shape future articles and information. Also, you want to be able to identify if this information is being used and whether more marketing needs to be done to help promote the availability of self-help to the user population.

15. Total cost of ownership

“The Service Desk routinely and consistently collects data about the total support cost of each contact and/or customer.”

Total cost of ownership refers to the total cost of running the Service Desk.

 

Why it’s important

Quite simply, it’s vital to understand how much the Service Desk costs to run. Only through understanding these figures can you discover whether there is the money available for increasing resources or increasing spending in other areas. Measuring, tracking and trending the cost of ownership will enable you to ascertain if your Service Desk has made any cost or efficiency savings.

16. Relative cost per incident by channel

“The Service Desk routinely and consistently collects data about the relative cost of Service Desk operations by channel i.e. telephone, email, live chat, SMS, fax, walk-ins etc.”

A simple method for calculating costs:

Average Cost per call/per e-mail

This, along with cost per e-mail, is the essential metric to grapple with if you want to determine the value of your Service Desk; yet, as revealed in part one of this white paper, only 10 per cent of Service Desks measure it.

Things to consider:

To give an accurate and fair measurement, the cost of second and third line support should be included.

Determine which measures should be incorporated to give the final figure. For example, some intangible measures, such as call waiting time or informal peer support, should be given a weighting and added to the final total.

You will also need to know your staff costs to get an accurate handle on call costs.


A method for calculating cost per call/per e-mail

Some companies use the actual Service Desk budget to calculate cost per call. In essence, they include every cost involved in running the Service Desk and divide this by the number of calls received. This method is a little too simplistic for what we’re really looking for in the cost per call metric, but it does highlight why a comparison of metrics is so difficult.

Others will include every cost involved in taking the call. They will include postage costs if hardware needs to be replaced. They might include the cost of using technicians or field agents. Some will include the cost associated with the loss of productivity caused by the user being on the telephone. This is why there is such high variation in the reporting of this metric. This is a much more involved way of measuring the metric, but it may also be more informative. If a value can be placed on productivity loss, it will be clear how vital the Service Desk is to the operation of the business. If you can report that your desk saved x amount of productivity, this will place your desk in a very strong position.

The Formula

There are lots of different ways this metric can be measured, but here is one of the best, all-encompassing ways:

The all-encompassing way

All costs associated with providing support (including heating, lighting, rent, salaries, hardware, software etc.)
÷ number of Analysts
÷ number of minutes worked
= Cost per Analyst per minute

Then ...

Cost per Analyst per minute × time taken to resolve and close an incident = Cost per call or email

Explanation

Ultimately, you want to understand your Service Desk staff cost, broken down into as small a unit as possible. Your HR department can tell you all the components you will need to measure this: salary, benefits, heating, lighting, equipment and any other measures that you think should be included. From this data, you can then work out how much an Analyst costs to employ per minute.

Add to this figure the lifetime cost of your software, including support and maintenance. You can split the costs over three years to give you some idea of what it actually costs to run the systems.

You might want to add hardware costs and the cost of using second and third line (although of course, you could have Analyst cost per call, second line cost per call etc.)

Adding up the above will give you the cost per call/e-mail per minute, which then needs to be multiplied by the time duration of the call/e-mail.
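The steps above can be sketched as a small calculation. Every figure here is an illustrative assumption; substitute your own cost components and working-minute totals:

```python
# Sketch of the all-encompassing cost-per-call formula. All figures are
# illustrative assumptions, not real Service Desk data.

def cost_per_analyst_per_minute(total_support_costs, num_analysts,
                                minutes_worked_per_analyst):
    """All support costs, divided down to a single analyst-minute."""
    return total_support_costs / num_analysts / minutes_worked_per_analyst

def cost_per_contact(per_minute_cost, handling_minutes):
    """Cost per call/e-mail: analyst-minute cost x time to resolve and close."""
    return per_minute_cost * handling_minutes

# Assumed figures: 300,000 total annual support costs (heating, lighting,
# rent, salaries, hardware, software), 5 analysts, 100,800 working minutes
# per analyst per year, 12-minute average handling time.
per_minute = cost_per_analyst_per_minute(300_000, 5, 100_800)
per_call = cost_per_contact(per_minute, 12)  # roughly 7.14
```

Tracked month on month with consistent inputs, this gives the benchmark the text describes for seeing whether support costs are rising or falling.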


“Service Desks must know what we can add to the business’ potential to enhance and expand further.”

HOWARD KENDALL, FOUNDER, SDI


The simple way

Total salaries of Service Desk staff
÷ number of Analysts
÷ number of minutes worked
= Cost per Analyst per minute

Then ...

Cost per Analyst per minute × time taken to resolve and close an incident = Cost per call or email

Explanation

The formula is essentially the same, except the number of included costs is significantly reduced. While this might give you a number that is less accurate, it does not mean that it will be less useful. Ultimately, whatever measure you choose needs to provide you with an indication of whether your costs are going up or down over time. Both of the suggested formulas will provide you with a ‘stake in the ground’ and a useful benchmark from which to analyse your support costs over a period of time.
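A sketch of the simple formula under the same kind of illustrative assumptions (salaries only):

```python
# Sketch of the 'simple way': salaries only, as a stake-in-the-ground
# benchmark. All figures are illustrative assumptions.

def simple_cost_per_contact(total_salaries, num_analysts,
                            minutes_worked_per_analyst, handling_minutes):
    """Salary cost per analyst-minute x time to resolve and close."""
    per_minute = total_salaries / num_analysts / minutes_worked_per_analyst
    return per_minute * handling_minutes

# Assumed figures: 150,000 total salaries, 5 analysts, 100,800 working
# minutes per analyst per year, 12-minute average handling time.
cost = simple_cost_per_contact(150_000, 5, 100_800, 12)  # roughly 3.57
```

The absolute number matters less than its trend: computed the same way each period, it shows whether costs are moving up or down.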

Comment from Tony Ranson, Independent Consultant

Why do service desks find it difficult to measure cost-based metrics?

“I’m not sure Service Desks find it difficult to measure costs, but I think so few do because they have not yet reached that level of maturity: it’s simply the case that they have not thought about costs yet. For many Service Desks, the thought of measuring costs is like a new science, and the approach to measuring costs can be overwhelming. What I would suggest is that service desks approach measuring costs by making a stake in the ground, which is fundamental to see if you are getting better or worse. There is time to finesse and improve over time through continual service improvement.”

17. Cost per incident or service request

“The Service Desk routinely and consistently collects data about the cost per incident and service request of the Service Desk’s operations (including people, support infrastructures, and overheads).”

For this metric, you can use the same formulas as for cost per incident per channel, except in this instance, you’re looking at the total cost of incidents and service requests.

Displaying Metrics Information – SDC best practice guidelines

The graph below illustrates a best practice method for displaying metrics data. The graph should contain data for a 12-month period; have a goal or target line; show the trend of the data; and be presented in a clear, concise and consistent standard format. Metrics should be presented in a format that is acceptable and clear to the business and that adheres to corporate guidelines.

[Figure: example graph showing best practice metrics presentation]


Why do Metrics Matter?

Service Desks need to understand the value they offer to their users and the business. As the survey results show, adherence to SLA targets and customer satisfaction are key metrics for Service Desks, and these metrics are true determinants of business value. The other component of value is the cost aspect. The survey reveals that a large percentage of Service Desks do measure some aspects of their costs and rely on these measures to truly understand the value for money they offer to the business.

Why is it important to understand the value of IT to the business? If you don't know your value, it's much harder to justify any staff or budget increases for your Service Desk. If Service Desks are unable to measure and communicate their value, the stigma of IT being a 'cash drain' will persist. Furthermore, with companies tightening their belts, reducing spending on IT might be near the top of the cull list. If companies don't know the value of IT, it is more likely that Service Desk budgets will be cut in preference to areas of the business that do provide tangible evidence of their value.

Why do Service Desks find it difficult to establish their business value? Firstly, they find it hard to establish what metrics they should be measuring, how to measure them, and what they should do with the results. Secondly, the majority of Service Desks are concerned with reporting how they spend money, not with determining the value that this expenditure actually provides. The good news is that the fixes to these problems are relatively straightforward: with a few simple metrics measurements, the business will have a much greater understanding and appreciation of the Service Desk's value.

The other component is service: the Service Desk – as its name implies – is primarily concerned with delivering a service to its users and customers. One of the ways to achieve this is to manage expectations by putting agreements in place. In doing so, both parties know what they are expected to deliver and when resolution can be expected. Measuring this will demonstrate whether these deadlines are consistently being met and, therefore, whether a good service is being delivered. If targets are consistently breached, this indicates there are key problems that need to be addressed as a matter of urgency – and if you are not measuring this data, you will not know if this is the case.

The ultimate answer to the question of ‘Why do metrics matter?’ is that if you don’t know how much your services cost or whether you deliver a good service that meets expectations, you can expect some tricky times ahead. It has never been more important to demonstrate your value from a financial and customer perspective.


Conclusion

It is clear from the survey results and the discussion of business value metrics that metrics are a major topic for Service Desks. There is much debate around the most important Service Desk measures and, of course, these vary depending on a Service Desk's structure, goals and the types of organisation and users supported. The commonality is found in the drive to demonstrate value: this explains why adherence to SLAs and customer satisfaction featured so prominently on the list of metrics important to the Service Desk and the business.

Service Desks today look to demonstrate or prove they provide value for money and that they can justify further investment and expansion. Metrics play a crucial role in delivering this message as they offer tangible, empirical evidence that the Service Desk is delivering a quality service. When so much of IT is difficult to define and accurately cost, metrics play a crucial role.

It is also clear that metrics are evolving and maturing away from demonstrating performance towards demonstrating core business value. Understanding the role that IT plays in delivering value to the business – in terms of supporting users, ensuring availability and mitigating the risk of IT failure – is a key consideration, and we can expect businesses to look more and more towards tangible demonstration of these values. It is heartening to find that so many Service Desks measure some metrics and that many of these are considered industry best practice measurements: it is these metrics that will provide the strong foundations for future business value measurements.


Part Five

Interviews


Simon Middleyard, Joint Head of Service and Infrastructure, Government Organisation

Simon is responsible for all aspects of IT service and desk side support and has been in post for 18 months. He has a strong service management background, having previously been a Service Desk Manager. His Service Desk has experienced significant change in the 18 months that he has been in the role: in this time, the team has implemented SLAs, event-based surveys and other KPIs. The Service Desk supports 1,000 users.

The key metrics

18 months ago there were no metrics in place, and customers and the organisation had no expectations around the level and quality of support they should receive from the Service Desk. This changed when the Service Desk expanded from one to three Analysts. The Financial Director wanted metrics that would demonstrate that the heavy investment in recruitment was worthwhile: essentially, he was interested in understanding whether the Service Desk was offering value for money. As a result, Simon and his team implemented SLAs with four priority levels and an event-based survey sent to users after every call is closed. Performance against SLA and customer satisfaction provides the Business Director with a general flavour of the service provided.

Sharing metrics

Simon’s team shares metrics on a weekly basis at their Wednesday morning meeting, where the metrics are discussed; on a monthly basis, they are passed up to Director level. The Director is primarily interested in whether the Service Desk is missing SLA targets, as this is their primary measure of Service Desk performance. The team also shares customer satisfaction results graded on a 1-4 scale.

Cost based metrics

Currently, Simon’s team does not measure any cost based metrics. He thinks it would be difficult to create a calculation as their tool has been developed in-house and does not currently record the information required. He believes that Service Desks find it difficult to measure cost based metrics because, often, they do not know where to start. Simon’s team believes it is able to demonstrate value through adherence to SLAs and customer satisfaction, and these metrics help to justify the investment in additional staff.


David Lee, Service Desk Team Leader, Northumbria Healthcare

David has worked on the Service Desk for the past six years. He started as a Service Desk Analyst before moving into desktop support, and became Service Desk Team Leader two and a half years ago. His team consists of six first line Analysts who support 9,000 users.

The key metrics

Up until 2011, the service desk only measured the number of logged tickets and found that this met many of its reporting objectives, as it enabled the desk to understand the volume of work and the resources needed. The turning point came when David attended an SDI metrics event in Manchester. Hearing the presenters' and attendees' experiences of metrics proved to be a real eye-opener for David, and he created a suite of seventy-five metrics. The key metrics for David – and the ones that help him understand the service desk's performance – are total contacts (number of incidents and service requests); average ticket turnaround time; and resolved within SLA.

Sharing metrics

David creates a monthly metrics report containing a selection of his seventy-five metrics. This report contains graphs and commentary and is distributed amongst the Service Desk team. Beyond this, the Head of Department also receives a monthly report compiled from the data provided by each of the Team Leaders. The author of this report decides which metrics to include, but of greatest interest to the business is performance against SLAs: indeed, David notes the business is more concerned with adherence to SLA than with how busy the team is and how many tickets they log. David noted that it can be difficult to create metrics reports within their ITSM tool, although he has managed to automate many of the metrics included in his report; including other metrics would require delving into SQL.


DEMONSTRATING SERVICE DESK VALUE THROUGH MORE MEANINGFUL METRICS

Cost based metrics


Currently, David’s team does not measure any cost based metrics, although he is interested in doing so. He would welcome some additional guidance, as the formulas he has looked at have been complicated and unwieldy.


Lauren Conrad, Service Desk Coordinator, Lucidica

Lucidica is a managed service provider based in London. It supports small companies, typically with between 1 and 50 employees. Established in 1999, Lucidica has grown its business by offering support with a personal touch and strong customer service.

Lauren Conrad is a Service Desk Coordinator and looks after a team of seven Engineers – she has been in the role for the past two years.

The key metrics

Every morning, Lauren reviews the previous day’s open and closed calls and ‘calls not started’ – jobs that have been logged but have had no action taken on them. The open and closed calls allow Lauren to see the backlog and identify which calls she needs to assign to an Engineer. There are two types of SLA: one for contract clients and the other for ad hoc work. Lucidica also measures customer satisfaction by conducting a telephone interview with a random selection of 20 per cent of its closed calls – this allows her to identify whether calls have been closed correctly and resolved to the client’s satisfaction.
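The 20 per cent follow-up sample Lauren describes amounts to simple random sampling without replacement. A minimal sketch, using invented call references:

```python
import random

# Hypothetical list of yesterday's closed call references.
closed_calls = [f"CALL-{n:04d}" for n in range(1, 51)]  # 50 closed calls

# Draw a random 20 per cent of closed calls for telephone follow-up.
# random.sample picks without replacement, so no call is surveyed twice.
sample_size = round(len(closed_calls) * 0.20)
survey_sample = random.sample(closed_calls, sample_size)

print(sample_size, sorted(survey_sample))
```

Sampling randomly, rather than taking the first or last fifth of the day’s calls, avoids skewing the satisfaction figures toward a particular shift or Engineer.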

Sharing metrics

Lucidica does not share metrics with clients, as any problem areas are identified through account managers and primary contacts. Clients don’t tend to ask for metrics reports, and the business ethos is built around relationships. However, Lucidica does produce an annual report for every client covering current problem areas, current SLAs and call volumes.

Cost based metrics

Lucidica does not measure any specific cost based metrics, although every Engineer records the time spent on each job and an annual net rate bill is produced – this allows Lucidica to understand the profitability of each client and adjust resources as necessary. The report might also prompt changes to SLAs or highlight clients that are having lots of problems.
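The net rate idea – costing recorded Engineer time against what each client pays – can be sketched as below. The clients, hours and internal hourly cost rate are assumptions for illustration, not Lucidica’s own figures:

```python
# Assumed fully loaded cost of one Engineer hour (salary, overheads etc.).
INTERNAL_HOURLY_COST = 40.0

# Hypothetical job log: time each Engineer recorded against each client.
jobs = [
    {"client": "Acme Ltd", "hours": 3.5},
    {"client": "Acme Ltd", "hours": 2.0},
    {"client": "Bloggs & Co", "hours": 12.0},
]
# Hypothetical annual fees received from each client.
annual_fees = {"Acme Ltd": 1200.0, "Bloggs & Co": 300.0}

# Total the recorded time per client, cost it, and compare with fees.
hours_by_client = {}
for job in jobs:
    hours_by_client[job["client"]] = hours_by_client.get(job["client"], 0.0) + job["hours"]

profitability = {
    client: annual_fees[client] - hours * INTERNAL_HOURLY_COST
    for client, hours in hours_by_client.items()
}
print(profitability)  # negative values flag loss-making clients
```

A negative figure is exactly the signal the interview describes: a client consuming more support time than its contract covers, prompting an SLA or resourcing conversation.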


Gary Adams, Service Desk Manager, NHS Hertfordshire

Gary has worked on NHS Hertfordshire’s Service Desk for the past nine years and has been Service Desk Manager for eight of those. The Service Desk supports 8,000 users and is open 365 days a year, 7.30–22.00. The desk has undergone a remarkable transformation during Gary’s time, progressing from a ‘log and flog’ desk with a First Time Fix (FTF) rate of 12 per cent to a current rate of more than 60 per cent. It has also grown from four to fourteen people, and its user population has increased along with its geographical scope.

The key metrics

On a daily basis, Gary focuses on FTF rate, First Level Fix rate, SLA performance, Average Speed to Answer, and total points of contact (total number of interactions); the Service Desk is currently averaging 12,000 points of contact per month. For Gary, First Level Fix has been a particularly interesting metric, as it enabled the team to discover how much the Service Desk could fix without escalating to second or third line.
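The distinction Gary draws matters: First Time Fix is commonly taken to mean resolution during the initial contact, whereas First Level Fix counts anything first line resolves without escalation, however long it takes. A sketch over an invented ticket log (the field names are illustrative assumptions, not Gary’s system):

```python
# Hypothetical monthly ticket log. "resolved_on_first_contact" flags
# tickets fixed during the original interaction; "resolved_by" records
# the support level that eventually closed the ticket.
tickets = [
    {"resolved_by": "first_line", "resolved_on_first_contact": True},
    {"resolved_by": "first_line", "resolved_on_first_contact": False},
    {"resolved_by": "second_line", "resolved_on_first_contact": False},
    {"resolved_by": "first_line", "resolved_on_first_contact": True},
    {"resolved_by": "third_line", "resolved_on_first_contact": False},
]
total = len(tickets)

# First Time Fix: resolved during the initial contact.
ftf_rate = 100 * sum(t["resolved_on_first_contact"] for t in tickets) / total

# First Level Fix: resolved by first line without escalation,
# however long it took.
flf_rate = 100 * sum(t["resolved_by"] == "first_line" for t in tickets) / total

print(ftf_rate, flf_rate)  # 40.0 60.0
```

First Level Fix is always at least as high as First Time Fix, since every first-contact resolution is also a first line resolution; the gap between the two shows how much first line fixes on callback or follow-up.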

The business is driven by what the Service Desk says is important. However, the business does focus on response and resolution within SLA and the availability and uptime of hardware.

Cost based metrics

Gary has looked at cost based metrics but has not yet created any concrete measures. He thinks creating cost based metrics is difficult, but is keen to understand and learn from others who already have these measures in place. Going forward, Gary is aware that cost based metrics will become increasingly important, as his Service Desk will be asked to justify its value and demonstrate that it has a clear understanding of costs.


Fiona Campbell, IT Customer Services Manager, Customer Protection Agency

Fiona’s role covers all elements of internal support and some external support. She manages two desktop engineers, five Help Desk Analysts and one Team Leader.

The key metrics

This was an interesting question for Fiona. Up until last year, Fiona had a strong focus on metrics and tracked measures such as number of incoming contacts, calls handled, average response time and quality of calls. However, a new IT Director joined the organisation and advocated a different focus and approach, asking Fiona’s team to spend less time on metrics: he believed that as long as the service was working, there was no need to spend time measuring it. As a result, Fiona has moved away from metrics and instead relies on customer feedback to assess how effectively her team is delivering support. Fortunately, customers are very vocal and so provide Fiona’s team with an honest and candid appraisal.

Fiona still measures metrics but does not report on them. She has found this transition difficult, as metrics used to form a key component of the appraisal process and one-to-ones; now, management relies on ‘gut feel’ rather than empirical data. However, Fiona’s team has only fifteen per cent of its calls outstanding at the end of the working day, which demonstrates to her that they are working effectively. She also receives positive feedback from her customers, which, to her, makes the job worthwhile.


Jason Kearney, Service Delivery Manager, Orbit Services

Jason has been Service Delivery Manager at Orbit for the past 18 months. During this time, he has moved the Service Desk function out of its previous position within the Customer Services team and into its own department. The motivation for this was that the business was reporting poor feedback and many calls were going unanswered. The Service Desk has around 14 people, including first and second line, and his team supports around 1,200 users, a large percentage of whom are mobile workers.

The key metrics

Jason’s key metrics are driven by the demands of the business. He focuses on availability, hardware requests (volume and turnaround time) and SLA performance centred on time to restore services, time to fulfil requests and time to resolve. Current SLA performance is 90 per cent for incident management and around 95 per cent for request fulfilment. Jason’s team also has a metric for training request fulfilment, with internal training provided within 21 days of the request: they are currently hitting 90 per cent for this measure. A good indicator of the Service Desk’s improved performance following its move out of Customer Services is that the First Time Fix rate increased from 1 per cent to 50 per cent.

Sharing metrics

Metrics reports are produced on a monthly basis and are shared with the business. There is also a quarterly IT strategy board meeting at which the business is asked what it wants from IT, and from these discussions a rolling two-year plan is created. A good example of this feedback in action: the average speed to answer target was relaxed from 20 seconds to 40 seconds after the business told the Service Desk it was happy to wait longer if there was a better chance of incidents being resolved.

Cost based metrics

Currently, Orbit’s Service Desk does not measure any cost based metrics, but this is definitely something they are looking at in the future.


Richard Haslam, Service Desk Coordinator, Royal College of Physicians

Richard’s Service Desk comprises eight people, two of whom are on the first line. They support 450 staff located across thirteen sites and handle 8,000 interactions a year. Richard has been in his current role for the last three years.

The key metrics

For Richard, the key metrics are call to logging time (how long it takes for calls to be logged into the system), time to close and performance against SLA. Beyond this, Richard drills deeper into the types of incident received and takes a proactive approach to minimising their future occurrence, for example by identifying training needs or faulty hardware.

Sharing metrics

Metrics are compiled into a monthly report shared with the Head of Operations, who then passes it on to the CTO and other senior management. Senior management is interested in whether the Service Desk is meeting its SLA targets, and Richard and his team keep on top of the SLAs by monitoring any breaches and following up on the reasons behind them. There is a strong development culture, and Richard and his team are constantly looking at ways to improve.

Cost based metrics

Richard has calculated the cost per call and per email by dividing the salaries of the team by the time it takes to complete the work. Using this measure, he can calculate the cost of each piece of work and estimate how much can be saved through education and training programmes. The measurement has given Richard a stake in the ground, so he can see whether his support costs are changing over time. It has also helped to move away from the perception that the Service Desk is a resource that can be infinitely consumed.
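Richard’s salary-based costing can be sketched as below. The salary bill, headcount and productive hours are invented figures for illustration, not the Royal College of Physicians’ own:

```python
# Hypothetical inputs for a salary-based cost-per-contact estimate.
annual_team_salaries = 240_000.0       # assumed total salary bill
productive_hours_per_year = 8 * 1_650  # assumed: 8 staff x 1,650 hours each

# Cost of one staff hour: salary bill spread over productive time.
cost_per_staff_hour = annual_team_salaries / productive_hours_per_year

def cost_of_ticket(hours_spent: float) -> float:
    """Cost of one piece of work: time taken times the hourly staff cost."""
    return hours_spent * cost_per_staff_hour

# A 30-minute call versus a 10-minute email:
call_cost = cost_of_ticket(0.5)
email_cost = cost_of_ticket(10 / 60)
print(round(cost_per_staff_hour, 2), round(call_cost, 2), round(email_cost, 2))
```

Rerunning the same calculation each quarter is what gives Richard his ‘stake in the ground’: if the cost of a typical call drifts up, something in the workload or resourcing has changed.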


About Cherwell Software

Cherwell Software is one of the fastest growing IT service management software providers. It has corporate headquarters in Colorado Springs, Colorado, U.S.A.; EMEA headquarters in Wootton Bassett, U.K.; and a global network of expert partners. Cherwell Software is passionate about customer care and is dedicated to creating “innovative technology built upon yesterday’s values.”

Its award-winning flagship product is Cherwell Service Management™, a fully integrated service management software solution for IT and technical support professionals, with out-of-the-box PinkVERIFY-accredited ITIL processes and wizard-driven customisation that allows customers to tailor the tool to match their processes without writing any code. Cherwell Service Management offers unmatched flexibility in hosting and concurrent licensing for a low total cost of ownership.

www.cherwell.com


About The Service Desk Institute (SDI)


Founded in 1988 by Howard Kendall, the Service Desk Institute (SDI) is the leading authority on Service Desk and IT support related issues, providing specialist information and research about the technologies, tools and trends of the industry. It is Europe’s only support network for IT Service Desk professionals, and its 800 organisation members span numerous industries.

Acting as an independent adviser, SDI captures and disseminates creative and innovative ideas for tomorrow's Service Desk and support operation. SDI sets the best practice standards for the IT support industry and is the conduit for delivering knowledge and career enhancing skills to the professional community, through membership, training, conferences, events and its publication SupportWorld magazine. It also offers the opportunity for international recognition of the support centre operation through its globally recognised Service Desk Certification audit programme.

www.servicedeskinstitute.com


About SITS – The Service Desk & IT Support Show

The show comprises an exhibition of the latest products, services and solutions from some of the world’s leading suppliers, and a world-class education programme including over 40 seminars, keynotes, briefings and discussions. With over 4,500 ITSM professionals converging, it’s a great networking opportunity and there is no charge to attend.

www.servicedeskshow.com

