Workshop overview
This session is designed for professionals who operate in the field of IT performance management and enables them to create a KPI-based scorecard using KPIs that actually make sense to the business. The following are the objectives for this session:
- Understand the principles of successful IT Performance Management
- Be able to identify which KPIs are relevant to their business
- Learn to identify quality KPIs
- Be able to configure a scorecard and dashboard using a Software as a Service environment
- Be able to interpret the information in a dashboard and use it for effective decision making
This workshop focuses on the process for creating an IT KPI-based scorecard; it will not go into the details of data collection, data quality, or data transformation.
Workshop Agenda
Session 1 agenda (Title | Description | Type | Time):
1. The current state of IT Performance Management in your organization | Introduction of the participants and discussion on the maturity of IT Performance Management in your organization | Assignment | 2.00-2.10 PM
2. Current state of the industry | An overview of the current state of the IT Performance Management industry | | 2.10-2.30 PM
3. IT Performance Management concepts | An overview of the IT Performance Management process, terminology, and concepts | | 2.30-3.00 PM
4a. How to | Defining IT Performance Management objectives and identifying key processes relevant to your organization | Presentation | 3.00-3.45 PM
4b. Define / Measure | Define requirements for IT Performance Management: objectives, audience, scope | Individual Exercise | 3.45-4.30 PM
5. Scorecard Design | Create an IT KPI-based scorecard based on the current state of your organization and your requirements | Presentation and Group Exercise | 4.30-5.30 PM
6. Scorecard and dashboard examples | Analysis of a number of scorecard and dashboard examples | Presentation and Group Discussion | 5.30-5.45 PM
7. Next steps | Q&A session and definition of next steps | Group Discussion | 5.45-6.00 PM
Session 1
Agree on the maturity of IT Performance Management in your organization:
- Level 0: IT Performance Management is non-existent.
- Level 1: There is ad hoc monitoring in isolated areas.
- Level 2: Some measures are set with a clear link to business goals but are not communicated. Measurement processes emerge, but are not consistently applied.
- Level 3: Efficiency and effectiveness are measured and communicated and linked to business goals and the IT strategic plan. Continuous improvement is emerging.
- Level 4: There is an integrated performance measurement system linking IT performance to business goals by global application of a documented framework. Continuous improvement is a way of life.
- Reduction of IT cost
- Establish the progress toward achieving goals
- Optimal resource allocation
- Be compliant with internal or external regulations
- Require insight into performance against service levels
- Identify internal improvement opportunities
- Other
- Having support from the business
- Proper guidance on the implementation
- Having a formalized service management framework in place (ITIL, COBIT)
- Having a useful software application to capture and convey performance measures
- Having an industry standard for metrics (a common language)
- Having the proper resources in place (budget, people, infrastructure)
- Other. Please specify:
- Business Intelligence (e.g. IBM, Hyperion, SAP)
- Manual reporting (e.g. Excel)
- Internally developed software solution
- IT performance solution from an external vendor (Metricus, M42)
- IT service management solutions (HP OpenView, BMC Remedy)
- Service Level Management / monitoring tools (Digital Fuel, Oblicore...)
- Other. Please specify:
- We don't use any tools to report IT performance
Session 2
For too many organizations IT is a black box. Projects and systems are so complex that few CIOs can predict a direct impact on the business, making it hard to win budget and resources even in prosperous times. And when the CIO can't get a clear picture of the real-time data that underlies critical applications, infrastructure, and projects, IT too often ends up reacting to issues after users and customers are having problems. (Information Week March 2008)
Source: InformationWeek, March 2008, 'Hunting the Elusive IT Dashboard'
Source: Trends in IT Performance Management, 2008/9, ITpreneurs survey and interviews among 99 IT executives and consultants
They could, but information is fragmented and dispersed over tools and reports
They do, but a lot of effort goes into consolidating various reports into one view
They have a holistic picture of their overall IT performance thanks to one or few regular concise reports
[Chart: where IT performance reporting effort goes - collecting data from various sources, converting data into logical numbers, building complex Excel sheets; rated highly ineffective. Percentage of respondents, 2008 vs. 2009]
Source: Trends in IT Performance Management, 2008/9, ITpreneurs survey and interviews amongst IT executives and consultants
[Chart: IT Performance Management maturity distribution, Level 0 to Level 4, percentage of respondents, 2008 vs. 2009]
[Chart: drivers for IT Performance Management - reduction of IT cost, optimal resource allocation, insight into performance against service levels. Percentage of respondents, 2008 vs. 2009]
Source: Trends in IT Performance Management, 2008/9, ITpreneurs survey and interviews amongst IT executives and consultants
[Chart: critical success factors for IT Performance Management - having a useful software application to capture and convey performance measures, proper guidance on the implementation, other. Percentage of respondents, 2008 vs. 2009]
Source: Trends in IT Performance Management, 2008/9, ITpreneurs Survey and interviews amongst IT executives and consultants
[Chart: tools used to report IT performance, percentage of respondents, 2008]
Source: Trends in IT Performance Management, 2009, ITpreneurs survey and interviews amongst IT executives and consultants
Session 3
IT Performance Management is about measuring, improving, and demonstrating the value of IT. It is the effective combination of methods, metrics, data, and tools that enables organizations to define KPIs that are relevant to them, to understand their current performance against predetermined goals, and to build on this information, initiate improvement activities, and achieve optimal IT performance in line with business requirements.
[Diagram: the IT Performance Measurement cycle - Define, Measure, Manage, Improve]
Performance measurement is the process of assessing progress toward achieving predetermined goals*. Performance Management builds on that process, adding the relevant communication and action on the progress achieved against these predetermined goals.
* Wikipedia
[Diagram: the Define / Measure / Manage / Improve cycle]
Measure: measure your IT performance through best-practice KPIs and performance data.
Manage: manage the ongoing process and present decision-making information to relevant stakeholders in your organization.
Just like other business departments, IT has to continuously improve and ensure alignment with the business. Ultimately, the only way for IT management to demonstrate value and control is by defining, measuring, and managing IT performance. A great idea, but it often gets stuck on not being able to successfully measure IT performance and not being able to bring everything together into a view that allows IT management to take informed IT decisions.
Business strategy drives IT strategy, and results in defining, managing, and optimizing IT processes and activities. This needs to be balanced against the availability of quality data and processes that can provide information from the bottom up.
[Diagram: Business Strategy drives IT Strategy, which drives IT Processes, which drive IT Activities; IT Performance Analysis feeds information back from the bottom up]
[Diagram: Business Strategy (Business Leadership) drives IT Strategy (IT management, IT departments); the IT Balanced Scorecard links the two]
[Diagram: the IT Balanced Scorecard information chain - Operational Data -> Measures -> KPIs -> Scorecards -> Dashboards -> Decisions!]
ODS (operational data source) examples:
- Service Desk tools, e.g. FrontRange HEAT, HP Service Manager / ServiceCenter, CA Unicenter ServicePlus Service Desk, Numara FootPrints / Track-It!
- Manual and custom sources, e.g. in-house applications and open-source applications
Data Management
Business objectives change, which has an impact on the requirements for data. Keep in mind that this process is dynamic and requires continual management.
[Diagram: the IT strategy and IT processes/activities drive changing data requirements; data from the processes is integrated into a data warehouse, which in turn provides insight]
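As a rough sketch of what this continual data-management loop looks like in practice (all source names, fields, and consolidation logic below are hypothetical illustrations, not part of the workshop tooling), operational extracts are consolidated into one warehouse-style table that the scorecard layer can query:

```python
# Minimal sketch: consolidating extracts from two hypothetical operational
# data sources into a single warehouse-style table for reporting.
service_desk_extract = [
    {"ticket": "INC-1", "opened": "2009-03-01", "priority": "Critical"},
]
monitoring_extract = [
    {"check": "db-availability", "date": "2009-03-01", "value": 99.9},
]

warehouse = []  # one integrated fact table, tagged by source
for row in service_desk_extract:
    warehouse.append({"source": "service_desk", "date": row["opened"], **row})
for row in monitoring_extract:
    warehouse.append({"source": "monitoring", **row})

# As business objectives change, new sources and fields are added here;
# the integration layer, not the scorecard, absorbs the change.
print(len(warehouse), "rows loaded")
```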
Measures
Measures are distinct sets of data derived from mathematical calculations.
Measures are quantifiable, for example, size, volume, or percentage, and involve aggregation of data elements, for example, sum, average, min, max, or count. In and of themselves, measures may or may not be meaningful. However, measures represent building blocks for the metrics required to make business decisions. Examples:
Measure | Calculation:
- Volume of incidents | Sum(Incidents)
- Volume of incidents resolved within target resolution time | Sum(Incidents Resolved) where Met Target = Y
- % Database Availability | Average(Database Availability)
- Number of IT resources trained | Sum(IT Resources) where Trained = Y
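To make the measure/calculation pairing concrete, here is a minimal sketch in Python (the incident records and field names are hypothetical, not part of the workshop material) showing that each measure is a simple aggregation over operational data:

```python
# Minimal sketch: measures as aggregations over operational data.
# The records and field names below are hypothetical.
incidents = [
    {"id": 1, "met_target": True,  "priority": "Critical"},
    {"id": 2, "met_target": False, "priority": "High"},
    {"id": 3, "met_target": True,  "priority": "Critical"},
]
availability_samples = [99.9, 99.5, 100.0]  # daily % database availability

volume_of_incidents = len(incidents)              # Sum(Incidents)
resolved_within_target = sum(                     # Sum(Incidents Resolved) where Met Target = Y
    1 for i in incidents if i["met_target"]
)
pct_db_availability = sum(availability_samples) / len(availability_samples)  # Average(...)

print(volume_of_incidents, resolved_within_target, round(pct_db_availability, 2))
```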
Metrics
Metrics consist of one or more measures combined with a mathematical calculation and a standard presentation (format) for the output.
Example: the metric [% Incidents Resolved Within Target Resolution Time] divides the measure [Volume of Incidents Resolved Within Target Resolution Time] by the measure [Volume of Incidents]; the division facilitates comparison. The metric is sliced by dimensions: a time dimension (e.g. daily) and a functional categorization (e.g. Priority = Critical).
Metrics are associated with two dimensions: a time dimension and a functional categorization dimension. Metrics are used in the quantitative and periodic assessment of a process that is to be measured. Metrics should be associated with targets that are set based on specific business objectives. Metrics are associated with procedures to determine the measures required and procedures for the interpretation of metric results.
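As an illustration of the definition above, the sketch below (hypothetical data and field names) computes the metric from the example by dividing two measures, sliced by the two dimensions just mentioned: a time dimension (daily) and a functional categorization (Priority = Critical):

```python
from collections import defaultdict

# Hypothetical incident records: (date, priority, met_target)
incidents = [
    ("2009-03-01", "Critical", True),
    ("2009-03-01", "Critical", False),
    ("2009-03-01", "High",     True),
    ("2009-03-02", "Critical", True),
]

resolved_within_target = defaultdict(int)  # measure 1, per day
volume = defaultdict(int)                  # measure 2, per day

for date, priority, met_target in incidents:
    if priority != "Critical":             # functional categorization dimension
        continue
    volume[date] += 1                      # Volume of Incidents
    if met_target:
        resolved_within_target[date] += 1  # Volume of Incidents Resolved Within Target

# Dividing the two measures facilitates comparison across days.
for date in sorted(volume):
    metric = 100.0 * resolved_within_target[date] / volume[date]
    print(date, f"{metric:.1f}%")
```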
Impact related: Key Performance Indicator (KPI), Critical Success Factor (CSF), Outcome Measure, Performance Indicator. Operational related: IT Service Level Agreement (SLA), IT Operational Level Agreement (OLA), and Service Level Objective (SLO). The term 'metric' is often used interchangeably with 'target', 'benchmark', and 'goal'.
[Diagram: terms surrounding the IT Performance Metric - CSF, KPI, OLA, SLA, SLO]
Clear definitions are required in order to facilitate the necessary communications, and to set appropriate expectations, with users.
Metric Presentation
Concept | Description:
- Dashboard: a graphical display of the status of a selected set of key metrics.
- Scorecard: the consolidated tabular and graphical display of sets of metrics related to particular business functions.
- Visualization: reporting solutions that show relationships between selected metrics and assist in performing impact analysis.
- Interactive Data Analysis: online interactive analysis of data providing slice-and-dice and drill-down/drill-through capabilities, known as OLAP (Online Analytical Processing).
- Scheduled Reports: individual reports related to specific data requirements.
- Monitoring: reports available directly from the operational systems used to support processes.
Metric Presentation
In theory, various users in the organization have access to information that is timely and relevant only to them.
[Diagram: presentation concepts (dashboard, scorecard, visualization, interactive data analysis, scheduled reports, monitoring) mapped to executive managers, functional managers, and process managers]
Metric Presentation
In practice, many IT managers want access to very detailed reports, and scorecards or interactive data reports are only sporadically used.
[Diagram: the same mapping of presentation concepts to executive, functional, and process managers]
Dashboards
A dashboard is a visual display of the most important information needed to achieve one or more objectives, consolidated and arranged on a single screen so the information can be monitored at a glance. (Intelligent Enterprise, 2004)
Highly visual, with data aggregated from various sources. Provides an at-a-glance view of the current state of the organization, its processes, or its activities. Typically used by CIOs, IT management, and business users.
Green IT dashboard
Scorecards
Scorecards focus on the consolidated presentation of metrics and present an accurate view of the here and now compared with predefined goals.
Service Desk Scorecard
Graphical representation of trends and alignment with defined targets is provided. They are supported by static and interactive reports, as well as diagrammatic representations of metric performance and linkages between metrics. Categorization of metrics within a scorecard is typically related to functional or process views.
Session 4
Goal-Question-Metric (GQM). Author: Victor Basili, early 1980s (NASA Goddard Space Flight Center).
Measurement object: software projects. Application: GQM is one of the most well-known and most widely used measurement approaches for establishing a measurement program. For example: goal - improve incident resolution; question - how quickly are incidents resolved?; metric - mean time to resolve (MTTR).
Process Effectiveness:
- Incident: reduce MTTR; reduce MTBF
- Problem: decrease days to root cause determination; decrease days to implement permanent fixes
- Change: decrease problems due to change; decrease issues during change implementation
Process Governance:
- Complete: are mandatory fields filled out?
- Detailed: do text fields contain an appropriate level of detail?
[Diagram: business strategy drives IT strategy]
Method 3) COBIT
Aligning business goals with IT goals using COBIT; example for process DS5 (Ensure Systems Security).
Assignment
We are going to use the following model from COBIT to define a process-based KPI scorecard:
[Diagram: Business Goals -> IT Goals -> IT Process Goals -> IT KPIs, spanning IT management and IT departments; Step 4: KPIs; Step 5: Scorecard]
[Worksheet: Business goal -> IT goals -> Processes]
Case description
At the Coffee Company, coffee is not merely a bean or a beverage. It is the furthest thing from a product or a commodity. Coffee involves relationships and responsibilities. It is a process involving high standards and tough decisions. More than anything, coffee has the connective force to enrich people's lives.
The Coffee Company is a gourmet coffee company, serving high-quality coffee blends and a specialty selection of coffee by the cup. Espresso and related food and beverage products complement the offerings. Along with hot and cold liquid servings, the Coffee Company offers gourmet beans in retail bags through a retail channel as well as an online channel. All offerings are provided in an atmosphere that whispers of the warmth and convenience of your living room.
From 1991 to date, the Coffee Company has experienced significant growth in size, revenue, and number of stores. From only one store in downtown Toronto, the company has grown to 12,000 retail stores and 1 online store. The current economic situation has made the Coffee Company shift its focus from expansion to consolidating business with its existing stores, ensuring the uninterrupted availability of its coffee products and services.
[Diagram: supply chain - Roasting -> Distribution -> Retail stores / Online store -> Customers]
Roasting: Un-roasted 'green' beans are shipped to Rotterdam in the Netherlands and Vancouver, Canada, from locations as close as Hawaii and as far away as Indonesia. Varietals are then roasted or mixed with other beans to create blends, such as our popular VOC or Da Vinci blends.
Distribution: After roasting, coffee is packaged and sent to one of our five regional distribution centers in Vancouver, Rotterdam, Singapore, London, and Houston. The distribution centers supply our retail stores and manage the distribution for regional internet sales.
Retail stores and online store: Our outlets are company-owned and franchise locations worldwide. The distribution centers supply our stores twice a week or more, depending on demand. Each retailer is strongly connected to the community as well, to ensure delivery of fresh products such as breads, bagels, etc.
'I am the incident manager, and I am proud that my first level support team closes incidents within targets.'
'Our CMDB is not very easy to use, and as a result it's not always kept up to date.'
'A change weekend is always exciting for the IT organization. Will any store have problems with their online systems, or will everything go well? In case of issues, the challenge is to find out what went wrong and how to revert. Testing plans are not maintained centrally, and there is no review of implemented changes.'
'Change management is a rush job, and our change manager agrees to all requests from the business, as they feel everything is important.'
'It seems that we keep reopening incidents because our solutions don't seem to work. What's going on? We need a knowledge base...'
Session 5
2b. 'We don't know if it's possible, but that's not our problem; we need the sale.'
CIO/Executive Management
Consulting Middlemen
There is a gap between theory and practice, and there are not that many metric practitioners out there. What looks and sounds nice is often not practical or possible. Don't let yourself be put in a situation with middlemen: work directly with the users of KPIs to ensure that what is requested is actually feasible.
The cost of collecting data for a metric should weigh up against the insight that you retrieve from the metric. The cost categories are data collection, business intelligence, and report development. Costs are predominantly quantitative, whilst benefits are both qualitative and quantitative.
IT metrics should be developed and presented with the same rigor as financial accounting metrics. Consider: integrity of data, consistency with prior periods' reporting, and materiality (the value of the data must exceed the cost of reporting).
[Chart: quality of IT metrics, low to high]
Focus on consolidating the data required to support metrics and scorecards. Don't assume one approach fits all for data integration.
[Diagram: data sources ranging from unstructured to structured - email, Word, Excel, PowerPoint, face-to-face, telephone, meetings, memos/letters, help desk, monitoring systems]
Build a 'data resource network', including developers, support and operations (DBAs), and application SMEs.
KPIs are put in place to be used for managing, and too many KPIs make managing to them difficult. For every KPI, processes need to be in place to collect the data and to do something with the results. Start small, keep it simple, and build upon achieved successes. Recognize that metrics will change over time, based on the changing value required from the metrics and the growing maturity of the organization.
Audience | Rationale:
- Executive / senior management: Implementing metrics and scorecard solutions is a strategic initiative that requires C-level sponsorship (CIO, CEO). This is generally recognized as the #1 critical success factor for incorporating any performance management initiative or implementing business intelligence software.
- Key people: The people that really matter to a process must always be part of it. This is often a group so small that you can call the members out by name and count them on one hand. Key people need to be identified for requirements gathering, ownership of metrics and reports, assistance with developing metric/scorecard solutions, assistance in architecting and configuring the required infrastructure, and support for data collection and metrics presentation.
- End users: Performance management software solutions can provide guidance and templates for scorecards and reports for specific users in your organization. Ask users what they need! Collect key information for users, map metrics, scorecards, and reports to users, and develop an authorization and application security model. Keep it simple! Start on the premise of allowing users to see all information and selectively revoke rights where appropriate.
Metric Requirements - SMART
Now that we have identified the processes that are of key interest and important for meeting the objectives of the business, the next step is the identification of specific KPIs for the selected processes. First, some guidelines around metrics. Metrics that are used need to be SMART:
- Specific
- Measurable
- Action oriented
- Realistic
- Time bound
Metric Requirements - SMART: Specific
The definition of an IT Performance Metric needs to be structured in alignment with defined attributes and valid measures.
This helps avoid the 100,000 feet syndrome of documenting wish lists of what sound like nice metrics to have, but in reality are not meaningful, not possible, too costly, and so on.
Good Example
[% Calls Abandoned]. A fundamental Call Center/Service Desk metric, this divides the number of calls abandoned by the number of calls offered. Both measures are readily available, and attributes such as time dimension, categorization fields, targets, user base, and so on are straightforward to obtain. (To be discussed more in the section IT Performance Metrics Attributes.)
Poor Example
Time used to resolve unavailable services. What services are included? What is the definition of unavailable? How is unavailable time equated with effort to resolve? Is it linked through incidents? If so, are the processes in place to record time?
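A minimal sketch of the good example above, with hypothetical counts (in practice both measures come from the telephony system):

```python
# % Calls Abandoned: calls abandoned divided by calls offered.
# Both measures are readily available; the values below are hypothetical.
calls_offered = 1200
calls_abandoned = 84

pct_calls_abandoned = 100.0 * calls_abandoned / calls_offered
print(f"% Calls Abandoned: {pct_calls_abandoned:.1f}%")  # 7.0%
```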
Metric Requirements - SMART: Measurable
IT Performance Metrics need to be quantitative, allowing users to actually measure progress and performance against the IT Performance Metric.
The provision of an IT KPI should prompt users to ask questions. Supporting data needs to be directly available, and/or mechanisms need to be available to obtain the supporting data. Typically, quantitative metrics are supportable, while qualitative metrics are hard to support.
Good Example: [SRs Open] is a valuable IT service metric, but it needs to be supported by details of the open service requests.
Poor Example: % Projects with Predefined Benefit is often based on after-the-fact guesstimates as to whether predefined benefits were achieved. For the metric to be of value, information on the predefined project benefits needs to be stored in conjunction with baseline project details.
Metric Requirements - SMART: Action oriented
An IT Performance Metric should contain information that can be directly acted upon.
This could involve questions being asked as to why a metric is a specific value, providing information on achieving defined levels of service, or automatically instigating remediation action within a particular IT service process.
Good Example: [% Service Requests Resolved On Initial Contact] is often a metric contained within a Service Level Agreement (SLA). Failing to meet a defined contractual target, say 85%, will require action to be taken in order to avoid possible penalties.
Poor Example: [Incidents Created]. While this is often interesting to look at, particularly across a period of time, and it is certainly easy to measure, it's hard to make an educated decision based solely on the results in isolation. This is actually a measure that needs to be categorized with IT performance reference data, such as service or classification, in order to be of value.
Metric Requirements - SMART: Realistic
An IT Performance Metric should be realistic from a data perspective; that is, the data associated with the underlying measures in the calculation of the metric needs to be available.
An IT Performance Metric also has to be justifiable from its initial and ongoing costs: the effort of collecting the metric should logically be lower than the value derived from decisions related to the metric.
Good Example: [% Incidents Escalated]. The volume of incidents, and an indicator of whether an incident was escalated, should be readily available from the operational data source associated with Incident Management.
Poor Example: % of Problems Recorded and Tracked. It is difficult to detect that a problem is not recorded or tracked unless it actually has been recorded and tracked!
Metric Requirements - SMART: Time bound
An IT Performance Metric should be collected at regular time intervals; that is, it should have an associated time dimension.
This may be hourly, weekly, monthly, quarterly, and so on. Metrics without a time dimension should be referred to as milestones.
Good Example: [% Incidents Caused by Changes] is a metric that should be analyzed over time to ensure a downward trend. Data should be available every day.
Milestones: % of Data Elements Contained within Enterprise Data Model. The creation of an Enterprise Data Model is a largely investigative activity aimed at understanding what data elements exist. Consequently, the denominator for this metric, the number of data elements, cannot be determined until an Enterprise Data Model actually exists. As a result, this should be an IT Performance Milestone: Enterprise Data Model Complete.
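Since the time dimension is what enables trend analysis, here is a small sketch (the monthly values are hypothetical) that checks the good example above for the desired downward trend:

```python
# Hypothetical monthly values for '% Incidents Caused by Changes'.
monthly = {"2009-01": 12.0, "2009-02": 10.5, "2009-03": 9.8}

values = [monthly[m] for m in sorted(monthly)]
downward = all(later <= earlier for earlier, later in zip(values, values[1:]))
print("Downward trend" if downward else "Investigate: trend is not downward")
```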
Assignment
Based on the identified processes, we are going to look at and select a number of IT KPIs and create a scorecard:
[Diagram: Business Goals -> IT Goals -> IT Process Goals -> IT KPIs, spanning IT management and IT departments; Step 4: KPIs]
Assignment
Step 1: Identify the most relevant KPIs for Change Management and for the Service Desk / Incident Management.
Step 2: Write down which KPIs are most relevant for your IT KPI-based scorecard, and why.
Worksheet (per process): Process name | Metric | Importance (1-10) | Rationale (why)
Backup and restore metrics:
- % Databases Backup/Restore Verification: the % of databases that are scheduled for backup for which backup testing and verification of restoration has been performed.
- % Databases Scheduled for Backup: the % of databases that require backup that have successfully been scheduled for backup.
- % Server Backup Failures: the % of total server backups that failed during a selected time period. Failure is defined by the non-completion of a backup; this includes both scheduled backups that did not start and backups that failed during the backup process.
- % Server Restoration Failures: the % of server backup restoration attempts that failed during a selected time period.
- The % of total servers that have had a full backup and restoration tested and validated.
- The % of servers in the IT infrastructure that are scheduled for regular backups. The nature of backups will vary across servers based on the software and applications hosted on a server; this metric looks at all servers and provides an enterprise IT perspective.
- Average training days per IT resource: the average number of training days for IT resources over the previous 12 months.
Project metrics:
- Projects Completed on Time: the number of completed projects that were completed within the target end date.
- Projects Completed Within Budget: the number of completed projects where the total actual costs were less than the total budgeted costs.
- Projects Completed on Time and Within Budget: the number of projects that were completed by the target completion date and had total actual costs less than budgeted costs.
- Projects Open with Milestones Missed: the number of currently open projects where one or more milestones related to the project have not been met.
- Projects Scheduled: the number of projects at a given point in time that are scheduled to be started in the future. Scheduled projects are those that have been created and subsequently approved, but not yet commenced.
- Projects Open: the number of projects that are currently open; that is, implementation or development has started but has not been completed.
- Projects Completed: the number of projects marked as completed during a specified timeframe. 'Completed' means that the intended functionality to be provided by the project has been accepted by the users of that functionality.
- Projects Championed by Business: the number of IT projects that were initiated by the business and that have funding and full support from the business.
- Projects Completed with Documentation and Testing Plans: a measure of the number of projects that have appropriate documentation associated with implementation tasks, such as testing, and post-implementation tasks, such as support and training.
- Projects Created: the number of new projects created. Projects should not be created if they are not directly related to specific IT or business objectives.
- % Projects Related to Business: the % of projects related to specific business objectives; the combination of [% Projects Related to IT] and [% Projects Related to Business] should be 100%.
- Projects - Predefined Benefit: the number of projects created that have specified defined tangible benefits to the business or IT if successfully implemented.
Quality metrics:
- % IT Processes Reviewed by QA: the % of IT processes that have been reviewed by conducting a formal QA review.
- % IT Resources - Quality Training: the % of IT resources who have received IT quality-related training within the previous 12 months.
- % Projects Meeting QA Objectives: the % of IT projects that meet the QA objectives that were defined.
- % Projects Receiving QA Review: the % of IT projects that have been reviewed by conducting a formal QA review.
- % Stakeholders Understanding IT Policy: the % of stakeholders that understand the IT policy.
Training and documentation metrics:
- % Incidents Resulting from Inadequate Documentation: the percentage of incidents that can be attributed to inadequacies in the documentation made available with new or changed applications.
- % SRs Resulting from Inadequate Training: the % of SRs created due to the requestor not having sufficient training in functionality related to an IT service, e.g. configuration of new email accounts in Outlook, or accessing functionality within an ERP system or on the corporate portal.
- % SRs Resulting from Lack of Documentation: the % of service requests created as a result of a lack of documentation related to an IT service. For example, a user calls the service desk because they cannot find information on the corporate portal on how to connect to the instant messaging system.
- % Training Attendance New Applications: the % of stakeholders that attended training for a new application after its release.
- Average Change Documentation Update Time: the average time it takes for documentation to be updated after a change.
- Customer Satisfaction Training/Documentation: the average customer satisfaction score for IT training/documentation surveys sent in a selected period. This is representative of the overall satisfaction and confidence that the business has in IT's ability to deliver the training and IT-related documentation required by the business to use IT services. A good range of values for measurement is 0 to 10.
- Incidents Resulting from Inadequate Documentation: the volume of incidents that can be attributed to inadequacies in the documentation made available with new or changed applications.
- SRs Resulting from Inadequate Training: the volume of service requests that can be attributed to inadequacies in the training that was presented to stakeholders after release of the new application or after a change made to the application.
Change management metrics:
- Changes Created: the number of requests for change (RFCs) created. In IT service support, a change is the addition, modification, or removal of approved, supported, or baselined hardware or software components. This can include network, application, environment, and system components, or other IT components, including documentation. All changes should relate to a configuration item.
- % Changes Implemented within Target - Critical: the % of critical priority changes implemented within an agreed target time.
- % Changes Implemented within Target - High: the % of high priority changes implemented within an agreed target time.
- % Changes Implemented within Target - Medium: the % of medium priority changes implemented within an agreed target time.
- Average Change Implementation Time: the average length of time required to implement requests for change.
- Average Cost per Change: the average cost per change.
- % Changes Failed: the % of changes that failed during the implementation phase of Change Management.
- % Emergency Changes: the % of implemented changes that were classified as emergency changes. Emergency changes are those which require circumvention of routine change management processes due to the urgency of business requirements and changes to the IT infrastructure. A comprehensive Change Management process will include a process for handling emergency changes.
- % High Risk Changes: the % of changes created where the risk of incidents occurring within the IT infrastructure is high. A comprehensive Change Management process will include a process for handling high-risk changes.
- % Changes Rejected: the % of change requests that are analyzed by IT and subsequently rejected.
- % Changes Due to CMDB Issues: the % of changes that were made as a result of incorrect information provided by the CMDB.
- % Changes Post-Implementation Feedback: the % of changes that received feedback after completion of the implementation.
- % Changes Post-Implementation Review: the % of changes that were reviewed post implementation.
- % Changes Process Compliance: the % of changes that correctly followed the defined change management process.
- % Changes Audited: the % of changes that were audited post implementation.
- % Changes Causing Incidents: the % of changes that caused incidents.
- % Changes Causing Problems: the % of changes that caused problems.
- % Changes Implemented Without a Back-out Plan: the % of changes that were implemented without a defined back-out plan.
- % Changes Not Formally Tested: the % of implemented changes that did not go through the formal testing phase incorporated within the Change Management process.
- % Changes Requiring Scheduled Outages: the % of changes that require a service outage to be scheduled in order for the implementation of the change to occur.
- % Changes Specified Inaccurately: the % of changes that were not specified correctly.
- % Changes Closed with Incorrect Data: the % of changes that were closed but contained one or more incorrect data components, e.g. wrong categorization fields entered, incorrect history, timestamps entered incorrectly, missing solution or closure description, etc.
- % Changes Without Formal Sign-off: the % of changes that were incorporated without formal sign-off.
Service desk and incident metrics:
- % Calls Abandoned
- Calls Answered
- Avg Daily Incidents Handled per Service Desk Agent: relates to the calls that service desks can handle; an important metric for the planning of service desk resources.
- % Incidents Reopened
- % Incidents Dispatched to Level 3
- % Incidents Resolved by Workaround
- % Incidents Dispatched
- % Incidents Void
- Incidents - Dispatch to Own Duration: this metric indicates how responsive IT support staff are at accepting ownership of incidents. Increasing trends can indicate issues with the process of service request ownership. Hourly or daily spikes can indicate staff rostering issues. Over time, the [Incidents - Dispatch to Own Duration] should decrease, but increase relative to the [Incidents - Create to Dispatch] duration. This will result in the [Incidents - Create to Own] duration decreasing.
- % Incidents Owned within Target - Critical: prompt ownership of critical priority incidents indicates efficient service support processes and increases the probability that incidents will be resolved within the target time. Ownership of critical priority incidents within target has increased importance due to the customer's visibility into processes within IT; because of the potential business impact of critical priority service requests, the customer will have high expectations regarding ownership and subsequent resolution.
- Incidents Created: the volume of incidents that are created.
- % Incidents Resolved by 1st Level: the % of incidents that were resolved by the first level support team.
- % Incidents Resolved within Target - Critical: the % of critical incidents resolved within the defined target time. Also referred to as 'Resolution Met' or 'TTR Met' (Time to Resolve Met). A typical target time for resolution of critical priority incidents is 2 hours.
- % Incidents Resolved within Target - High: the % of high priority incidents resolved within the defined target time. Also referred to as 'Resolution Met' or 'TTR Met' (Time to Resolve Met).
- % Incidents Owned within Target - Low: the % of low priority incidents that were owned within the defined target time. Also referred to as 'Ownership Met', 'TTO Met' (Time to Own Met), or 'Accept Met'. A typical target time for ownership of low priority incidents is 4 hours.
- % Incidents Caused by Changes: the % of incidents caused by the implementation of a change. Refer to [Changes - Created].
- % Incidents Caused by CMDB Issues: the % of incidents that were caused by data errors within the configuration management database.
- % Incidents Linked to Testing Errors: the % of incidents that can be linked to errors made during testing.
- % Incidents Auto-generated
- % Incidents Resulting from Inadequate Documentation
- % Incidents Resolved from Known Errors: the % of incidents that were resolved by access to information available regarding known errors. This information is typically stored in a 'known error' or knowledge management database; the source of the information should be the Problem Management processes where known errors are identified.
Problem management metrics:
- Problems Created
- % Proactive Problems
- Problems Open
- % Problems Completed within Target - Critical
- % Problems Root Cause Identified
- % Problems RFC Created
Session 6
Trending: trending information to analyze KPIs in detail. Settings: manage the KPI definition and set the target, tolerance, and data criteria.
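The target and tolerance settings mentioned here are what turn a raw KPI value into a red/amber/green status on a dashboard. A minimal sketch of that evaluation (the thresholds, values, and logic are hypothetical illustrations, not Metricus behavior):

```python
def kpi_status(value: float, target: float, tolerance: float) -> str:
    """Green if the target is met, amber if within tolerance, red otherwise.

    Assumes a 'higher is better' KPI; hypothetical logic, not Metricus behavior.
    """
    if value >= target:
        return "green"
    if value >= target - tolerance:
        return "amber"
    return "red"

# e.g. % incidents resolved within target - critical: target 95%, tolerance 5%
print(kpi_status(96.0, target=95.0, tolerance=5.0))  # green
print(kpi_status(91.0, target=95.0, tolerance=5.0))  # amber
print(kpi_status(80.0, target=95.0, tolerance=5.0))  # red
```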
Drill down into the details: benchmark different service desk sites across the world and drill down into the details of the available data.
Add your own KPIs: simply add your own KPIs, or modify the KPIs in the module, and publish them to a dashboard.
Analyze to the greatest extent possible: dynamic charts allow you to analyze data and create the charts that are most relevant to you.
Comprehensive dashboards: a typical Metricus dashboard template provides a comprehensive picture of your current IT performance along with historical information. These two elements together allow you to take adequate decisions for the future.
Manage KPI categories: KPIs are available for all areas in the IT balanced scorecard. We categorized them according to customer feedback, but you can create your own sub-categories.
ITIL Service Lifecycle. © Crown Copyright 2007. Reproduced under license from OGC.
The 7-Step Improvement Process. © Crown Copyright 2007. Reproduced under license from OGC.
Strengths
A process is not implemented until it is measured. Justification for an ITIL project requires a measured baseline and an improvement target. Without metrics, an ITIL project will soon lose steam and eventually fail.
Weaknesses
The ITIL publications are limited in their specification of the KPIs to be used. There is not a lot of consistency in the definition of KPIs, and in the information on KPI utilization, between the various ITIL domains. Continual Service Improvement is intended to be used throughout all phases of the service lifecycle; in reality, however, it is only considered late in most implementation projects.
ITIL Service Lifecycle. © Crown Copyright 2007. Reproduced under license from OGC.
ISO/IEC 20000 promotes the adoption of an integrated process approach to effectively deliver managed services to meet business and customer requirements.
Adopting ISO/IEC 20000 formalizes the measurement component of IT processes for organizations, because they have to demonstrate and attest to control over IT processes. This includes requirements such as 'a process should be in place to identify, measure, report, and manage improvement activities' and 'reports shall be produced to meet customer needs, including trend information, satisfaction analysis, and so on'.
Control processes: Configuration Management, Change Management
Release processes: Release Management
Resolution processes: Incident Management, Problem Management
Relationship processes: Business Relationship Management, Supplier Management
Strengths
ISO/IEC 20000 formalizes the need for well-defined and thought-through process adoption. There are many documentation requirements in ISO/IEC 20000 and, as a result, the chances of finding the data required for effective IT Performance Management are high. A specific process called 'service reporting' dictates the reporting requirements.
Weaknesses
There are no KPIs in ISO/IEC 20000. Even though ISO/IEC 20000 formalizes many of the operational IT processes, it does not provide guidance on good KPI selection and adoption.
Thank you
Contact details:
Arjan Woertman Arjan.woertman@itpreneurs.com +31 (0)10 71 10 260