Agenda
- Disk Storage System Selection & Specs
- Application I/O & Workload Characteristics
- Hard Disk Drive (HDD) Basics - It's All Mechanics
- HDD Performance & Capacity Aspects (SATA vs. FC)
- RAID Level Considerations (RAID-5 / RAID-6 / RAID-10)
2010-09-13
IBM Power Systems and Storage Symposium, Wiesbaden, Germany May 10-12, 2010
Disk storage portfolio:
- DS3000: entry-level
- DS5000: midrange
- DS6000 / DS8000, XIV: enterprise
Subsystem performance:
Note: Results as of 6-26-2006. Source of information is Engenio and not confirmed by IBM. Performance results were achieved under ideal circumstances in a benchmark test environment; actual customer results will vary with configuration and infrastructure components. The number of drives used for MB/s performance does not reflect an optimized test configuration; the number of drives required could be lower or higher.
Drives were short-stroked to optimize for IOPS performance; real-life configurations may need more drives to achieve the numbers listed.
http://www.storageperformance.org
Application I/O performance: efficient memory usage is key! Access to memory is more than 10,000 times faster than access to disk.
[Figure: the I/O path from application to storage. The software layers (application, file system, volume manager, device drivers) run on the server against fast memory; below the SCSI/SAN hardware boundary, the storage subsystem cache sits in front of the disks. CPU and memory are fast; the disk access at the end of the path is slow.]
[Figure: I/O characteristics of transaction processing versus batch jobs.]
Average I/O request size (average I/O transfer size or block size), e.g. 8 kB for an Oracle DB, 64 kB or larger for streaming applications, 256 kB for TSM.
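The I/O rate, the average request size and the data rate are tied together by a simple product, which is worth keeping in mind when characterizing a workload. A minimal sketch (the IOPS figures below are illustrative inputs, not values from the text; the block sizes are the examples given above):

```python
# Data rate [MB/s] = I/O rate [IOPS] * average I/O request size.
# Block sizes from the text: Oracle DB 8 kB, TSM 256 kB.

def data_rate_mb_s(iops, request_size_kb):
    """MB/s delivered by a workload of `iops` requests of `request_size_kb` each."""
    return iops * request_size_kb / 1024.0

print(data_rate_mb_s(10000, 8))   # OLTP-like: 10000 IOPS * 8 kB  -> 78.125 MB/s
print(data_rate_mb_s(400, 256))   # TSM-like:    400 IOPS * 256 kB -> 100.0 MB/s
```

The same small-data-rate workload can therefore be either IOPS-bound or bandwidth-bound depending on the request size.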
The service time of a random disk I/O consists of:
- Average seek time [ms]: head movement to the required track
- Rotational latency [ms]: platter spin until the first addressed sector passes under the r/w heads; on average half a rotation
- Transfer time [ms]: reading/writing the data sectors (1 sector = 512 bytes)
Transfer time [ms] = 1000 x (number of sectors x sector size) / avg. transfer rate (typically << 1 ms for small I/O request sizes < 16 kB)
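A quick check of the transfer-time formula for a small request, assuming a sustained transfer rate of ~120 MB/s (an assumed drive spec, in the range quoted later for 2010 enterprise drives):

```python
# Transfer time [ms] = 1000 * I/O size [MB] / sustained transfer rate [MB/s].

def transfer_time_ms(io_size_kb, transfer_rate_mb_s):
    """Time to move one request of io_size_kb at the given sustained rate."""
    return 1000.0 * (io_size_kb / 1024.0) / transfer_rate_mb_s

# 8 kB request at 120 MB/s: ~0.065 ms, i.e. well below 1 ms,
# so seek + rotational latency dominate small random I/Os.
print(round(transfer_time_ms(8, 120), 3))
```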
This is just an example to give a view of typical disk drive characteristics. The disk types chosen above do not necessarily represent the characteristics of the disk drive modules used in IBM System Storage systems. Source: www.seagate.com (2008)
Rules of thumb for random IOPS per HDD (conservative estimates to start with):
- FC 15k DDM:
- FC 10k DDM:
- SATA2 7.2k DDM:
A single disk drive can only process a limited number of I/O operations per second!
Techniques to improve random I/O performance:
- Disk drive: command queuing and re-ordering of I/Os
  - SATA: NCQ (Native Command Queuing), seek latency optimization
  - SCSI: TCQ (Tagged Command Queuing)
- Disk drive usage: 'short stroking' of HDDs
- Disk subsystem: subsystem cache
  - Caching / cache hits
  - Intelligent cache page replacement & prefetching algorithms
    - Standard: LRU (least recently used) / LFU (least frequently used)
    - IBM System Storage DS8000 advanced caching algorithms: 2004 ARC (Adaptive Replacement Cache), 2007 AMP (Adaptive Multi-stream Prefetching), 2009 IWC (Intelligent Write Caching)
IBM Almaden Research Center - Storage Systems Caching Technologies http://www.almaden.ibm.com/storagesystems/projects/arc/technologies/
Tagged Command Queuing (TCQ, SCSI-2) and Native Command Queuing (NCQ, SATA2) further improve disk drive random access performance by re-ordering the queued I/O commands, so that workloads experience seek times considerably below the nominal seek times. Queue depth: SATA2 (NCQ): 32 in-flight commands; SCSI (TCQ): up to 2^64 in-flight commands.
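The benefit of re-ordering can be illustrated with a toy model: servicing queued requests in positional (elevator) order instead of arrival order shortens the total head travel. The track numbers below are arbitrary example values, not drive data:

```python
# Toy illustration of command re-ordering (NCQ/TCQ):
# compare total head travel in FIFO order vs. one sorted sweep.

def total_travel(start, tracks):
    """Sum of head movements when visiting `tracks` in the given order."""
    travel, pos = 0, start
    for t in tracks:
        travel += abs(t - pos)
        pos = t
    return travel

queue = [500, 12, 480, 30, 470]            # arrival (FIFO) order
fifo = total_travel(0, queue)
elevator = total_travel(0, sorted(queue))  # one sweep across the platter
print(fifo, elevator)  # 2346 vs. 500 track positions travelled
```

Real drives re-order with knowledge of rotational position as well, but the effect is the same: effective seek time drops well below the nominal average.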
Disk subsystem cache:
- Read cache hits
- Write cache hits / write-behind
- Sequential prefetch algorithms
- Intelligent cache page replacement & prefetch algorithms:
  - What data should be stored in cache, based on the recent access and frequency needs of the hosts (LRU/LFU)?
  - Determine what data can be removed from cache to accommodate newer data.
  - Predictive algorithms that anticipate data prior to a host request and load it into cache.
Disk Drive                                   FC 146GB 15k   FC 146GB 10k   SATA2 500GB 7.2k
Rotational speed                             15000 rpm      10000 rpm      7200 rpm
Avg. rotational latency (half rotation)      2 ms           3 ms           4.2 ms
Avg. seek time                               4 ms           5 ms           9 ms
Effective seek with queuing (~ seek/3)       4/3 ms         5/3 ms         9/3 ms
Random IOPS ~ 1000 / (latency + seek/3)      300            214            138
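The per-drive IOPS figures (300 / 214 / 138) follow directly from the mechanical delays: one random I/O costs roughly the average rotational latency plus the queuing-reduced seek time (about one third of the nominal seek), and the drive can do at most 1000 ms / (that cost) I/Os per second. A sketch reproducing the numbers:

```python
# Rule of thumb: random IOPS ~ 1000 / (avg. rotational latency + avg. seek / 3),
# where seek/3 models the seek reduction from command queuing/re-ordering.

def random_iops(latency_ms, seek_ms):
    return 1000.0 / (latency_ms + seek_ms / 3.0)

print(round(random_iops(2.0, 4.0)))   # FC 15k    -> 300
print(round(random_iops(3.0, 5.0)))   # FC 10k    -> 214
print(round(random_iops(4.2, 9.0)))   # SATA 7.2k -> ~139
```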
Even with reduced average seek times you cannot expect more than a few hundred random I/O operations per second from a single HDD: due to the mechanical delays of spinning disks, a single drive can only process a limited number of random IOPS, with average access times typically in the range of 5 to 15 ms.
Storage Disk Subsystem Typical I/O Rate & Response Time Relation
Response Time versus I/O Rate
[Figure: response time (ms) versus I/O rate: response times stay nearly flat at low I/O rates and rise steeply as the I/O rate approaches the subsystem's saturation point.]
Performance: the number and speed of disk drives (spindles) must meet the IOPS requirements; a high number of fast, low-capacity drives may be required to meet performance needs.
Capacity: determined by the number of drives and the capacity (GB) per drive.
Cost: depends on the number of drives; more spindles for performance and more capacity both drive cost higher, so performance (IOPS), capacity (GB) and cost have to be balanced against each other.
[Figure: trade-off triangle between performance (IOPS, number of drives), capacity (GB, drive capacity) and cost.]
146GB 15k drives are an excellent trade-off between performance and capacity needs.
Example (meeting a requirement of roughly 1000+ random IOPS):
- FC 146GB 15k: 7 drives deliver 1120 IOPS, 1022 GB, 105 W
- SATA 1TB 7.2k: a single drive delivers 75 IOPS, 1000 GB, 9.8 W
- SATA 1TB 7.2k: 14 drives deliver 1050 IOPS, 14000 GB (!), 137.2 W
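The FC vs. SATA figures above (1120 IOPS / 1022 GB / 105 W against 1050 IOPS / 14000 GB / 137.2 W) can be reproduced by sizing for the IOPS target. A sketch under assumed per-drive specs (~160 IOPS and 15 W per FC 146GB 15k drive, 75 IOPS and 9.8 W per SATA 1TB drive; these per-drive values are inferred, not stated in the text):

```python
import math

# Size a config for a random-IOPS target; returns
# (drive count, delivered IOPS, capacity GB, power W).
def size_for_iops(target_iops, iops_per_drive, gb_per_drive, watt_per_drive):
    n = math.ceil(target_iops / iops_per_drive)
    return n, n * iops_per_drive, n * gb_per_drive, n * watt_per_drive

print(size_for_iops(1050, 160, 146, 15.0))  # FC 15k:  7 drives, 1120 IOPS, 1022 GB
print(size_for_iops(1050, 75, 1000, 9.8))   # SATA:   14 drives, 1050 IOPS, 14 TB
```

Meeting the same IOPS target with SATA takes twice the drives, 14x the (unneeded) capacity and more power, which is the point of the slide.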
Access density [IOPS/GB] = I/O rate [IOPS] / usable capacity [GB]
Access density is a measure of I/O throughput per unit of usable storage capacity (backstore). Its primary use is to identify a range on the response time curves that gives the typical response time expected by the average customer, based on the total usable storage in their environment. The industry average for access density in 2005 is thought to be approximately 0.7 I/Os per second per GB. Year-to-year industry data is incomplete, but the value has been decreasing as companies acquire usable storage faster than they access it (a growing share of cold data).
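As a quick worked example of the definition (the workload numbers are illustrative, not from the text), a 10 TB environment serving 7000 IOPS sits exactly at the 2005 industry average:

```python
# Access density = I/O rate per unit of usable capacity [IOPS/GB].

def access_density(iops, usable_gb):
    return iops / usable_gb

print(access_density(7000, 10000))  # 0.7 IOPS/GB, the cited 2005 average
```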
[Figure: drive utilization (%) versus access density (IOPS/GB) for SATA 7.2k, FC 73GB 15k, FC 146GB 15k and FC 300GB 15k drives: the smaller and faster the drive, the higher the access density it can sustain; cold data with low access density (up to 80% of the data) is a good fit for SATA.]
Random workloads: SATA drive transaction performance is considerably below that of FC drives, and their use in environments with critical online transaction workloads and lowest response-time requirements is generally not recommended. SATA drives typically are very well suited for fixed-content, data archival, reference data and near-line applications that require large amounts of data at low cost, e.g. bandwidth/streaming applications, audio/video streaming, surveillance data, seismic data, medical imaging or secondary storage. They can also be a reasonable choice for business-critical applications in selected environments with less critical IOPS performance requirements (e.g. low access densities).
[Figure: RAID-5 write penalty: a small-block write requires 4 back-end I/Os (read old data, read old parity, write new data, write new parity); the subsystem cache can absorb part of this penalty.]
Example: a 70-30-50 workload (70% reads, 30% writes, 50% read cache hits) of 1000 host IOPS on RAID10:
700 reads x 50% cache hits = 350 disk reads
300 writes x 2 (two mirrored writes) = 600 disk writes
for a total of 950 physical IOPS on the disks at the back end.
RAID10 already outperforms RAID5 for a typical 70-30-50 workload.
Consider using RAID10 if the random write percentage is higher than 35%!
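The 70-30-50 arithmetic generalizes to any RAID level once you know its write penalty (2 back-end writes per host write for RAID10, 4 for RAID5 due to the read-modify-write of data and parity). A minimal sketch:

```python
# Back-end IOPS for a cached workload:
# read misses hit the disks, each host write costs `write_penalty` disk I/Os.

def backend_iops(host_reads, host_writes, read_cache_hit, write_penalty):
    disk_reads = host_reads * (1.0 - read_cache_hit)
    disk_writes = host_writes * write_penalty
    return disk_reads + disk_writes

# 1000 host IOPS, 70% reads, 50% read cache hits:
print(backend_iops(700, 300, 0.5, 2))  # RAID10: 350 + 600  = 950.0
print(backend_iops(700, 300, 0.5, 4))  # RAID5:  350 + 1200 = 1550.0
```

At 30% writes, RAID5 already needs over 60% more back-end IOPS than RAID10 for the same host workload, which is why the guideline above recommends RAID10 for write-heavy profiles.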
RAID6 - Overview
RAID6: dual-parity RAID
- DS8000: 5+P+Q+S or 6+P+Q arrays (using a modified EVENODD code)
- Survives 2 erasures: 2 drive failures, or 1 drive failure plus a medium error (e.g. during a rebuild, a growing concern with large-capacity drives)
- Like RAID5, parity is distributed in stripes, with the parity blocks in a different place in each stripe
- RAID6 has a higher performance penalty on write operations than RAID5 due to the additional parity calculations
Space efficiency: RAID10 50%, RAID6 (6+P+Q) 75%, RAID5 (7+P) 87.5%
[Figure: positioning of RAID6, RAID5 and RAID10.]
1956: IBM RAMAC (the first disk drive): 5 MB storage, 1200 RPM, data transfer rate 8800 characters per second.
2010: Enterprise FC hard disk drive (HDD): 600 GB storage capacity, 15000 RPM, data transfer rate 122 to 204 MB/s.
[Figure: HDD capacity has grown far faster than HDD performance over time, opening a widening performance gap.]
Absence of mechanical moving parts makes SSDs significantly more reliable than HDDs. Wear issues are overcome through over-provisioning and intelligent controller algorithms (wear leveling).
Application benefits:
- Increased performance for transactional applications with high random I/O rates (IOPS): online banking / ATM / currency trading, point-of-sale transactions/processing, real-time data mining
- Solid-state disks in the DS8000 offer a new, higher-performance option for enterprise applications, best suited for cache-unfriendly data with high access densities (IOPS/GB) requiring low response times
- Additional benefit of lower energy consumption, cooling and space requirements (data center footprint)
Single RAID5 rank, random read and sequential I/O: SSDs show exceptionally low response times for random I/O; for sequential I/O, SSDs perform roughly on par with HDDs.
Source: IBM Whitepaper, IBM System Storage DS8000 with SSDs - An In-Depth Look at SSD Performance in the DS8000, http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101466
Solid-state drive technology remains more expensive than traditional spinning disks, so the two technologies will coexist in hybrid configurations for several years. Tiered storage is an approach that utilizes different types of storage throughout the storage infrastructure: using the right mix of tier 0, 1 and 2 drives provides optimal performance at minimum cost, power, cooling and space usage. Data placement is key! To maximize the benefit of SSDs, it is important to analyze application workloads and place only data that requires high access densities (IOPS/GB) and low response times on them.
IBM System Storage DS8000 with SSDs - An In-Depth Look at SSD Performance in the DS8000 http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101466 Driving Business Value on Power Systems with Solid State Drives ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/pow03025usen/POW03025USEN.PDF
Result: many clients feel they can't afford solid-state storage yet.
Easy Tier optimizes SSD deployments by balancing performance and cost requirements: it delivers the full promise of SSD performance while containing the costs associated with over-provisioning this expensive resource.
[Figure: LUN heat map: Easy Tier places hot data on fast, expensive SSDs and cold data on slower, inexpensive drives, aiming for a mix that is 'just right'.]
[Figure: SPC-1 throughput (I/Os per second) over an 18-hour run with Easy Tier on a 16x SSD + 96x 1TB SATA configuration: as Easy Tier migrates hot extents onto SSD, throughput improves by about 3x.]
Source: Storage Performance Council, April 2010: http://www.storageperformance.org/results/benchmark_results_spc1#a00092; IBM Whitepaper, May 2010: IBM System Storage DS8700 Performance with Easy Tier, http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101675
Smart data placement with Easy Tier: SPC-1 (SATA/SSD): SSD + SATA + Easy Tier configuration vs. FC 15k HDD configuration
[Figure: SPC-1 response time (ms) versus throughput (I/Os per second) for a dual-frame 192 FC HDD configuration and a single-frame 96 SATA + 16 SSD configuration: the Easy Tier configuration improves response time in the range of ordinary use.]
Smart data placement with Easy Tier: SPC-1 Backend I/O Migration
[Figure: percentage of capacity migrated versus percentage of back-end I/O migrated: migrating only a few percent of the capacity to SSD moves the large majority of the back-end I/O.]
Workload isolation (e.g. at extent pool and array level)
- Dedicate a subset of hardware resources to a high-priority workload to reduce the impact of less important workloads ('protect the loved ones') and meet given service level agreements (SLAs).
- Limit low-priority workloads that tend to fully utilize given resources to only a subset of hardware resources, to avoid impacting other, more important workloads ('isolate the badly behaving ones').
- Provides guaranteed availability of the dedicated hardware resources, but also limits the isolated workload to only a subset of the total subsystem resources and overall subsystem performance.
Workload resource sharing
- Multiple workloads share a common set of subsystem hardware resources, such as arrays, adapters and ports.
- A single workload can now utilize more subsystem resources and achieve higher performance than with a smaller set of dedicated resources, provided the workloads do not contend with each other.
- A good approach when workload information is not available, when workloads do not try to consume all the available hardware resources, or when workload peaks occur at different times.
Workload spreading
- The most important principle of performance optimization; applies to both isolated workloads and resource-sharing workloads.
- Simply means using all available resources of the storage subsystem in a balanced manner, by spreading the workload evenly across all resources dedicated to it: arrays, controllers, disk adapters, host adapters, host ports.
- Host-level striping and multipathing software may further help to spread workloads evenly.
(a) Host System Performance Data Collection:
AIX:
# iostat -D [interval] [no. of intervals]
# filemon -o fmon.log -O lv,pv; sleep 60; trcstop
Linux:
# iostat -x [interval] [no. of intervals]
Windows:
# perfmon (GUI), then select Physical Disk counters
(b) Storage Subsystem Performance Data Collection: DS3k/DS4k/DS5k (SMcli), XIV (XCLI), DS6k/DS8k and others (TPC for Disk)
Subsystem performance statistics:
- only counters for the quantity of processed I/Os up to the current point in time
- no counters for the quality of processed I/Os such as, for example, I/O service times
- additional host system performance statistics are required for I/O response times
Always collect the performance statistics together with the latest subsystem profile, to document the actual subsystem configuration used during data collection.
For more information about how to collect and process these DS4000 performance statistics, see: How to collect performance statistics on IBM DS3000 and DS4000 subsystems (on IBM Techdocs)
- IBMers: http://w3.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD103963
- IBM BPs: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/TD103963
Verify if
IBM Tivoli Storage Productivity Center for Disk (TPC for Disk) is an optional component of TPC, designed to manage multiple SAN storage devices and to monitor the performance of SMI-S compliant storage subsystems from a single user interface.
IBM Tivoli Storage Productivity Center Standard Edition includes three components of the TPC suite as one bundle at a single price: TPC for Data, Fabric and Disk.
New customers with IBM System Storage Productivity Center (SSPC), which includes the preinstalled (but separately purchased) IBM Tivoli Storage Productivity Center Basic Edition, only need to purchase the additional TPC for Disk component to be able to collect performance statistics from their supported IBM storage subsystems.
TPC for Disk is the official IBM product for clients requiring performance monitoring of their IBM storage subsystems (e.g. DS4k, DS5k, DS6k, DS8k, SVC, ESS, 3584 Tape, ...).
TPC V4.1 introduces Tivoli Common Reporting (TCR) & BIRT (Business Intelligence Reporting Tools) for creating customized reports from the TPC database.
DS4000 and other supported SMI-S compliant storage subsystems: don't forget to export a complete set of reports for the subsystem of interest, e.g. for a DS8000:
- By Storage Subsystem: 20080131-75APNK1-subsystem.csv
- By Controller: 20080131-75APNK1-controller.csv
- By Port: 20080131-75APNK1-ports.csv
- By Array: 20080131-75APNK1-arrays.csv
- By Volume: 20080131-75APNK1-volumes.csv
Some reports may give more or less data, depending on the exact level of SMI-S compliance of the vendor-supplied CIM agents. Limit the reports to a representative time frame, as the amount of data, especially for the volume report, can be extremely large!
- Regularly collect selected data sets for historical reference and project workload trends. Evaluate trends in I/O rate and response time and plan for growth accordingly; typically response times increase with increasing I/O rates. Historical performance data is the best source for performance and capacity planning.
- Watch for any imbalance of the overall workload distribution across the subsystem resources. Avoid single resources becoming overloaded (hot spots); redistribute workload if needed.
- When end-user performance complaints arise, simply compare current and historical data and look for changes in the workload that may explain the performance impact.
- Additional performance metrics may help to better understand the workload profile behind the changes in I/O rates and response times: read:write ratio, read cache hit percentage [%], avg. read/write/overall transfer size [kB] per I/O operation.
Typical workload profiles and guideline values:
- Online transaction processing (OLTP) workloads (e.g. databases): small transfer sizes (4 kB...16 kB) with high I/O rates; low front-end response times around 5 ms are commonly expected.
- Backup, batch or sequential-like workloads: large transfer sizes (32 kB...256 kB) with low I/O rates but high data rates; front-end response times even up to 30 ms can still be acceptable.
- Subsystem-level front-end metrics (subsystem total average): overall response time < 10 ms.
- Array-level back-end metrics (physical disk access): back-end read response time < 25 ms; disk utilization percentage << 80%; I/O rate depends on RAID level, workload profile, number and speed of DDMs; an array is considered very busy with I/O rates near or above 1000 I/Os (DS8000/DS6000).
- Volume-level front-end metrics (I/O performance as experienced by the host systems): overall response time < 15 ms (depends on application requirements and workload); write-cache delay percentage < 3% (typically should be 0%).
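When scanning collected statistics against guideline values like those above, a small script is often handy. A minimal sketch, assuming metrics arrive as a dict whose field names are made up here for illustration:

```python
# Guideline limits from the text; a metric at or above its limit is flagged.
THRESHOLDS = {
    "overall_response_ms": 15.0,       # volume-level front-end
    "backend_read_response_ms": 25.0,  # array-level back-end
    "disk_utilization_pct": 80.0,
    "write_cache_delay_pct": 3.0,
}

def check(metrics):
    """Return the names of all metrics that breach their guideline value."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0.0) >= limit]

sample = {"overall_response_ms": 22.0, "backend_read_response_ms": 18.0,
          "disk_utilization_pct": 85.0, "write_cache_delay_pct": 0.0}
print(check(sample))  # ['overall_response_ms', 'disk_utilization_pct']
```

The thresholds are starting points, not hard limits; as the text notes, acceptable response times depend on the application and workload profile.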
Redbook: IBM Tivoli Storage Productivity Center V4.1 Release Guide http://www.redbooks.ibm.com/redpieces/abstracts/sg247725.html Chapter 10, Customized Reporting through Tivoli Common Reporting (TCR) / BIRT
Business, Channel & Skill Enablement & Training:
- DI Education & Briefings
- Demos & Showcases
- IT Transformation Roadmaps & Workshops
- BP Certification
Our Services: Client Briefings & Education, Systems Lab Services & Training, Customized Workshops, System Storage Demos, Advanced Technical Support, Solution Design, Proofs of Concept, Benchmarks, Product Field Engineering
Our Expertise: skilled technical storage experts covering the whole IBM System Storage portfolio; Information Infrastructure (Compliance, Availability, Retention, Security); HW/SW & Performance
Our Systems Lab Europe: 1500 sqm lab space, IBM & heterogeneous hardware
Disclaimer
Copyright 2010 by International Business Machines Corporation. No part of this document may be reproduced or transmitted in any form without written permission from IBM Corporation. Product data has been reviewed for accuracy as of the date of initial publication. Product data is subject to change without notice. This information could include technical inaccuracies or typographical errors. IBM may make improvements and/or changes in the product(s) and/or program(s) at any time without notice. Any statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business. Any reference to an IBM Program Product in this document is not intended to state or imply that only that program product may be used. Any functionally equivalent program that does not infringe IBM's intellectual property rights may be used instead. It is the user's responsibility to evaluate and verify the operation of any non-IBM product, program or service.
The performance information contained in this document was derived under specific operating and environmental conditions. The results obtained by any party implementing the products and/or services described in this document will depend on a number of factors specific to such party's operating environment and may vary significantly. IBM makes no representation that these results can be expected in any implementation of such products and/or services. Accordingly, IBM does not provide any representations, assurances, guarantees, or warranties regarding performance.
THE INFORMATION PROVIDED IN THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IBM EXPRESSLY DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NONINFRINGEMENT. IBM shall have no responsibility to update this information. IBM products are warranted according to the terms and conditions of the agreements (e.g., IBM Customer Agreement, Statement of Limited Warranty, International Program License Agreement, etc.) under which they are provided. IBM is not responsible for the performance or interoperability of any non-IBM products discussed herein.
The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights. Inquiries regarding patent or copyright licenses should be made, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY, 10504-1785, U.S.A.
Trademarks
The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both.
AS/400, e business(logo), eServer, FICON, IBM, IBM (logo), iSeries, OS/390, pSeries, RS/6000, S/30, VM/ESA, VSE/ESA, WebSphere, xSeries, z/OS, zSeries, z/VM, System i, System i5, System p, System p5, System x, System z, System z9, BladeCenter, System Storage, System Storage DS, TotalStorage For a complete list of IBM Trademarks, see www.ibm.com/legal/copytrade.shtml Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark nor does it mean that the product is not actively marketed or is not significant within its relevant market. Those trademarks followed by are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States.