
© 2011 IBM Corporation

GPFS
General Parallel File System
http://www-142.ibm.com/software/products/fr/fr/software
Agenda
The problem
History
GPFS: the file system
Who uses GPFS today?
Deployment & associated services
Support, training & documentation
The problem
The problem
Reliably share and easily exchange files
between different applications and different operating systems.
IBM General Parallel File System (GPFS): history and evolution

1995 - Tiger Shark
- Video streaming: real-time streaming, read performance, wide striping

1998 - GPFS (first product release under the GPFS name)
- General file serving
- Standards: Portable Operating System Interface (POSIX) semantics
- Large blocks, directory and small-file performance
- Data management
- HPC; Virtual Tape Server (VTS)

2002 - GPFS 2.1-2.3
- IBM AIX loose clusters; Linux clusters (multiple architectures)
- 32-bit / 64-bit; AIX & Linux inter-operation
- HPC, research, visualization, digital media, seismic exploration, weather, life sciences

2005 - GPFS Multicluster
- GPFS over wide area networks (WAN)
- Large-scale clusters: thousands of nodes

2006-7 - GPFS 3.1-3.2
- Ease of administration
- Multiple networks / RDMA
- Distributed token management, faster failover
- Multiple NSD servers, NFS v4 support
- Small-file performance
- Information lifecycle management (ILM): storage pools, filesets, policy engine

2009 - GPFS 3.3
- Restricted admin functions
- Improved installation, new license model
- Improved snapshot and backup
- Improved ILM policy engine

2010 - GPFS 3.4
- Enhanced Windows cluster support (homogeneous Windows Server clusters)
- Performance and scaling improvements
- Enhanced migration and diagnostics support

GPFS 3.4 introduces improvements in performance, scalability, migration and diagnostics, and enhanced Windows high-performance computing (HPC) server support, including support for homogeneous Windows clusters.
The GPFS file system
(PPC64, x86, x64)
What is GPFS?
GPFS (General Parallel File System) is IBM's parallel, shared file system solution for clusters. Its features make it a reliable, high-performance product for any architecture that needs to share data. The software is available on all IBM servers running AIX, Linux or Windows.
Cluster: 2 to 8000+ servers, a fast and reliable network, administered as a single entity.
Shared file system: all servers access data and metadata through a disk-like interface.
Parallel: data and metadata flow in parallel from all servers to all disks.
[Diagram: GPFS servers connected through an interconnect switch (LAN or SAN) to shared disks]
(A minimal cluster-creation sketch follows below.)
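As an illustration of "administered as a single entity", here is a minimal command sketch for creating and starting a small cluster. It is only a sketch: the node names, node file and cluster name are hypothetical, and the exact options vary by GPFS release.

  # Hypothetical nodes.list: two quorum/manager nodes and one client node
  #   node1:quorum-manager
  #   node2:quorum-manager
  #   node3:client
  mmcrcluster -N nodes.list -p node1 -s node2 \
              -r /usr/bin/ssh -R /usr/bin/scp -C demo_cluster
  mmstartup -a        # start the GPFS daemon on all nodes
  mmgetstate -a       # check that every node reports "active"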
GPFS: features and areas of use
- A single, shared view of the file system.
- A standard POSIX interface.
- High performance: large storage capacity, very high throughput.
- High availability: fault tolerance, dynamic configuration management (adding, removing or replacing disks and servers).
Areas of use:
- Scientific and commercial high-performance clusters
- Server consolidation (file, mail, internet servers...)
- Multimedia
- Databases
What GPFS is not
- Not a client-server file system like NFS or CIFS: no single-server bottleneck, no extra protocol layer in the data transfer path.
- No single metadata server: no bottleneck, no single point of failure.
[Diagram: contrasting architectures in which clients reach data through a single server, or metadata through a single metadata server, over the network]
Three example configurations
- Cluster with dedicated I/O nodes
- Symmetric cluster
- Storage Area Network: all nodes are attached to the disks (Fibre Channel, iSCSI)
Software shared disk: NSD (GPFS internal)
(A minimal NSD definition sketch follows below.)
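To make the "software shared disk (NSD)" model concrete, here is a minimal sketch of defining Network Shared Disks. The device, server and NSD names and the stanza syntax are examples only; older GPFS releases use colon-separated disk descriptors instead of stanza files.

  # Hypothetical nsd.stanza: one physical disk becomes an NSD served by two I/O nodes
  %nsd:
    device=/dev/sdb
    nsd=nsd001
    servers=io1,io2          # primary and backup NSD servers
    usage=dataAndMetadata
    failureGroup=1

  mmcrnsd -F nsd.stanza      # register the disk as an NSD
  mmlsnsd                    # list NSDs and their server lists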
GPFS strengths
- Large capacity: aggregates many disks or RAID arrays into a single file system.
- High throughput for a single file: striping across a large number of disks, large block sizes (16 KB to 4 MB), several servers writing in parallel.
- High availability. Servers: journaled recovery, consistent failover to a standby server. Data: RAID or replication.
- Dynamic administration (adding, removing or replacing disks or servers without unmounting the file system).
- Single image, standard POSIX interface: distributed locks behind standard read/write calls.
(A file system creation sketch follows below.)
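A minimal sketch of how these strengths map onto file system creation options. The file system name, mount point and values are examples, and option letters may differ slightly between releases.

  # Hypothetical example: a striped file system with a 1 MiB block size
  # and two replicas of both data and metadata
  mmcrfs gpfs1 -F nsd.stanza -B 1M -m 2 -M 2 -r 2 -R 2 -T /gpfs1
  mmmount gpfs1 -a           # mount it on all nodes of the cluster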
GPFS: availability and performance
Availability:
- A daemon monitors the servers and detects failures.
- A separate journal on each server allows faster repair after a problem.
- Network redundancy through EtherChannel on AIX and bonding on Linux.
- Each disk is accessible from at least two servers.
Performance:
- Simultaneous access from several servers to a single file (MPI-IO...).
- Load spread across servers and disks.
- Handles very large volumes of data.
GPFS: data integrity and flexibility
Data integrity:
- Token-based access management.
- Multiple paths to reach the same data.
- Optional replication, for data and metadata, for some or all files; the copies are placed in distinct failure groups.
- Quorum: no risk of inconsistent concurrent access if the cluster splits into two or more parts.
Flexibility:
- Dynamic addition of disks and servers (see the sketch below).
- Dynamic change of the number of inodes.
- Administered from any server, with automatic propagation to all servers.
- One (or several!) logical copies, or snapshots, can be created for a GPFS file system.
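A short sketch of the kind of dynamic administration described above: a disk is added and data rebalanced while the file system stays mounted. The disk, stanza file and file system names are examples.

  # Hypothetical example: grow a mounted file system and rebalance it online
  mmadddisk gpfs1 -F newdisk.stanza    # add a new NSD to the file system
  mmrestripefs gpfs1 -b                # rebalance existing data across all disks
  mmlsdisk gpfs1                       # check disk status and availability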
Snapshots (or FlashCopy)
Snapshot: a read-only logical copy of the file system at a given point in time.
What for:
- Backup: a consistent backup of the state at time t.
- Error recovery: online access to files as they were at the time of the snapshot.
Copy-on-write: a file only consumes space in the snapshot once it is modified or deleted.
Access through .snapshots subdirectories at the root of the file system:
- /gpfs/.snapshots/snap1: the state of eight days ago
- /gpfs/.snapshots/snap2: yesterday's state
Up to 256 snapshots of a file system can be kept. (Command sketch below.)
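A minimal sketch of the snapshot lifecycle using the standard GPFS commands; the file system and snapshot names are examples.

  # Hypothetical example: create, list and remove a file system snapshot
  mmcrsnapshot gpfs1 snap1    # read-only point-in-time copy, visible under /gpfs1/.snapshots/snap1
  mmlssnapshot gpfs1          # list existing snapshots and their creation times
  mmdelsnapshot gpfs1 snap1   # delete the snapshot and free its copy-on-write blocks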
GPFS: ILM (Information Lifecycle Management)
- Storage pool: a group of LUNs with the same performance characteristics.
- Fileset: a sub-tree of a file system.
- Policies: placement and migration rules for files across the storage pools.
Simple management of data placement (by file age, file name...):
- Policies can be executed in parallel across all the nodes.
- Data migration is automatic and transparent to the client.
- Data placement is managed according to disk type.
SQL-like language:
RULE 'migration to system pool' MIGRATE FROM POOL 'sp1' TO POOL 'system' WHERE NAME LIKE '%file%'
(An expanded policy sketch follows below.)
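To expand the rule above into something runnable, here is a small sketch of a policy file and how it might be installed and applied. The pool names and the 30-day threshold are examples, not taken from the slide.

  # Hypothetical policy.rules
  RULE 'migrate cold files' MIGRATE FROM POOL 'system' TO POOL 'sp1'
       WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30
  RULE 'default placement' SET POOL 'system'

  mmchpolicy gpfs1 policy.rules        # install the placement (and migration) rules
  mmapplypolicy gpfs1 -P policy.rules  # evaluate the migration rules, in parallel across nodes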
High availability
[Diagram: server 1 and server 2 connected by a LAN and a SAN, sharing a tiebreakerDisk]
GPFS continues on server 1, whose name is recorded on the tiebreakerDisk as "lease manager".
High availability
[Diagram: server 1 and server 2 connected by a LAN and a SAN, sharing a tiebreakerDisk]
GPFS continues on the server that was elected "lease manager" on the tiebreakerDisk; GPFS is unmounted on server 1.
High availability: file system descriptor quorum
[Diagram: two servers on a LAN and a SAN; data is replicated between failure group 1 and failure group 2, and failure group 3 holds a descOnly disk]
GPFS continues on both servers, without replication. Failure group 3 guarantees that more than half of the file system descriptors remain available; the tiebreakerDisk can be placed on it. (A stanza sketch for this layout follows below.)
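A sketch of how the three failure groups in this scenario might be described to GPFS. Disk, server and NSD names are examples, and the stanza syntax depends on the release (older versions use disk descriptor lines).

  # Hypothetical stanzas: replicated data in failure groups 1 and 2,
  # plus a small descriptor-only disk in failure group 3 (no data, no metadata)
  %nsd: device=/dev/sdb nsd=fg1disk servers=server1 usage=dataAndMetadata failureGroup=1
  %nsd: device=/dev/sdb nsd=fg2disk servers=server2 usage=dataAndMetadata failureGroup=2
  %nsd: device=/dev/sdc nsd=fg3desc servers=server3 usage=descOnly        failureGroup=3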
High availability
[Diagram: servers 1, 2 and 3 on a LAN and a SAN, with failure groups 1, 2 and 3 (descOnly)]
GPFS continues on server 1.
High availability
[Diagram: servers 1, 2 and 3, with failure groups 1, 2 and 3 (descOnly)]
GPFS continues on both servers, but replication is no longer possible.
High availability
[Diagram: servers 1, 2 and 3, with failure groups 1, 2 and 3 (descOnly)]
GPFS continues on server 2, but replication is no longer possible.
High availability
[Diagram: servers 1, 2 and 3, with failure groups 1, 2 and 3 (descOnly)]
GPFS continues to run on server 1, which can still see server 3. GPFS is unmounted on the isolated server.
High availability
[Diagram: servers 1, 2 and 3, with failure groups 1, 2 and 3 (descOnly)]
Server 1 accesses failure group 2 over the network.
Quorum rules
3 servers with the quorum attribute:
- GPFS keeps running on the servers that can see at least 2 servers holding the quorum attribute.
2 to 8 servers with the quorum attribute and 1 or 3 tiebreakerDisks:
- GPFS keeps running on the servers that can see the quorum server elected "lease manager", whose name is recorded on the tiebreakerDisks.
- For extra safety, 3 tiebreakerDisks can be used; GPFS then keeps running on the servers that can see a majority of the tiebreakerDisks as well as the quorum server elected "lease manager".
- The tiebreakerDisks must be attached via SCSI or SAN to the servers holding the quorum attribute.
(A configuration sketch follows below.)
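A sketch of how these quorum rules translate into configuration commands. Node and NSD names are examples; on older releases GPFS must be stopped before the tiebreaker disks are changed.

  # Hypothetical example: designate quorum nodes and three tiebreaker disks
  mmchnode --quorum -N server1,server2           # give the quorum attribute to two nodes
  mmshutdown -a                                  # older releases require GPFS to be down for the next step
  mmchconfig tiebreakerDisks="nsd001;nsd002;nsd003"
  mmstartup -a
  mmgetstate -a -L                               # -L also reports quorum information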
Let's sum up: what is GPFS?
Yesterday, GPFS was a high-performance parallel file system.
Today, GPFS is a high-performance cluster file system (which still includes parallelism).
Tomorrow, GPFS will be a high-performance enterprise file system.
Who uses GPFS today?
Who uses GPFS today?
- Aerospace and automotive
- Banking and finance
- Bioinformatics and life sciences
- Defense
- Digital media
- EDA
- National laboratories
- Oil and gas
- Universities
- Climate modeling
- IBM
GPFS is a mature product with a well-established market presence. It has been available since 1998, after a research and development phase that began in 1991.
Government laboratories
Army Corps of Engineers
- Application: hydrological modeling
- 2 256-node SPs
- Using GPFS to store model data and output visualization data
NOAA - NCEP, NCAR
- Application: large-scale, highly precise weather modeling
- +00-node SPs
- Using GPFS to store captured observation data used as input to weather models, and to store output for subsequent visualization
Geophysics
- Capture, storage and analysis of seismic survey data
- Data acquired on board
- Subsequent analysis (highly parallel, multi-terabyte file systems)
- Archiving / retrieval from tape
Several geophysics companies use GPFS for their seismic processing.
Design engineering
Aircraft manufacturer:
- GPFS is used to store the large volumes of data involved in structural modeling and analysis.
- GPFS lets all nodes share the models and the data.
TV5 Monde
- Integration of a large-capacity storage server delivering files in real time to all operational units.
- A central server holding all the files TV5 wants to use, without constraints and in complete security.
- A network architecture able to absorb the required throughput (180 data streams at 30 Mbit/s) and to integrate the various hardware components as well as all the client workstations.
- A storage volume of several hundred terabytes (150 TB to 300 TB), i.e. 20,000 hours of recordings, with instant availability of the data to every client workstation.
- The system is interconnected with the client workstations (Unix, Windows or Apple) through about a hundred streams with direct read or write access.
- A very high level of reliability is required for the server; the proposed setup is sized for 24x7 operation, with an availability target of around 5 minutes of downtime per year.
IBM solution
[Architecture diagram: a NIM server (p520), 6 GPFS nodes (p520), 2 HMCs and 3 xSeries administration servers; 2 Cisco MDS 9500 SAN switches and 2 Cisco 6500 LAN switches; DS4500 controllers with 78 EXP100 expansion racks and two DS4800 arrays with 28 EXP710 racks each; separate Ethernet, GPFS, administration, hardware and SAN networks]
ASC (Advanced Simulation and Computing)
https://asc.llnl.gov/computing_resources/purple/
Lawrence Livermore National Laboratory, California
2005, 100 teraFLOPS, 12,000 POWER5 processors
http://www.top500.org/system/8128
http://www.top500.org/2007_overview_recent_supercomputers/ibm_eserver_p575
BlueGene (2004 for BlueGene/L, 2007 for BlueGene/P)
Lawrence Livermore National Laboratory, California
2007, 596 teraFLOPS (peak), 130712 PowerPC processors
http://www-304.ibm.com/systems/support/bluegene/index.html
http://www.top500.org/system/8968
http://www.top500.org/2007_overview_recent_supercomputers/ibm_bluegene_l_p
IDRIS - BlueGene/P
Paris region
2008, 139 teraFLOPS (peak), 40,960 PowerPC processors (10 racks), 400 TB
http://www.idris.fr/IBM2008/index.html
SONAS - Scale Out Network Attached Storage
http://www-03.ibm.com/systems/storage/network/sonas/
An IBM product that includes:
- A Linux (Red Hat) GPFS cluster with an internal InfiniBand network, seen as a NAS server by Windows and Unix clients
- A single interface for configuration and control
- A remote monitoring and recovery service
- Load balancing
- Storage on media matched to performance/cost requirements (tiering)
- Used by IBM for its cloud storage offering
SONAS
Deployment and associated services
Deployment & associated services
IBM supports you while you discover the product:
- Courses in the IBM training catalog
- Loan of the software with a fixed-price assistance package for deployment
A GPFS expert:
- discusses your project with you
- proposes an architecture
- installs a loaner version of the software
- configures it, with a half-day knowledge transfer
- answers your questions
- hands over a document describing the installation
After that, most customers are completely autonomous!
Support, training & documentation
GPFS support
GPFS is an IBM software product sold with Passport Advantage:
- Fixes freely available on the internet
- A public discussion forum: http://www.ibm.com/developerworks/forums/forum.jspa?forumID=479
Passport Advantage provides:
- Opening of incidents 24 hours a day, 7 days a week
- Support and answers to questions
- Access to new versions
GPFS training
Documentation & links
03/2008 A Guide to the IBM Clustered Network File System (REDP-4400-00)
02/2008 Deploying Oracle RAC 10g on AIX Version 5 with GPFS (SG24-7541-00)
..
10/2005 Configuration and Tuning GPFS for Digital Media Environments (TIPS0540)
..
05/1998 GPFS: A Parallel File System (SG24-5165-00)
Doc : http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.doc/gpfsbooks.html
FAQ : http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfs_faqs.html
http://www-03.ibm.com/systems/clusters/software/gpfs/index.html
http://www.almaden.ibm.com/StorageSystems/projects/gpfs/
http://en.wikipedia.org/wiki/GPFS
http://www.redbooks.ibm.com
WebSphere MQ & GPFS
http://www-01.ibm.com/support/docview.wss?uid=swg21392025
Testing of WebSphere MQ with IBM's General Parallel File System (GPFS) has
been more extensive and no problems have been reported to date.
http://www-01.ibm.com/support/docview.wss?uid=swg21433474
WebSphere MQ V7.0.1 introduces multi-instance queue managers. For this
you will need a shared file system on networked storage, such as a NAS, or a
cluster file system, such as IBM's General Parallel File System (GPFS).
Shared file systems must provide data write integrity, guaranteed exclusive
access to files and release locks on failure to work reliably with WebSphere
MQ.
GPFS v3.4
Thank you!
[Slide: "Thank you" in many languages - English (Thank you), French (Merci), German (Danke), Italian (Grazie), Spanish (Gracias), Brazilian Portuguese (Obrigado), Gaelic (go raibh maith agat), plus Russian, Japanese, Hebrew, Arabic, Simplified and Traditional Chinese, Korean, Thai, Hindi and Tamil]
Special notices (cont.)
The following terms are registered trademarks of International Business Machines Corporation in the United States and/or other countries: AIX, AIX/L, AIX/L (logo), AIX 6 (logo),
alphaWorks, AS/400, BladeCenter, Blue Gene, Blue Lightning, C Set++, CICS, CICS/6000, ClusterProven, CT/2, DataHub, DataJoiner, DB2, DEEP BLUE, developerWorks,
DirectTalk, Domino, DYNIX, DYNIX/ptx, e business (logo), e(logo)business, e(logo)server, Enterprise Storage Server, ESCON, FlashCopy, GDDM, i5/OS, i5/OS (logo), IBM,
IBM (logo), ibm.com, IBM Business Partner (logo), Informix, IntelliStation, IQ-Link, LANStreamer, LoadLeveler, Lotus, Lotus Notes, Lotusphere, Magstar, MediaStreamer, Micro
Channel, MQSeries, Net.Data, Netfinity, NetView, Network Station, Notes, NUMA-Q, OpenPower, Operating System/2, Operating System/400, OS/2, OS/390, OS/400, Parallel
Sysplex, PartnerLink, PartnerWorld, Passport Advantage, POWERparallel, Power PC 603, Power PC 604, PowerPC, PowerPC (logo), Predictive Failure Analysis, pSeries,
PTX, ptx/ADMIN, Quick Place, Rational, RETAIN, RISC System/6000, RS/6000, RT Personal Computer, S/390, Sametime, Scalable POWERparallel Systems, SecureWay,
Sequent, ServerProven, SpaceBall, System/390, The Engines of e-business, THINK, Tivoli, Tivoli (logo), Tivoli Management Environment, Tivoli Ready (logo), TME,
TotalStorage, TURBOWAYS, VisualAge, WebSphere, xSeries, z/OS, zSeries.
The following terms are trademarks of International Business Machines Corporation in the United States and/or other countries: Advanced Micro-Partitioning, AIX 5L, AIX PVMe,
AS/400e, Calibrated Vectored Cooling, Chiphopper, Chipkill, Cloudscape, DataPower, DB2 OLAP Server, DB2 Universal Database, DFDSM, DFSORT, DS4000, DS6000,
DS8000, e-business (logo), e-business on demand, EnergyScale, Enterprise Workload Manager, eServer, Express Middleware, Express Portfolio, Express Servers, Express
Servers and Storage, General Purpose File System, GigaProcessor, GPFS, HACMP, HACMP/6000, IBM Systems Director Active Energy Manager, IBM TotalStorage Proven,
IBMLink, IMS, Intelligent Miner, iSeries, Micro-Partitioning, NUMACenter, On Demand Business logo, POWER, PowerExecutive, PowerVM, PowerVM (logo), Power
Architecture, Power Everywhere, Power Family, POWER Hypervisor, Power PC, Power Systems, Power Systems (logo), Power Systems Software, Power Systems Software
(logo), PowerPC Architecture, PowerPC 603, PowerPC 603e, PowerPC 604, PowerPC 750, POWER2, POWER2 Architecture, POWER3, POWER4, POWER4+, POWER5,
POWER5+, POWER6, POWER6+, pure XML, Quickr, Redbooks, Sequent (logo), SequentLINK, Server Advantage, ServeRAID, Service Director, SmoothStart, SP, System i,
System i5, System p, System p5, System Storage, System z, System z9, S/390 Parallel Enterprise Server, Tivoli Enterprise, TME 10, TotalStorage Proven, Ultramedia,
VideoCharger, Virtualization Engine, Visualization Data Explorer, Workload Partitions Manager, X-Architecture, z/Architecture, z/9.
A full list of U.S. trademarks owned by IBM may be found at: http://www.ibm.com/legal/copytrade.shtml.
The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org.
UNIX is a registered trademark of The Open Group in the United States, other countries or both.
Linux is a trademark of Linus Torvalds in the United States, other countries or both.
Microsoft, Windows, Windows NT and the Windows logo are registered trademarks of Microsoft Corporation in the United States, other countries or both.
Intel, Itanium, Pentium are registered trademarks and Xeon is a trademark of Intel Corporation or its subsidiaries in the United States, other countries or both.
AMD Opteron is a trademark of Advanced Micro Devices, Inc.
Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries or both.
TPC-C and TPC-H are trademarks of the Transaction Performance Processing Council (TPPC).
SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPEC OMP, SPECviewperf, SPECapc, SPEChpc, SPECjvm, SPECmail, SPECimap and SPECsfs are trademarks of
the Standard Performance Evaluation Corp (SPEC).
NetBench is a registered trademark of Ziff Davis Media in the United States, other countries or both.
AltiVec is a trademark of Freescale Semiconductor, Inc.
Cell Broadband Engine is a trademark of Sony Computer Entertainment Inc.
InfiniBand, InfiniBand Trade Association and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade Association.
Other company, product and service names may be trademarks or service marks of others.
Revised January 15, 2008
Notes on benchmarks and values
The IBM benchmarks results shown herein were derived using particular, well configured, development-level and generally-available computer systems. Buyers should
consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For
additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark
consortium or benchmark vendor.
IBM benchmark results can be found in the IBM Power Systems Performance Report at http://www.ibm.com/systems/p/hardware/system_perf.html.
All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, AIX
Version 4.3, AIX 5L or AIX 6 were used. All other systems used previous versions of AIX. The SPEC CPU2006, SPEC2000, LINPACK, and Technical Computing
benchmarks were compiled using IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of
these compilers were used: XL C Enterprise Edition V7.0 for AIX, XL C/C++ Enterprise Edition V7.0 for AIX, XL FORTRAN Enterprise Edition V9.1 for AIX, XL C/C++
Advanced Edition V7.0 for Linux, and XL FORTRAN Advanced Edition V9.1 for Linux. The SPEC CPU95 (retired in 2000) tests used preprocessors, KAP 3.2 for FORTRAN
and KAP/C 1.4.2 from Kuck & Associates and VAST-2 v4.01X8 from Pacific-Sierra Research. The preprocessors were purchased separately from these vendors. Other
software packages like IBM ESSL for AIX, MASS for AIX and Kazushige Goto's BLAS Library for Linux were also used in some benchmarks.
For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor.
TPC http://www.tpc.org
SPEC http://www.spec.org
LINPACK http://www.netlib.org/benchmark/performance.pdf
Pro/E http://www.proe.com
GPC http://www.spec.org/gpc
NotesBench http://www.notesbench.org
VolanoMark http://www.volano.com
STREAM http://www.cs.virginia.edu/stream/
SAP http://www.sap.com/benchmark/
Oracle Applications http://www.oracle.com/apps_benchmark/
PeopleSoft - To get information on PeopleSoft benchmarks, contact PeopleSoft directly
Siebel http://www.siebel.com/crm/performance_benchmark/index.shtm
Baan http://www.ssaglobal.com
Microsoft Exchange http://www.microsoft.com/exchange/evaluation/performance/default.asp
Veritest http://www.veritest.com/clients/reports
Fluent http://www.fluent.com/software/fluent/index.htm
TOP500 Supercomputers http://www.top500.org/
Ideas International http://www.ideasinternational.com/benchmark/bench.html
Storage Performance Council http://www.storageperformance.org/results
Revised January 15, 2008
Revised January 15, 2008
Notes on HPC benchmarks and values
The IBM benchmarks results shown herein were derived using particular, well configured, development-level and generally-available computer systems. Buyers should
consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For
additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark
consortium or benchmark vendor.
IBM benchmark results can be found in the IBM Power Systems Performance Report at http://www.ibm.com/systems/p/hardware/system_perf.html.
All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, AIX
Version 4.3 or AIX 5L were used. All other systems used previous versions of AIX. The SPEC CPU2000, LINPACK, and Technical Computing benchmarks were compiled
using IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of these compilers were used: XL
C Enterprise Edition V7.0 for AIX, XL C/C++ Enterprise Edition V7.0 for AIX, XL FORTRAN Enterprise Edition V9.1 for AIX, XL C/C++ Advanced Edition V7.0 for Linux, and
XL FORTRAN Advanced Edition V9.1 for Linux. The SPEC CPU95 (retired in 2000) tests used preprocessors, KAP 3.2 for FORTRAN and KAP/C 1.4.2 from Kuck &
Associates and VAST-2 v4.01X8 from Pacific-Sierra Research. The preprocessors were purchased separately from these vendors. Other software packages like IBM ESSL
for AIX, MASS for AIX and Kazushige Goto's BLAS Library for Linux were also used in some benchmarks.
For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor.
SPEC http://www.spec.org
LINPACK http://www.netlib.org/benchmark/performance.pdf
Pro/E http://www.proe.com
GPC http://www.spec.org/gpc
STREAM http://www.cs.virginia.edu/stream/
Veritest http://www.veritest.com/clients/reports
Fluent http://www.fluent.com/software/fluent/index.htm
TOP500 Supercomputers http://www.top500.org/
AMBER http://amber.scripps.edu/
FLUENT http://www.fluent.com/software/fluent/fl5bench/index.htm
GAMESS http://www.msg.chem.iastate.edu/gamess
GAUSSIAN http://www.gaussian.com
ABAQUS http://www.abaqus.com/support/sup_tech_notes64.html
select Abaqus v6.4 Performance Data
ANSYS http://www.ansys.com/services/hardware_support/index.htm
select Hardware Support Database, then benchmarks.
ECLIPSE http://www.sis.slb.com/content/software/simulation/index.asp?seg=geoquest&
MM5 http://www.mmm.ucar.edu/mm5/
MSC.NASTRAN http://www.mscsoftware.com/support/prod%5Fsupport/nastran/performance/v04_sngl.cfm
STAR-CD www.cd-adapco.com/products/STAR-CD/performance/320/index/html
NAMD http://www.ks.uiuc.edu/Research/namd
HMMER http://hmmer.janelia.org/
http://powerdev.osuosl.org/project/hmmerAltivecGen2mod
Revised October 9, 2007
Notes on performance estimates
rPerf
rPerf (Relative Performance) is an estimate of commercial processing performance relative to other IBM UNIX
systems. It is derived from an IBM analytical model which uses characteristics from IBM internal workloads, TPC
and SPEC benchmarks. The rPerf model is not intended to represent any specific public benchmark results and
should not be reasonably used in that way. The model simulates some of the system operations such as CPU,
cache and memory. However, the model does not simulate disk or network I/O operations.
rPerf estimates are calculated based on systems with the latest levels of AIX and other pertinent software at the
time of system announcement. Actual performance will vary based on application and configuration specifics. The
IBM eServer pSeries 640 is the baseline reference system and has a value of 1.0. Although rPerf may be used to
approximate relative IBM UNIX commercial processing performance, actual system performance may vary and is
dependent upon many factors including system hardware configuration and software design and configuration.
Note that the rPerf methodology used for the POWER6 systems is identical to that used for the POWER5 systems.
Variations in incremental system performance may be observed in commercial workloads due to changes in the
underlying system architecture.
All performance estimates are provided "AS IS" and no warranties or guarantees are expressed or implied by IBM.
Buyers should consult other sources of information, including system benchmarks, and application sizing guides to
evaluate the performance of a system they are considering buying. For additional information about rPerf, contact
your local IBM office or IBM authorized reseller.
