HP BladeSystem Administration
he646s a.01
Student guide — Part 1

Use of this material to deliver training without prior written permission from HP is prohibited.

© Copyright 2007 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. This is an HP copyrighted work that may not be reproduced without the written permission of HP. You may not use these materials to deliver training to any person outside of your organization without the written permission of HP.

Export Compliance Agreement

Export Requirements. You may not export or re-export products subject to this agreement in violation of any applicable laws or regulations. Without limiting the generality of the foregoing, products subject to this agreement may not be exported, re-exported, or otherwise transferred to or within (or to a national or resident of) countries under U.S. economic embargo and/or sanction, including the following countries: Cuba, Iran, North Korea, Sudan, and Syria. This list is subject to change. In addition, products subject to this agreement may not be exported, re-exported, or otherwise transferred to persons or entities listed on the U.S. Department of Commerce Denied Persons List; U.S. Department of Commerce Entity List (15 CFR 744, Supplement); U.S. Treasury Department Designated/Blocked Nationals exclusion list; or U.S. State Department Debarred Parties List; or to parties directly or indirectly involved in the development or production of nuclear, chemical, or biological weapons, missiles, rocket systems, or unmanned air vehicles as specified in the U.S. Export Administration Regulations (15 CFR 744); or to parties directly or indirectly involved in the financing, commission, or support of terrorist activities.

By accepting this agreement you confirm that you are not located in (or a national or resident of) any country under U.S. embargo or sanction; not identified on any U.S. Department of Commerce Denied Persons List, Entity List, U.S. State Department Debarred Parties List, or Treasury Department Designated Nationals exclusion list; not directly or indirectly involved in the development or production of nuclear, chemical, or biological weapons, missiles, rocket systems, or unmanned air vehicles as specified in the U.S. Export Administration Regulations (15 CFR 744); and not directly or indirectly involved in the financing, commission, or support of terrorist activities.

Printed in USA
HP BladeSystem Administration Student guide — Part 1
September 2007

HP BladeSystem Administration
HP BladeSystem Portfolio Introduction
Module 1
he646s a.01
Objectives

After completing this module, you should be able to:
* Communicate the HP BladeSystem c-Class message
* Discuss the Consolidated Client Infrastructure (CCI) and p-Class products in the BladeSystem portfolio
* Describe the features of the BladeSystem c-Class server blades and storage blades
* Identify the tools that you can use to manage the BladeSystem c-Class
* Name the server blade solutions and scenarios pertinent to the BladeSystem c-Class

HP BladeSystem c-Class messaging

[Slide graphic: Consolidate (servers, storage, network, power and cooling, facilities), Virtualize (LAN and SAN connectivity), Automate (policy and task management). Greater resource efficiency and flexibility; reduced time and cost to buy, build, and maintain; free IT resources for revenue-bearing projects.]

HP BladeSystem solutions provide complete infrastructures that include servers, storage, networking, and power to facilitate data center integration and transformation. They enable data center customers to respond more quickly and effectively to changing business conditions, lighten the load on the IT staff, and cut total ownership costs.

The HP BladeSystem agenda has kept pace with the changing needs of data center customers. These business requirements include:
* Lower purchase and operations costs when adding or replacing compute/storage capacity
* Lower application deployment and infrastructure operations costs by reducing the number of IT architecture variants
* Reduced connectivity complexity and costs
* Easier, faster, and cheaper changes to server and storage setups without disrupting LAN and storage area network (SAN) domains
* Faster changing or adding of applications with a more flexible infrastructure
* Support for grid computing and Service Oriented Architecture
* Support for third-party component integration with well-defined interfaces, such as Ethernet NICs/switches, Fibre Channel host bus adapters (HBAs)/switches, and InfiniBand host channel adapters (HCAs)/switches

The BladeSystem c-Class has met those challenges by enabling IT to:
* Consolidate — One modular infrastructure integrates servers, storage, networking, and management software that can be managed as one with a common, consistent user experience.
* Virtualize — Pervasive virtualization enables you to run any workload, meet high availability requirements, and support scale-out and scale-up. It enables you to create logical, abstracted connections to the LAN/SAN.
* Automate — Free IT resources for more important tasks, enabling you to simplify routine tasks and processes to save time and maintain control.
The HP BladeSystem approach to simplified infrastructure

[Slide graphic: Virtual Connect architecture builds IT change-ready; presenting I/O virtually provides the most change flexibility.]

The simplified approach of the BladeSystem c-Class makes IT change-ready, time-smart, and cost-savvy. Its innovative design meets the changing needs of data center customers by speeding up setup: it can be accomplished in only 15 minutes per chassis with only a setup poster, and the first blade operating system can be installed in just one hour. Other innovations inherent in the c-Class design include:
* Virtual Connect architecture — Network and storage network connections can now be shared virtually rather than through hardware, allowing rapid changes to be made easily. Virtual Connect architecture eliminates manual coordination across domains.
* Insight Control management — Unlike solutions focused only on server blades, the BladeSystem c-Class features built-in and bundled management capabilities. As a result, BladeSystem c-Class solutions provide a ready infrastructure that can be deployed on its own, in groups, or as a data center complement.
* Thermal Logic technology — Thermal Logic technology provides the most energy efficiency at the rack, row, and data center levels.

Transition with coexistence

[Slide graphic: Transition period with some unified infrastructure management and some connection to the data center (power, network/SAN, rack); same standard ProLiant technologies; next generation of virtualization, storage blades, and new fabrics.]

The current BladeSystem p-Class architecture was introduced in 2002 and will have a five- to six-year lifespan, shipping through 2007. Updates are still being made to p-Class and the Consolidated Client Infrastructure (CCI), which is based on the e-Class (bc1500) blades. The HP BladeSystem p-Class architecture and the c-Class blade infrastructure will have a 12-18 month overlap.

The c-Class uses the same management tools as p-Class systems, including the Rapid Deployment Pack (RDP), HP Systems Insight Manager (HP SIM), and Integrated Lights-Out (iLO). The interconnects are compatible at a management interface level, and the enclosures fit in the same rack as the p-Class systems. The entire c-Class ecosystem has been designed with common specifications for blades, mezzanine cards, and signaling to ensure compatibility.

The HP mission in blades is to combine the best features from across the HP product portfolio into a bladed environment:
* Business-critical servers — HP-UX and Itanium support, potential for symmetric multiprocessing (SMP) design
* Personal systems — Workstation and PC blade solutions
* Software — Unified infrastructure management simplifies and automates setup, and monitors and virtualizes management
* StorageWorks — Multiple storage and information lifecycle management (ILM) solutions
* Services — Factory Express, Instant Capacity (iCAP), financial services, data center planning, power/cooling

The new blade environment is compatible with the following industry standards for interoperability, management, and control (a brief SNMP illustration follows this list):
* Simple Network Management Protocol (SNMP)
* Systems Management Architecture for Server Hardware (SMASH)
* Storage Management Initiative (SMI)-S
* Extensible Markup Language (XML)
* Web-Based Enterprise Management (WBEM)
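As a small illustration of the SNMP support noted in the list above, the following sketch reads a managed device's system description (sysDescr) over SNMP v2c. It uses the third-party pysnmp library rather than any HP tool, and the address and community string are hypothetical placeholders.

```python
# Minimal SNMP v2c query against a managed device -- a sketch, not HP code.
# Assumes the third-party pysnmp library; host and community are placeholders.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, getCmd,
)

def get_sys_descr(host, community="public"):
    """Fetch sysDescr.0 (SNMPv2-MIB) from the device at `host`."""
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),   # mpModel=1 selects SNMP v2c
        UdpTransportTarget((host, 161)),       # standard SNMP port
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    ))
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status.prettyPrint()))
    return str(var_binds[0][1])

if __name__ == "__main__":
    print(get_sys_descr("192.0.2.10"))  # hypothetical management address
```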
HP Consolidated Client Infrastructure

* Is a complete solution that:
  - Lowers desktop costs
  - Secures user data
  - Standardizes desktop solutions across the enterprise
  - Regains valuable desktop real estate
* Simplifies management and deployment of user desktops
* Delivers significant benefits:
  - Increased ease of virus isolation and eradication
  - Smoother and lower network traffic flows
  - Easier consolidation of other enterprise IT components
  - Connectivity from any location at any time
  - Shorter startup cycle
  - Data center availability

The Consolidated Client Infrastructure (CCI) solution from HP centralizes desktop compute and storage resources into easily managed, highly secure data centers using blade PCs, while providing the convenience and familiarity of a traditional desktop environment. CCI allows users on thin clients to connect to and work from application servers, such as the ProLiant bc1500 blade PC.

The CCI architecture incorporates the following components:
* Access tier — Thin client devices are connected to a remote device using the Microsoft Remote Desktop Protocol.
* Compute tier — Load-balancing hardware and software solutions are used in racks of blade PCs inside a data center to ensure that users reach an available machine. This layer is transparent to the user.
* Resource tier — A storage pool, network printers, application servers, and other networked resources, also inside the data center.

Data in the CCI is written remotely to a SAN or network application server. Users on the client tier save their work to the central location and retrieve it through a local network, tunnel, or browser. Management tasks such as updates and redeploying desktops are performed on the blades remotely by the system administrator. Users establish a one-to-one connection with dynamically allocated blade PCs and centralized storage. Individual desktop images and data are stored and controlled in an IT infrastructure in the data center.

CCI delivers the following benefits:
* Simplified virus isolation and eradication
* Smoother and lower network traffic between data centers and users
* Easier consolidation of other enterprise IT components, including file servers, application servers, database servers, and storage systems
* Connectivity from any location at any time
* Shorter startup cycle
* Data-center levels of availability

HP bc1500 PC blade

* 3U form factor
* AMD Athlon 64 1500+ processor
* 512MB DDR SDRAM PC2700 (333MHz) expandable to 2GB; two SODIMM slots
* Two Broadcom 5705F 10/100 integrated NICs
* 40GB Ultra ATA/100 5400 rpm hard drive
* Three-year limited hardware warranty

The HP bc1500 PC blade replaced the HP bc1000 PC blade in November 2005. It is sold in a single-pack or a 10-pack, and installs into the HP BL e-Class Blade PC Enclosure. The bc1500 PC blade runs Windows XP Professional with Service Pack (SP) 2.
Features include:
* Form factor — 3U form factor; 20 of these blades fit into the HP BL e-Class Blade PC Enclosure
* Processor — One AMD Athlon 64 1500+ processor (1GHz, 512KB L2 cache, 400MHz FSB)
* Memory — 512MB DDR SDRAM PC2700 (333MHz) (1 x 512MB small outline DIMM [SODIMM]) standard; 2GB (two SODIMM slots total) maximum
* Network controllers — Two 5705F 10/100 integrated network controllers; NIC A is compliant with the Preboot Execution Environment (PXE)
* Hard drives — One 40GB Ultra ATA/100 5400 rpm hard drive
* Warranty — Three-year limited hardware warranty

BladeSystem p-Class

[Slide graphic: p-Class enclosure and power delivery; one power enclosure supports up to 20 server blades.]

The BladeSystem p-Class includes the following components:
* Server blade
* Server blade sleeve — Required for ProLiant BL30p and BL35p server blades only
* Server blade enclosure
  - Signal backplane
  - Power backplane
  - Management module
* Power infrastructure
  - 3U or 1U power enclosure with power supplies
  - Mini bus bar or scalable bus bar
  - 240-1 Power Enclosure Connector kit
* Network interconnect options
  - Interconnect switch
  - Patch panel
* Storage connectivity options
  - Fibre Channel pass-through
  - Brocade 4Gb SAN Switch for HP BL p-Class BladeSystem
  - McDATA 4Gb SAN Switch for HP BL p-Class BladeSystem

Note: Do not use the term server to refer to server blade enclosures or to the enclosure and server blades collectively. The term server, or more correctly, server blade, is used for the individual blades only. The server blade enclosure is the chassis that houses the server blades and other components. The term system refers to all the components collectively.

BladeSystem p-Class server blades

The BladeSystem p-Class portfolio consists of these server blades:
* BL20p G4
  - High-performance 2P Intel-based blade designed with enterprise availability
  - One bay wide, full height
  - Mid-tier applications server blade
  - Ideal for dynamic web/application service provider (ASP) hosting, computational cluster nodes, terminal server farms, and AV/media streaming
* BL25p G2
  - 2P server blade with AMD Opteron technology
  - One bay wide, full height
  - Designed for front-end and mid-tier computing with scalability
  - Ideal for customers who need faster local disk performance, higher memory capacity, and lower cooling requirements per rack
* BL35p
  - 2P server blade with Opteron technology optimized for external storage
  - One bay wide, half height
  - Better density and higher performance per U
  - Ideal for High-Performance Computing (HPC) and data center environments facing rack space and power constraints
* BL45p G2
  - Designed for enterprise mid-tier applications with up to four Opteron processors
  - Two bays wide, full height
  - Optional Fibre Channel support for SAN implementations and clustering
  - Ideal for multiserver applications such as dynamic web hosting, application server, terminal server farm, and media streaming
* HP Integrity BL60p
  - Full-height, single-slot Intel Itanium-based server blade
  - Supports only HP-UX
  - Ideal for customers committed to the blade form factor and looking to add mission-critical operating system support
BladeSystem c-Class server blades

[Slide graphic: traditional rack-mount servers compared with BladeSystem c-Class server blades.]

A full portfolio of half-height and full-height server blades is offered as part of the c-Class family. As in the HP BL p-Class, there are Intel and AMD blades in 2P and 4P versions. The simplified approach of the BladeSystem c-Class is demonstrated by the four server blade offerings, which capitalize on the convenience of traditional rack-mount ProLiant servers and the processing power and modularity of the BL p-Class.

The dual-core Intel Xeon-based c-Class server blades use fully buffered DIMMs (FB-DIMMs), which address the scaling needs of both memory capacity and bandwidth and enable memory to keep pace with processor and I/O improvements in enterprise platforms. FB-DIMM technology combines the high-speed internal architecture of DDR2 memory with a bi-directional serial memory interface that links each FB-DIMM module together in a chain.

NICs that support 2.5Gb Ethernet can be enabled with 10Gb interconnects. HP currently does not plan to introduce a 2.5Gb interconnect.

Full-height server blade

* 12 or 16 DIMM slots
* Intel/AMD x86 or Itanium processors
* Two or four hot-plug 2.5-inch SFF SAS drives
* Three-mezzanine-card capacity
* Four embedded NICs
* Two signal midplane connectors — twice the connectivity of a half-height server blade
* 10/100 Ethernet and I2C connectivity to the Onboard Administrator

The form factor of the c-Class server blades was driven by the need to achieve higher memory density. This design called for the memory DIMMs to be perpendicular (not angled). The full-height server blades feature 12 or 16 DIMM slots with support for 1.5-inch DIMMs.

All c-Class server blades offer a choice of processor:
* Intel/AMD x86 and Itanium processors
* Low-voltage and dual-core models
* Two or four processor sockets

Internal storage is provided by two or four hot-plug 2.5-inch small form factor (SFF) Serial Attached SCSI (SAS) drives. Drive locations are on top of the processors (side by side), so that the processors do not pre-heat the drives.

Additional features of full-height blades include:
* Three four-port mezzanine cards
* Four embedded NICs (two dual-port)
* Two backplane connectors that offer twice the connectivity of a half-height server blade to all eight interconnect bays
* 10/100 Ethernet and Inter-IC (I2C) connectivity to the Onboard Administrator
* 14.42 inches (H) x 19.23 inches (D) x 2.025 inches (W)

In addition, full-height c-Class server blades have an LED scheme similar to the BL20p series blades, with UID (blue), NIC activity, blade health, and a single LED for hard drive activity.

Half-height server blade

* Intel/AMD processors
* Two processor sockets
* Eight DIMM slots with support for 1.5-inch DIMMs
* Two hot-plug 2.5-inch SFF SAS drives
* Two-mezzanine-card capacity
* Full connectivity to eight interconnect bays
* 10/100 Ethernet and I2C connectivity to the Onboard Administrator
* Serial/USB/video connector at the front

The half-height server blade form factor enables support for a storage blade with six SFF hot-plug drives, for a synergy between server blades and storage blades.
Features of half-height server blades include:
* Intel/AMD processors
* Two processor sockets
* Eight DIMM slots
  - Support for 1.5-inch DIMMs
* Two hot-plug 2.5-inch SFF SAS or Serial ATA (SATA) drives
  - SAS — 10K rpm, 36GB and 72GB
  - SATA — 5.4K rpm, 60GB
* Two four-port mezzanine cards
* Two embedded NICs (one dual-port)
* Full connectivity to all eight interconnect bays
* 10/100 Ethernet and I2C connectivity to the Onboard Administrator
* 7.115 inches (H) x 19.23 inches (D) x 2.025 inches (W)

Like the c-Class full-height server blades, half-height server blades have an LED scheme similar to the BL20p series blades, with UID (blue), NIC activity, blade health, and a single LED for hard drive activity.

ProLiant BL460c

A half-height server blade, the ProLiant BL460c can be configured with up to two dual-core Xeon processors. The BL460c supports up to 32GB of memory with the new 8GB FBD PC2-5300 2x4GB Memory Kit.

iLO 2 is the fourth generation of HP Lights-Out management technology for delivering high-performance, out-of-band remote management. With Virtual KVM performance, iLO 2 provides all of the capabilities required for administrative or maintenance tasks from a single remote console.

Two SFF SAS drives provide internal storage. SAS technology satisfies the enterprise data center requirements of scalability, performance, reliability, and manageability, while leveraging a common electrical and physical connection interface with SATA. This compatibility provides users with unprecedented choices for server and storage subsystem deployment.

ProLiant BL465c

The ProLiant BL465c is a half-height server blade based on two dual-core Opteron processors, which feature 2.8GHz/1GHz AMD HyperTransport links for I/O. The HyperTransport technology I/O link is a narrow, high-speed, low-power I/O bus. Opteron processors use HyperTransport links to connect to other processors or I/O. Compared to a shared or bi-directional bus, the HyperTransport point-to-point interconnect has the advantage of no overhead for bus arbitration and easier maintenance of signal integrity. The HyperTransport links provide 8GB/s of throughput between processors for maximum performance and scalability.

ProLiant BL480c

The ProLiant BL480c is a Xeon-based full-height server blade. Up to two dual-core Xeon processors are supported on each blade. The BL480c supports up to 48GB of memory with the new 8GB FBD PC2-5300 2x4GB Memory Kit.

Internal storage on the BL480c features four hot-pluggable SFF SAS or SATA drives:
* SAS — 10K rpm, 36GB, 72GB, and 146GB
* SATA — 5.4K rpm, 60GB

A full-height blade has three mezzanine expansion slots for full, redundant connectivity. The mezzanine expansion slots hold the Type I or Type II mezzanine cards, which enable connectivity to the interconnect bays. Inserting the server blade into the enclosure establishes connections between NIC/mezzanine cards and the signal midplane. The midplane circuitry carries signals from NIC/mezzanine cards to the interconnect devices and vice versa.

ProLiant BL685c

The ProLiant BL685c supports up to four dual-core Opteron processors.
Internal storage in the ProLiant BL685c features two hot-pluggable SFF SAS or SATA drives:
* SAS — 10K rpm, 36GB, 72GB, and 146GB
* SATA — 5.4K rpm, 60GB

Multifunction NICs support multiple fabric protocols, including Ethernet and iSCSI.

Integrity BL860c

The Integrity BL860c is an Itanium 2 2P server blade with up to 48GB of memory. It has two internal SFF SAS hard drives and a standard RAID 0/1 controller with optional Battery-Backed Write Cache (BBWC). It features four NICs, two of which are multifunction NICs, and three mezzanine expansion slots.

Operating system support

[Slide graphic: operating system support matrix; the supported editions are listed below.]

BladeSystem server blades support the following operating systems:
* Microsoft Windows
  - Windows Server 2003, Web edition
  - Windows Server 2003, Server/Standard edition (32- and 64-bit editions)
  - Windows Server 2003, Enterprise edition (32- and 64-bit editions)
  - Windows 2000, Server/Standard edition
  - Windows 2000, Enterprise edition
* Red Hat Linux
  - Red Hat Enterprise Linux 4 (32- and 64-bit editions)
  - Red Hat Enterprise Linux 3 (32- and 64-bit editions)
* SUSE LINUX
  - SUSE LINUX Enterprise Server 9 (32- and 64-bit editions)
  - SUSE LINUX Enterprise Server 10 (32- and 64-bit editions)
* Novell NetWare
  - Novell NetWare 6.5
  - Novell NetWare 6.5 Open Enterprise Server
* VMware
  - VMware ESX Server 2.5

Storage blades

* Usages
  - Dedicated storage blade (SB40c)
    * Standard PCIe SAS storage controller card installed
    * Dedicated internal path to a server blade
  - Dedicated I/O blade
    * Standard PCIe card installed
    * Dedicated internal path to a server blade
  - Shared storage blade
    * No PCIe card installed
    * Dual paths to interconnects

Storage blades enable you to manage data storage and retrieval in a multiple-host environment. With a direct attach storage (DAS) blade, you can scale your storage resources easily by plugging another storage blade into the device bay adjacent to a server. The signal midplane has a direct PCIe link connecting adjacent paired device bays. For example, device bays 1 and 2 are paired, 3 and 4 are paired, 5 and 6 are paired, and so forth through device bays 15 and 16. To activate DAS for a server blade in device bay 3, you must install the storage blade into device bay 4. Because of the internal structure of the signal midplane, their order cannot be reversed (a short sketch of this pairing rule follows below).

By tightly integrating the storage within the BladeSystem, the c-Class makes diskless booting possible. Additionally, storage blades make the cost of shared storage similar to the cost of two drives in each server. Available in mid-2006, the storage blade configuration in the c-Class features six 2.5-inch hot-plug SAS/SATA drives in a half-height blade form factor.
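The device-bay pairing rule described above is mechanical enough to capture in a few lines. Below is a minimal illustrative sketch of that rule (it is not HP software); it simply encodes that odd-numbered bays pair with the even-numbered bay to their right.

```python
# Device-bay pairing rule for direct-attach storage, as described above:
# bays 1-2, 3-4, ... 15-16 are paired, and the storage blade must sit in
# the even bay to the right of its server. Illustrative only.

def das_partner_bay(server_bay: int) -> int:
    """Return the device bay a DAS storage blade must occupy."""
    if not 1 <= server_bay <= 15 or server_bay % 2 == 0:
        raise ValueError("the server must occupy the odd bay of a pair")
    return server_bay + 1

print(das_partner_bay(3))   # -> 4, matching the example in the text
```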
Storage blade usages include:
* Dedicated storage blade for DAS
  - Standard PCIe SAS storage controller card installed
* Dedicated I/O blade
  - Standard PCIe card installed
  - Two available mezzanine slots that double the I/O capacity of the adjacent server blade
* Shared storage blade
  - No PCIe card installed
  - Dual internal paths to interconnect bays

Storage blade configurations

Storage blades can be used in two configurations. Both provide up to six hot-plug SAS or SATA drives and a single graphical user interface (GUI) for storage management and provisioning, with easy setup for server administration:
* DAS blade
* Shared storage blade

Direct-attach storage blade
The SB40c DAS blade provides storage for the adjacent blade. Benefits include:
* Provides a simple way to add drive capacity — add up to six hot-plug SAS or SATA drives, for up to 876GB of total storage
* Delivers high performance — better performance than internal drives
* Supports hardware-based RAID 0, 1, 5, and 6 with ADG

Important! Because the c-Class enclosure device bays are paired, a DAS blade must be placed in the device bay to the right of the supported server blade.

Shared storage blade
The BladeSystem c-Class accommodates up to 15 storage blades within a single enclosure. Benefits include:
* Optimized for boot, swap, and non-mission-critical data
* Gets drives out of the server without giving up local storage (stateless blades)
* Single GUI for storage management and provisioning
* Hardware-based RAID 0, 1, 5, and 6 with ADG

Onboard Administrator module

New and unique to the BladeSystem c-Class, the Onboard Administrator is the enclosure management processor, subsystem, and firmware base used to support the c7000 enclosure and all the managed devices contained within the enclosure. It provides a single point of contact for users performing basic management tasks on server blades or switches within the enclosure.

The Onboard Administrator module has two major functions:
* Driving all management features through I2C and Intelligent Chassis Management Bus (ICMB) interfaces.
  - As I2C master, it controls and monitors all data and interrupts with every subsystem in the infrastructure and in each server. It provides I2C to fans, power supply modules, and interconnect and server blade bays. All subsystems have electrically erasable programmable read-only memory (EEPROM) to store field replaceable unit (FRU) data.
  - Through ICMB, it shares information with the other infrastructure management modules at the rack level (for example, power and rack location).
* Managing the 16 connections to the interconnect modules.
  - The 16 connections are concentrated to a single RJ-45 output point (no redundancy), which enables one IP address per infrastructure.
  - Users can access the QLogic microcontroller through an external serial port.
The rear of each module has an LED (blue UID) that can be turned on (locally and remotely) and used to identify the enclosure from the back of the rack.

The c7000 enclosure ships with one Onboard Administrator module installed and can support up to two HP Onboard Administrator modules. The system remains functional at its last configuration if either module is removed, but no additional blades or modules can be added.

Note: The Onboard Administrator is not interchangeable with switch modules.

Onboard Administrator features and functions

* Monitors and manages elements of the enclosure
* Provides local and remote management capability
  - The Insight Display panel enables installation and configuration actions at the rack
  - Browser access permits remote management through a GUI

The Onboard Administrator features enclosure-resident management capability and is required for electronic keying configuration. It performs initial configuration steps for the enclosure, enables run-time management and configuration of the enclosure components, and informs users of problems within the enclosure through email, SNMP, or the Insight Display.

The Onboard Administrator monitors and manages elements of the enclosure:
* Shared power
* Shared cooling
* I/O fabric
* iLO

The Onboard Administrator also provides local and remote management capability through:
* Insight Display
  - Provides setup wizards for easy installation
  - Enables diagnostics and configuration actions at the rack
* Browser access
  - Permits remote management through a web-based GUI
  - Performs initial configuration steps for the enclosure

Insight Display

Main Menu:
* Initial setup installation wizard
* HP BladeSystem diagnostics
* Power management
* Enclosure management
* Local administration
* PIN-based security

Configuration menu:
* Power settings
* Onboard Administrator IP address
* Enclosure Name
* Rack Name
* Insight Display Lockout PIN

The Insight Display panel is designed for configuring and troubleshooting while standing next to the enclosure in a rack. It provides a quick visual view of enclosure settings and at-a-glance health status. Green indicates that everything in the enclosure is properly configured and running within specification. Other settings include:
* Enclosure health
* Enclosure name/rack name
* Onboard Administrator IP address
* Ambient temperature

Main Menu
From the Insight Display Main Menu you can run installation wizards, diagnostics, and other administrative and management tasks. For example, if you want to look at the enclosure settings, you would tap the down button to the next menu item.

Configuration Menu
From the Configuration Menu, you can configure the enclosure, update settings, and make changes directly from the rack. Enclosure settings available from the Insight Display panel include:
* Power settings
* Onboard Administrator IP address
* Enclosure Name
* Rack Name
* Insight Display Lockout PIN
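Once the Onboard Administrator has an IP address, settings like these can also be inspected remotely through its command line interface. The sketch below is a minimal example assuming SSH access to the OA CLI, using the third-party paramiko library; the address and credentials are placeholders, and command names such as SHOW ENCLOSURE STATUS should be verified against the OA CLI guide for your firmware release.

```python
# Minimal sketch: query a c7000 Onboard Administrator over SSH.
# Assumes the third-party paramiko library; host/credentials are placeholders,
# and command spellings should be checked against the OA CLI documentation.
import paramiko

def oa_command(host, username, password, command):
    """Run one OA CLI command and return its text output."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password)
    try:
        _, stdout, _ = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    oa = "192.0.2.20"  # hypothetical Onboard Administrator address
    print(oa_command(oa, "Administrator", "password", "SHOW ENCLOSURE STATUS"))
    print(oa_command(oa, "Administrator", "password", "SHOW SERVER LIST"))
```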
Insight Display/Onboard Administrator GUI

The Insight Display and the Onboard Administrator GUI target two forms of engagement:
* Local rack technician — The onsite technician can perform the following tasks:
  - Install power supplies and fans
  - Place mezzanine cards on servers
  - Install servers
  - Install interconnects
  - Power on
  - Access the Insight Display
  - Complete the Setup Wizard
* Remote administration — After there is an IP address for the enclosure, a remote administrator can configure the enclosure:
  - Role-based user account setup and privileges
  - Enclosure device monitoring and alerting
  - Power and cooling management mode settings and monitoring
  - Enclosure bay iLO IP addressing and interconnect bay IP addressing
  - Access to individual device configuration utilities
  - Replication of settings across multiple enclosures
  - Configure interconnects
  - Install operating systems

Unified Infrastructure Management

[Slide graphic: Server Essentials (ProLiant) — provisioning, virtual machine management, V2P/P2V migration, performance management. Storage Essentials (StorageWorks) — provisioning, database viewers (Oracle, SQL, Sybase), Exchange viewer, file system viewer, chargeback.]

The Onboard Administrator provides a secure interface and is fully integrated into all of the HP system management applications. It can be managed locally, remotely, and through HP SIM tools. It integrates seamlessly with HP OpenView, RDP, Blade Essentials, and HP Control Tower, and offers web-based and command line interface (CLI) manageability.

HP SIM and HP Essentials for ProLiant and Integrity servers and HP StorageWorks products offer control of the x86 environment through one unified management console. This integration:
* Monitors health and performance of server blade, storage, and virtual resources
* Automates deployment of physical and virtual resources
* Delivers complete remote control of the physical or virtual environment
* Converts assets from physical to virtual and back again in minutes
* Searches out the fastest network connection and restores lost connectivity
* Detects system failures and automatically initiates system recovery
* Identifies and patches system, operating system, and application vulnerabilities

HP SIM 5.1

HP SIM is a tool that leverages the best practices of UNIX, Linux, and Microsoft Windows administration. It manages a multiple operating system architecture and adapts to multivendor environments to enable secure access from anywhere.

HP SIM 5.1 provides a simple platform to manage server and storage environments. It enables you to consolidate the management of hardware resources, reduce troubleshooting and asset management complexity, and increase control over physical and virtual elements within your server and storage infrastructure.
Highlights of the features of HP SIM 5.1 include:
* Added support for managed system configuration to include Linux, HP-UX, and Windows operating systems
* New System Page Identity tab available for servers, clusters, complexes, partitions, storage arrays, storage hosts, storage switches, and tape libraries, with specific information for each
* Improved access to Discovery options, which include a Discovery page with tabs for Automatic, Manual, and Hosts Files configuration
* Automatic discovery of network storage systems from controllers down to the logical unit number (LUN)

Functionality built into HP SIM 5.1 includes the ability to:
* View storage array capacity details, including unallocated space, RAID overhead, usable bytes assigned to ports, and usable bytes not assigned to ports
* Set system properties for multiple systems at the same time
* Suspend or resume monitoring of multiple systems at the same time
* Create new command line tools
* Create and delete categories and reports through application program interfaces (APIs) accessible from the command line interface (CLI) using XML files
* Distribute, reconcile, and report license keys — License Manager provides support for ProLiant Essentials based licensing

HP SIM and server blades

With HP SIM, you can:
* View blades, enclosures, and racks
* Obtain a visual indicator of a blade in a rack
* Highlight the current blade
* Aggregate infrastructure alerts
* Query by rack or enclosure
* Associate management processors with blades
* Click each server blade to access a device-specific page

HP SIM offers special features for server blades that are enabled automatically when HP SIM detects them.

Task plug-ins

* Virtual Machine Management
* Provisioning
* Performance Management Pack
* Metering and chargeback
* Vulnerability and Patch Management

HP SIM task plug-ins provide additional value for HP BladeSystems. Customers add task plug-ins based on what is required for their environment and what they are willing to spend. By making the tasks modular, customers can add the tasks most important to them. These plug-ins make managing large numbers of server blades more efficient:
* Virtual Machine Management — Manage your VMware or Microsoft Virtual Server hosts and guests
* Provisioning — Enables provisioning of a server with a script or image (integrated with RDP)
* Performance Management Pack — Provides details on the performance of a server or group of servers, and diagnostics to assist in resolving performance bottlenecks
* Metering and chargeback — Monitors usage of a server or group of servers for purposes of billing under the HP Utility Pricing program
* Vulnerability and Patch Management — Allows you to deploy patches, hotfixes, and service packs to your Microsoft and Red Hat servers within a single HP SIM console
HP BladeSystem Management Suite

* Enhancement plug-in to HP SIM
* Centralized component management
  - Server blades
  - Interconnects
  - Enclosures
  - Racks
* Benefits
  - Easy-to-use user interface
  - Single point of BladeSystem management
  - Folder management for organizing blade components

The HP BladeSystem Management Suite is an enhancement to HP SIM that provides a central location for accessing and managing all of the components in a blade infrastructure, including:
* Server blades
* Interconnects
* Enclosures
* Racks

HP BladeSystem integrated management offers three main benefits for blade management:
* Easy-to-use user interface
* Single point of BladeSystem management
* Folder management for organizing blade components

Available as part of an enclosure or as a software-only add-on, the HP BladeSystem Management Suite integrates fully with HP SIM. It can only be installed with an initial installation of HP SIM, but you can upgrade the HP BladeSystem Management Suite separately from HP SIM. The HP BladeSystem Management Suite is available for a fee in a pack that includes a single copy of HP SIM and eight licenses for each of the following products:
* RDP
* Performance Management Pack (PMP)
* Vulnerability and Patch Management Pack (VPM)

Using RDP to deploy HP BladeSystem servers

RDP integrates HP and Altiris software to automate the process of deploying and provisioning server operating systems and software, enabling companies to quickly and easily adapt to changing business demands. RDP combines the graphical user interface (GUI)-based remote console deployment capability of the Altiris Deployment Server with the power and flexibility of the HP SmartStart Scripting Toolkit, through the integration of the ProLiant Integration Module.

RDP features:
* Deployment Solution
  - Out-of-the-box functionality for BladeSystem server blades
  - iLO management capabilities
* ProLiant Integration Module
  - ProLiant server preconfigured scripts and batch files
  - Sample configuration jobs
  - Integrated support software
* HP SmartStart Scripting Toolkit
  - Combination of Linux and WinPE utilities and batch files for configuring and deploying servers in a customized, predictable, and unattended manner

RDP and HP SIM

RDP and HP SIM integrate to resolve server failures. To accomplish this, several servers must be dedicated as spares. When an in-service server fails, HP SIM detects it, activates RDP, and passes along the name of the failed server. RDP deploys the image of the failed server onto a spare server, which then carries out the function of the failed server. This integration enables problem resolution without extended server downtime.

Note: To deploy the server blades without RDP, you must create a bootable CD or an image of a bootable CD and use the iLO virtual media capabilities.
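The failure-recovery flow just described can be pictured as a small control loop. The sketch below is purely conceptual; it is not the HP SIM/RDP API (which is driven through the Deployment Server console), and every function name in it is hypothetical.

```python
# Conceptual sketch of the HP SIM -> RDP failover flow described above.
# None of these functions exist in the real products; they only illustrate
# the sequence: detect failure, pick a spare, redeploy the failed image.
from dataclasses import dataclass, field

@dataclass
class Pool:
    spares: list = field(default_factory=list)   # idle spare blades
    images: dict = field(default_factory=dict)   # server name -> image ID

def handle_failure(pool: Pool, failed_server: str) -> str:
    """React to a failure event as the SIM/RDP integration is described:
    take a spare out of the pool and deploy the failed server's image to it."""
    if not pool.spares:
        raise RuntimeError("no spare blades available")
    spare = pool.spares.pop(0)
    image = pool.images[failed_server]            # image captured earlier
    deploy(image, target=spare)                   # hypothetical RDP job
    return spare

def deploy(image: str, target: str) -> None:
    print(f"deploying image {image!r} to spare blade {target!r}")

# Example: blade-07 fails; its image is pushed to the first available spare.
pool = Pool(spares=["spare-01", "spare-02"],
            images={"blade-07": "web-frontend-v3"})
print("service restored on", handle_failure(pool, "blade-07"))
```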
Using RDP to deploy clusters

RDP automates repetitive tasks such as configuring and deploying servers. The implementation of a rapid deployment infrastructure significantly reduces IT costs associated with server deployment and redeployment. Key benefits include improved overall consistency of configurations across multiple servers and reduced IT resource requirements during large-scale deployments. Additionally, the iLO functionality built into ProLiant server blades can be a valuable resource for monitoring the deployment process on the target servers and recovering from issues that may occur during the deployment process.

The ProLiant server blade cluster deployment process consists of three phases:
* Phase 1 consists of a job that accomplishes the server hardware configuration, installation of the operating system, and installation of the software required to connect to the (shared) Fibre Channel storage.
* Phase 2 comprises manual configuration of both the cluster private interconnect (network) and the shared storage. The shared storage configuration includes the zoning configuration on the Fibre Channel SAN switches for the cluster nodes and storage, creating the logical volumes on the target storage subsystem, and presenting the volumes to the cluster nodes.
* In Phase 3, a second job creates the appropriate volumes on the shared storage and deploys the cluster service to each cluster node, according to the operating system platform.

You can combine these three phases of cluster deployment into a robust solution using RDP and the blade cluster deployment jobs.

Note: HP SoftPaq SP24893 contains all of the scripts and sample configuration files used to deploy ProLiant server blade clusters to StorageWorks SANs for Windows 2000 Advanced Server and Microsoft Windows Server 2003 Enterprise Edition. The SoftPaq also contains a setup script to import these files and the blade cluster deployment jobs to the Deployment Server Console. You can download the SoftPaq from: http://www.hp.com/go/support

BladeSystem adoption scenarios

* IT consolidation
* Multitier applications
* SMP-to-Linux migration
* HPC
* CCI

The new, modular building block of next-generation data centers.

Consider HP BladeSystem solutions first for new deployments of applications and services because of the underlying advantages of cost and efficiency that they can deliver to any workload in scale-out architectures. The main advantages include large gains in efficiency of personnel and improved processes. Other scenarios provide dramatic savings in acquisition costs over proprietary systems and improve TCO. These scenarios include:
* IT consolidation — By taking advantage of the pooling and virtualization capabilities of blade systems, enterprises can consolidate multiple, under-utilized, special-purpose servers onto a compact and versatile HP BladeSystem. Working with a variety of software partners, HP enables an average consolidation onto virtual server partitions of at least 4:1. HP BladeSystem solutions also consolidate storage, network, and power, reducing the complexity of cabling and driving greater utilization of resources.
Ideal candidates for IT consolidation include:
  - Web and e-commerce applications
  - Mail and messaging applications
  - Microsoft Windows applications
  - Thin client/terminal services
  - Infrastructure applications
* Multitier applications — With multiple data center resources networked together, HP BladeSystems provide access to virtual storage and networking. Enterprises no longer need a set of web, application, and database servers dedicated to multitier applications. Virtualization and load balancing across the blade farms yield greater management efficiency and improved use of multiple applications. Ideal candidates for multitier applications include:
  - Web and e-commerce applications; mail and messaging
  - Streaming media
  - Small and medium databases
  - Enterprise applications such as ERP and CRM
* SMP-to-Linux migration — Migrating from large UNIX SMP systems to highly efficient Linux server blade clusters can reduce platform costs 50 to 70% and result in significant long-term savings as a result of increased data center efficiency and simplified management. By leveraging virtual storage, cluster file systems, and scale-out multinode database environments, along with the scale-out flexibility of the HP BladeSystem architecture, enterprises can improve storage utilization and increase performance and availability — all at lower cost than UNIX-based SMP servers. HP BladeSystem solutions based on industry standards also deliver lower annual support costs, lower costs to achieve high availability, and provide more control and flexibility. Ideal candidates for SMP-to-Linux migration include:
  - Custom (homegrown) applications (file-based and relational database-based)
  - Enterprise applications such as large scale-out databases, enterprise resource planning (ERP), and customer relationship management (CRM) requiring numerous processors
* High-performance computing (HPC) — Because of their lower infrastructure and platform costs and their ability to take advantage of spare computing cycles, HP BladeSystem infrastructures are well-suited for high-performance computing applications. By capitalizing on the density, power efficiency, and integrated technologies of blade systems, enterprises can build large compute clusters, in conjunction with grid middleware, to handle the most intense HPC requirements. Ideal candidates for high-performance computing include:
  - Technical computation clusters for life sciences
  - Visualization clusters for entertainment or oil and gas industries
  - Financial and portfolio analysis using computation clusters
* HP Consolidated Client Infrastructure (CCI) — Centralize desktop compute and storage resources into easily managed, highly secure data centers, while providing users the convenience and familiarity of a traditional desktop environment. CCI provides a dynamic workplace solution that dramatically reduces desktop TCO while raising levels of security, service quality, and ease of management. Ideal candidates for HP CCI solutions include:
  - Large or widespread desktop computing environments
  - Environments where the costs of PC management and support are high
  - Environments where security and consistency are key concerns

Which architecture is right for customers?
Customer profiles that fit the BladeSystem p-Class platform include customers who need:
* 1Gb Ethernet or less
* 2Gb Fibre Channel or less
* No InfiniBand
* No more than two fabrics

Customer profiles that fit the BladeSystem c-Class platform include customers who need:
* More than 1Gb Ethernet
* More than 2Gb Fibre Channel
* InfiniBand
* Support for more than two fabrics simultaneously

To ensure a smooth transition, the p-Class and c-Class have common management and power connectivity. In addition, both work with existing server/LAN/SAN structures.

ProLiant BL c-Class cluster support

* IP failover and network load balancing — distributes TCP/IP traffic across multiple servers
* Application servers — enable software scaling
* High availability (failover) clustering — requires shared storage

Server blades, as a solution for Internet linkages and IP transaction routing, are cost-effective and scalable. Blades can operate as web servers and provide routing to application and data servers that coexist in the same rack, making a very densely packaged solution for web application processing.

ProLiant BladeSystems support various types of clustering, including (a conceptual load-balancing sketch follows this list):
* IP failover and network load balancing — IP failover and network load balancing applications distribute incoming TCP/IP traffic among multiple servers, providing increased performance and faster response times.
* Application servers — Application servers enable software scaling by increasing the processing capacity available for an application. A management application coordinates the activities of individual application servers. This typically involves managing the replication of data and applications across the application servers so that they are all identical, and then routing incoming requests to achieve load balancing and availability across the application servers. Because the applications and data are stored locally, server blades provide a platform for an application server environment.
* High availability (failover) clustering — Most application failover clustering requires that the nodes in the cluster be connected to a shared storage system, such as the StorageWorks Modular Array (MA) 8000 Fibre Channel storage subsystem. ProLiant server blades support SAN connectivity and can be used in failover cluster environments.
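To make the load-balancing idea concrete, the sketch below shows one common approach: hashing each client's source IP so that the same client consistently lands on the same server, and rehashing when a node fails. It is a conceptual illustration only, not HP or Microsoft NLB code, and the server names are hypothetical.

```python
# Conceptual sketch of source-IP hash load balancing across blade servers.
# Illustrative only; real NLB/IP-failover products implement this in the
# network stack. Server names are hypothetical.
import hashlib

SERVERS = ["blade-01", "blade-02", "blade-03", "blade-04"]

def pick_server(client_ip: str, servers: list) -> str:
    """Map a client IP to one healthy server, deterministically."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# Normal operation: each client consistently reaches the same blade.
print(pick_server("203.0.113.7", SERVERS))

# Failover: drop a failed node and rehash; traffic redistributes
# across the remaining servers without reconfiguring clients.
healthy = [s for s in SERVERS if s != "blade-02"]
print(pick_server("203.0.113.7", healthy))
```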
High Performance Computing

[Slide graphic: HPC application areas including mechanical engineering/virtual prototyping, scientific research, war gaming, life and materials sciences, finance, product lifecycle management, electronic design, classified/defense, informatics, and geo sciences. HPC problems are characterized by computational, data-intensive, or numerically intensive tasks, and by complex computations with large data sets requiring exceptionally fast throughput.]

High Performance Computing (HPC), also referred to as High Performance Technical Computing (HPTC), means "compute or data intensive." HPC has been defined by IDC as "Performing the most demanding computational, data-intensive, or numerically intensive tasks. These tasks are characterized by any combination of complex computations or very large data sets and require exceptionally fast throughput."

Examples
* Automotive industry (MCAE) — HP customers include Audi, BMW, Ford, and Toyota.
* Circuit design (EDA) — HP customers include Qualcomm and Nokia.
* Entertainment — HP customers include Disney, DreamWorks, and Industrial Light & Magic.
* Financial services industries — HP customers include most securities firms.
* Life and materials sciences — HP customers include GLAXO, Incyte, Pfizer, and Geneprot.
* Oil and gas industries — HP customers include Shell, BP/Amoco, Anadarko, Chevron/Texaco, Ocean Energy, and Exxon/Mobil.

Companies and institutions who use HPC continue to invest in HPC technologies. Most companies realize that they must continually invest in HPC to remain competitive, even during difficult economic times. HPC systems save manufacturing companies millions of dollars by enabling them to virtually prototype their products before building physical models. In fact, physical prototypes increasingly are built only to validate the computer-generated models. In some cases, it is not possible to test potential designs except through virtual prototyping. Companies use HPC not only to design new products, but also to improve current designs.

HPC systems are also used to simulate very complex systems. For example, weather forecasting is a very computationally intensive activity. Almost every government in the world has a meteorological center. Many military applications also require HPC systems. Major university and government research institutions are engaged in research activities that require HPC systems. Research may be focused on fundamental physics, genetics, drug discovery, or engineering.

These applications are either computing intensive (number crunching), data intensive (requiring high I/O performance), or both. Most such applications can run in parallel across many nodes in clusters of ProLiant or Integrity servers.

HP server blades for HPC

[Slide graphic: blade advantages for HPC clusters, showing rack density and network cable reduction compared with rack-optimized servers.]

The majority of HPC clusters have been built using rack-optimized servers such as the ProLiant DL145, ProLiant DL140, and others. In comparison to rack-optimized servers, blades have the advantage of denser packaging, more integrated networking and storage, and simplified cabling and power. The benefits of blades applied to HPC clusters include:
* Designed for performance and scalability
* Reduced interconnect and network complexity
* High density
* Simplified server management
* Centralized power management

The slide shows the advantages of server blades compared to rack-optimized servers in terms of rack density and network cable reduction.
In HPC clusters, c-Class server blades offer:
* Up to 64 ProLiant half-height server blades in a single rack
* Simpler networking using interconnects within the blade enclosure

HPC partner-driven software portfolio

[Slide graphic: partner software stack covering data management, software development, cluster management, job management, and grid management.]

Hardware is only part of the HPC cluster solution. This slide shows a stack of software that is provided by HP partners in the areas of data management, software development, cluster management, and grid management.

Summary

HP BladeSystem solutions are industry-standard, scale-out architectures that provide a more efficient way to build an infrastructure optimized for efficiency and change. The main benefits of the BladeSystem include:
* Reduced data center footprint
* Lower connectivity costs and simpler cabling
* Fewer spare parts
* Easier installation, configuration, and management
* Higher data center productivity

HP offers a complete portfolio of modular blade products and solutions. The four solution families are:
* Consolidated Client Infrastructure (CCI)
* HP Blade Workstation Solution
* BladeSystem p-Class server blades
* BladeSystem c-Class server blades

The primary management and deployment tools used with HP BladeSystem solutions are:
* HP SIM
* RDP

Adoption of the BladeSystem servers and infrastructure spans IT consolidation, multitier application implementations, SMP-to-Linux migration, high-performance computing, and consolidating clients to CCI.

HP BladeSystem Administration
HP BladeSystem c-Class Enclosure
Module 2
he646s a.01

Objectives

After completing this module, you should be able to:
* Discuss the enclosures available in the BladeSystem c-Class
* Describe the features of the HP BladeSystem c-Class c7000 enclosure
* Identify the components of the enclosure
* Explain the numbering schemes for device bays, interconnect bays, and fans
* Explain how to configure redundant Onboard Administrator modules

BladeSystem c-Class enclosure
BladeSystem c-Class enclosure key differentiators

[Slide graphic: c3000 - absolute lowest entry price, best overall cost per blade, easy local management with BladeSystem tools, local KVM, remote management options, 220V single-phase power; c7000 - highly scalable, optimized for larger datacenter deployments, 220V single- and three-phase power, up to four redundant fabrics]

BladeSystem c-Class enclosure positioning

HP offers two c-Class enclosures:
* c3000 - A low-cost, small version targeted for remote sites and small and medium businesses
* c7000 - An enterprise version designed for datacenter applications

Both c-Class enclosures share common critical components such as servers, interconnects, mezzanine cards, storage blades, power supplies, and fans. The focus in this course is on the c7000 enclosure, which is easy to set up because it features:
* Cableless server installation
* Enclosure manager wizard setup
* Multiple enclosure setup functions

More compact than p-Class

When fully populated with half-height blades, the c7000 enclosure has server capacity equivalent to the p-Class enclosure (16 BL20p-equivalent blades) in a 29% smaller footprint.

Rack compatibility

You can install up to four BladeSystem c-Class enclosures in a rack, for a maximum of 64 half-height server blades in one rack. The HP BladeSystem c7000 enclosure is compatible with the following racks:
* 41U, 33U, and 25U HP Rack System/E
* 42U, 36U, and 22U HP Rack 10000 Series and Compaq Rack 9000 Series racks
  Note: The system is optimized for HP Rack 10000 Series racks.
* Telco racks
* Third-party rack cabinets that meet the following requirements:
  - Width - 48.3cm (19 inches)
  - Depth - 73.7cm (29 inches) between front and rear RETMA rails
  - Clearance - 7.6cm (3 inches) minimum clearance between rear RETMA rails and rear rack door to accommodate system cabling
  - Open area - Minimum of 65% open area to provide adequate airflow through any rack front or rear doors

The enclosure also can be used in a rack-free environment. The following condition must be met when performing a rack-free installation:
* A fully populated enclosure can weigh up to 450 pounds (204.2 kilograms). The object supporting the enclosure must be flat and able to withstand this weight.

c7000 enclosure - Front

The c7000 enclosure supports two blade form factors:
* Full-height server blade (up to 8 per enclosure)
* Half-height server blade (up to 16 per enclosure)

Up to 15 half-height storage blades can be inserted into the device bays as well, for a total of 90 drives per enclosure. There are six hot-plug power supplies rated at 2250W each. The power supplies can be configured with N+N redundancy or N+1 redundancy. An integrated HP BladeSystem Insight Display is linked to the BladeSystem Onboard Administrator for local and remote system management.

Half-height server blade shelf

The c7000 enclosure ships with four shelves to support half-height devices. To install a full-height device, remove the shelf and the corresponding blanks.

Note: If you are using full-height server blades in the enclosure, any empty full-height device bays should be filled with blade blanks. To make a full-height blank, two half-height blanks must be joined together.
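Because a full-height device occupies the space of two stacked half-height bays (which is why two half-height blanks join to make a full-height blank), a quick capacity check for a mixed configuration can be sketched as follows. This is a rough planning aid based on the 8 full-height / 16 half-height limits above, not an HP tool:

```python
# Rough c7000 capacity check for a mix of full-height and half-height devices.
# Premise from the text: up to 8 full-height or 16 half-height devices per
# enclosure; each full-height device uses the space of two half-height bays.

def fits_in_c7000(full_height: int, half_height: int) -> bool:
    """Return True if the requested device mix fits in one c7000 enclosure."""
    if full_height > 8 or half_height > 16:
        return False
    half_bays_used = full_height * 2 + half_height
    return half_bays_used <= 16

print(fits_in_c7000(8, 0))   # True  - fully populated with full-height blades
print(fits_in_c7000(4, 8))   # True  - 4 full-height + 8 half-height
print(fits_in_c7000(4, 9))   # False - one half-height device too many
```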
c7000 enclosure - Rear

The rear of the c7000 offers eight interconnect bays. Each bay can support a variety of pass-thru modules and switch technologies, including Ethernet, Fibre Channel, and InfiniBand. The enclosure supports up to four redundant I/O fabrics and yields up to a 94% reduction in cables compared with rack-mounted server configurations.

For redundant pass-thru and switch configurations, each interconnect would have a redundant module in the bay next to it. For high-performance switch applications, two neighboring bays can be combined into one. A common application would have the bottom four bays combined into two bays supporting redundant 10Gb Ethernet or 20Gb InfiniBand switches.

The Onboard Administrator module is located below the switch bays and is hardware-redundant.
* One Onboard Administrator module ships standard; the enclosure supports a maximum of two.
* Each Onboard Administrator module has one Ethernet and one serial port.

The enclosure link module links enclosures in a rack in a manner similar to the p-Class. Enclosure links are designed to support only c-Class enclosures in the same rack.

c7000 enclosure - Side

Signal midplane

The signal midplane functions as a PCI-Express bus and supports electronic keying. It provides the following pathways:
* 10/100 and Inter-Integrated Circuit (I2C) to server bays
* 10/100, RS232, and I2C to interconnect bays
* I2C to fans
* I2C to power supply modules

Power backplane

The power backplane carries 12V DC to server blades, fans, and interconnects. All power supplies feed into a single sheet of copper for distribution to the devices.

Power connectivity

HP offers the c7000 enclosure with a choice of single-phase or three-phase AC input modules. If needed, customers can constrain the maximum BTUs per enclosure and rack.

Signal midplane

The signal midplane transfers I/O signals (PCI-Express, Gigabit Ethernet, Fibre Channel) between the server blades and the appropriate interconnects. It also transfers signals between the server blades, fans, power supplies, and the Onboard Administrator.

The central section of the midplane supports eight individual 180-pin interconnect connectors, one for each interconnect bay (labeled SWM in the slide graphic). The connectors support eight individual switches, four double-bay switches, or a combination of the two.

Each half-height server blade has a 100-pin connector with 64 high-speed signal pins. This allows up to 16 lanes for connectivity to LAN, SAN, InfiniBand, or another type of interconnect. The lanes are hard-wired between the server bay connection (labeled SVB in the slide graphic) and the switch module connector. Full-height server blades have up to 32 lanes available, for up to 320Gb of available I/O bandwidth per server.
c7000 enclosure device bay numbering

Full-height server blades should be populated from left to right, when viewed from the front of the enclosure.

Half-height server blades should be populated top and bottom, from left to right, when viewed from the front of the enclosure. The first two half-height server blades would be placed in bays 1 and 9, the second two in bays 2 and 10, and so on until the enclosure is full.

Important! When looking at the rear of the enclosure, server blade bay numbering is reversed. The Onboard Administrator ensures that server blades are correctly placed before allowing systems to power on.

Caution: To prevent improper cooling or thermal damage, do not operate the server blade or the enclosure unless all device bays are populated with either a component or a blank.

c7000 enclosure interconnect bay numbering

Each enclosure requires interconnects to provide network access for data transfer. The interconnects reside in bays located on the rear of the enclosure. The rear-mounting interconnect modules are designed for redundant interconnections between server blades and switches.

c-Class enclosure fan numbering

The enclosure is divided into four zones. Fans in each zone provide cooling for the blades in that zone, plus redundant cooling for other blades. The enclosure ships with four HP Active Cool fans and supports up to 10 fans. The number of fans that you order affects where you will install the fans (see the sketch below):
* Four fans: Install in bays 4 and 5, 9 and 10
* Six fans: Install in bays 3, 4, and 5, and 8, 9, and 10
* Eight fans: Install in bays 1, 2, 4, and 5, and 6, 7, 9, and 10

Important! Install fan blanks in any unused fan bays.
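The bay-pairing and fan-placement rules above are simple enough to encode. A minimal sketch with my own helper names (not HP software; the Onboard Administrator and Insight Display enforce the real rules):

```python
# Helper sketches for the c7000 placement rules described above.

def half_height_bay_order(count: int) -> list:
    """Bays for the first `count` half-height blades: 1 and 9, 2 and 10, ..."""
    order = []
    for column in range(1, 9):          # eight front columns
        order += [column, column + 8]   # top bay, then the bay below it
    return order[:count]

# Supported fan configurations and their required bays (from the list above).
FAN_BAYS = {
    4:  {4, 5, 9, 10},
    6:  {3, 4, 5, 8, 9, 10},
    8:  {1, 2, 4, 5, 6, 7, 9, 10},
    10: set(range(1, 11)),
}

print(half_height_bay_order(4))   # [1, 9, 2, 10]
print(sorted(FAN_BAYS[6]))        # [3, 4, 5, 8, 9, 10]
```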
c-Class AC input modules

The AC input module is available in a single-phase power configuration (with 3+3 C19 plug-ins) or a three-phase power configuration.

With a single-phase power configuration, connect the AC power cables to the power connectors on the rear of the enclosure corresponding to the power supplies populated on the front of the enclosure. In three-phase power configurations, the AC power cables are already connected to the AC input module.

Standby Onboard Administrator

When a second Onboard Administrator is placed into the enclosure, it becomes the standby Onboard Administrator. It should be placed in the right Onboard Administrator slot in the rear of the enclosure. The left Onboard Administrator is the primary Onboard Administrator by default.

You can force a failover within the Onboard Administrator user interface. In this case, the right Onboard Administrator becomes the active Onboard Administrator and the Onboard Administrator on the left becomes the standby module.

In rare instances, the two Onboard Administrator modules might not be redundant. This error can occur when the firmware versions between the two Onboard Administrators are incompatible. The Insight Display and the main status screen of the Onboard Administrator will identify this error and will alert the user through Simple Network Management Protocol (SNMP), if enabled.

Note: You can hot-plug the Onboard Administrator modules, but they are not hot-swappable. At failover, the redundant Onboard Administrator will take over enclosure management, but will not assume the IP address of the original Onboard Administrator. If you replace the failed Onboard Administrator in the default location, it will delete all the management settings currently in place. To avoid this, remove the operating Onboard Administrator and install it in the default location, and then install the new unit in the passive (right) Onboard Administrator bay.

Onboard Administrator cabling

* RJ-45 - Connects to the management network using a CAT5 patch cable
* USB - For future USB connections; not currently supported
* Serial connector - Used for the command line interface (CLI); connects to a laptop or computer with a null-modem serial cable (RS232)
* Enclosure link-down port - Connects to the enclosure up port on the enclosure below with a CAT5 patch cable
* Enclosure link-up port
  - Connects to the enclosure down port on the enclosure above with a CAT5 patch cable
  - On a standalone enclosure, or the top enclosure in a series of linked enclosures, the top enclosure up port functions as a service port and temporarily connects to a PC with a CAT5 patch cable

Enclosure link module

The enclosure link module is separate from the Onboard Administrator module and is contained within the Onboard Administrator module sleeve. The uppermost enclosure up port functions as a service port that provides access to all the c-Class enclosures in a rack. If no enclosures are linked together, the service port is the top enclosure up port on the enclosure link module. Linking the enclosures enables the rack technician to access all the enclosures through the open up (service) port. If you add more c-Class enclosures to the rack, you can use the open enclosure up port on the top enclosure or the down port on the bottom enclosure to link to the new enclosure.

Important! The HP BladeSystem c7000 enclosure link ports are not compatible with the HP BladeSystem p-Class enclosure link ports.

Blade Server Enclosures
Module 3
Objectives

After completing this module, you should be able to:
* Compare the original blade server enclosure with the enhanced blade server enclosure
* Perform a blade server enclosure upgrade
* Describe the blade server enclosure and identify its LEDs
* Describe a blade server sleeve

Introduction

The HP BladeSystem advances and optimizes the principles of a rack environment by repackaging the entire infrastructure and integrating it with network, storage, and power. A variety of solutions are possible with the many combinations of modular components within the integrated infrastructure.

Each HP ProLiant BL p-Class enclosure holds up to 16 blade servers and two network options and connects them to a common backplane. This design eliminates the need for multiple cables for each component, reducing cables by up to 87%. The enclosure also provides interconnection to other enclosures, shared power, and networked storage. The features and benefits of this design include:
* Wired-once - Insert, provision, and repurpose blade servers and network switches without touching another cable.
* Built-in management - Each enclosure knows the physical location of each blade server or switch plugged into the enclosure, proactively monitoring and reallocating resources as needed.
* Availability - Hot-plug interconnect switches and management modules have redundant power; all bays are individually fused to protect the backplane.
* Flexibility by design - Intel and AMD-based servers and network options can be mixed in the same enclosure, share a centralized power system, and run different operating systems and applications. The enclosure itself installs easily in HP, Telco, and third-party racks.
* Serviceability - Tool-less removal of backplanes and management modules without removing the enclosure from the rack or blades from the enclosure.
* Dimensions - 6U high, standard 48-cm (19-in) wide, and 73.30-cm (28.86-in) deep.

Blade server enclosure overview

Blade server enclosures enable wired-once connectivity of blade servers to virtual LANs (VLANs), storage, and power. Each enclosure includes a built-in management module that reports thermal, power, and protection fuse events and provides asset and inventory information. The management module and interconnects extend scalability beyond the enclosure, enabling each server to communicate with other blade server enclosures. Most components are hot-pluggable and can be removed with the power on using no tools.

A blade server enclosure houses multiple blades in a compact, precabled chassis. Each enclosure has 10 bays, with eight bays for up to 16 blade servers. The two outside bays hold the interconnect options - either the RJ-45 Patch Panel 2, GbE2, or CGESM interconnect switches. The graphic shows interconnect A on the left and interconnect B on the right, with two BL20p servers installed in server bays 1 and 2. This configuration enables the blades to share common resources such as power supplies, cooling fans, and interconnects. The 6U rack-mountable enclosures are easily installed in standard 22U, 36U, and 42U racks with spring-loaded rack rails and thumbscrews.

The blade server enclosure provides these facilities to the blade servers:
* DC power
* Network interconnects and cabling routes
* SAN interconnects and cabling routes
* Management and monitoring

The number and type of blade servers determines the required quantity of enclosures in a system (see the sketch below). The maximum number of blade servers per enclosure is:
* ProLiant BL20p and BL25p - 8
* ProLiant BL30p and BL35p - 16
* ProLiant BL40p - 2
* ProLiant BL45p - 4
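As a rough sizing aid, the per-model limits above translate directly into an enclosure count. This is a sketch with made-up helper names, and it deliberately ignores mixing models within one enclosure:

```python
# Sketch: how many p-Class enclosures a given blade population needs,
# using the per-model maximums listed above (one model per enclosure).
import math

MAX_PER_ENCLOSURE = {
    "BL20p": 8, "BL25p": 8,
    "BL30p": 16, "BL35p": 16,
    "BL40p": 2,
    "BL45p": 4,
}

def enclosures_needed(blades: dict) -> int:
    return sum(math.ceil(count / MAX_PER_ENCLOSURE[model])
               for model, count in blades.items())

print(enclosures_needed({"BL20p": 12, "BL35p": 20}))  # 2 + 2 = 4 enclosures
```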
Original blade server enclosure - Rear view

ProLiant BL p-Class blade servers were first introduced with a 6U enclosure that houses the blade servers and network interconnects. The blades slide into the blade server enclosure backplanes for power and network connections. Each original enclosure is equipped with a:
* Management module
* Signal backplane
* Power backplane

The management module communicates with the Integrated Lights-Out (iLO) application-specific integrated circuit (ASIC) located on the server system board. Enclosure status information is gathered by the management module and passed over the Intelligent Chassis Management Bus to the iLO ASIC. This information is available on corresponding tabs of the iLO web-based interface from any blade server.

Because power runs across the entire power backplane, inserting a blade server into the enclosure enables iLO to automatically power up and become fully functional. Even if the blade server is powered down, the essential iLO management functions are still available.

Patch panels and interconnect switches may be mixed within the rack, but not within the same enclosure. A full rack could have as many as 192 CAT5 cables; 32 cables are attached for a full enclosure if using patch panels. Network interconnect switches each have six external Ethernet ports.

Note: The interconnects are designated (viewing from the front) as interconnect A, the left interconnect, and interconnect B, the right interconnect. When you are facing the rear of the enclosure, interconnect A is on the right and interconnect B is on the left.

Enhanced blade server enclosure

Because the ProLiant BL30p and BL35p servers are twice as dense as the other members of the p-Class blade server family, they also require double the power for the same amount of space. This need for greater power headroom has driven changes to the original BL p-Class blade server enclosure.
Key changes in the enhanced enclosure include:
* New management module - In the original enclosure, all blade server iLO signals are routed to one of the interconnect modules (switch or patch panel). In the enhanced blade server enclosure, iLO signals are routed to the back of the enclosure to a single aggregated iLO port on the management module. The single 10/100T iLO port carries separate signals for all installed blades (up to 16 with a double-dense blade). This aggregates the physical wiring connectivity, but each blade iLO still has a unique IP address. The management module is mounted on the signal backplane and is hot-pluggable; hardware changes do not affect in-service blade operation. The blade management module:
  - Monitors the operation of the enclosure and blades
  - Reports thermal and operating events by communicating with the iLO ASIC
  - Connects to the enclosure with RJ-45 cables

  Note: iLO port aggregation does not mean that the management module has become a single point of failure. Remote iLO access might be down until the module is replaced, but you can still access local iLO through the front I/O port. The risk of such a failure is extremely low and there are easy workarounds; the blades would still be accessible using other tools such as Microsoft Terminal Services or virtual network computing (VNC).

* Power infrastructure - Enclosure DC is split between A and B rails and not shared at the power backplane, as was the case with the original blade server enclosure.

* Static IP Bay Configuration Utility - An alternative to DHCP and iLO-by-iLO static IP assignment, the utility automatically assigns addresses from the reserved static pool as the blade iLOs power on, even if DHCP is present. The system automatically reserves a block of 16 addresses starting with the first one set by the user (see the sketch below).
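A minimal sketch of that reservation rule, assuming only what the text states (16 consecutive addresses, one per blade bay, starting at a user-chosen base). The helper name is hypothetical and is not part of the HP utility:

```python
# Sketch: the 16-address block that Static IP Bay Configuration reserves,
# one address per blade bay, starting from a user-set base address.
import ipaddress

def reserved_ilo_block(first_address: str) -> list:
    base = ipaddress.IPv4Address(first_address)
    return [str(base + bay) for bay in range(16)]

block = reserved_ilo_block("192.168.10.20")   # example base address
print(block[0], "...", block[-1])             # 192.168.10.20 ... 192.168.10.35
```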
The enhanced enclosure is fully backward-compatible, so it supports all current and future BL p-Class servers. The enclosure also supports all existing and future interconnect options. Upgrade kits are available for the original blade server enclosures. Eventually the original enclosure will be retired and the enhanced enclosure will become the standard.

Note: The common iLO port on the rear of the signal backplane indicates if the enclosure is a new model or if the enhanced backplane enclosure option kit is installed.

The enhanced enclosures are offered with and without RDP licenses.

Signal routing with enhanced backplane

In the enhanced server enclosure, iLO signals are connected to the integrated iLO port on the enhanced management module, rather than to the interconnect in the bay B side of the enclosure, as with the original server enclosure.

Fibre Channel

When using the standard network mezzanine card along with the Dual-Port Fibre Channel Adapter (2Gb) Mezzanine Card kit (mounted directly above the NIC card), you must also use a GbE2 Interconnect Switch Kit (with GbE2 Storage Connectivity Kit) to provide pass-through of the Fibre Channel signals. With the Fibre Channel mezzanine card installed, the NIC signal routing stays the same. Each Fibre Channel port is routed to the interconnect bays.

Enclosure signal routing comparison

In addition to the power backplane that distributes DC power differently, the enhanced blade server enclosure provides additional features for network connectivity:
* Management module (with an enclosure-wide iLO port)
* Signal backplane (iLO signals are now routed to the new management module rather than to the interconnects)

Blade server enclosure compatibility matrix

Compatibility with the original or the enhanced blade server enclosure depends on the blade server model; details are provided in the table on the slide.

Note: The table assumes all configurations do not have any power/processor throttling technology enabled.

* The BladeSystem p-Class original blade server enclosure
  - PN 243564-B21 includes the enclosure and a trial RDP license
  - PN 281404-B21 includes the enclosure and eight RDP licenses
* The BladeSystem p-Class blade server enclosure with enhanced backplane components
  - PN 243564-B22 includes the enclosure plus a trial RDP license
  - PN 281404-B22 includes the enclosure plus eight RDP licenses
  - PN 380625-B22 includes the enclosure plus the HP BladeSystem Management Suite

You can deploy both standard and enhanced blade server enclosures in the same rack to share the same power subsystem, but you must be aware of firmware and redundancy requirements:
* Firmware requirements - Enhanced blade server enclosures require firmware version 2.03 or later. The management modules on the blade server enclosure and on the power enclosure must have the same firmware version release.
* Redundancy requirements - The power management firmware (2.03 or later) calculates redundancy based on the type of blade server enclosure. For enhanced blade server enclosures, the power supplies on each side of the power enclosure are required to power the blades on that same side, both the interconnect bays, and the enclosure management module. For standard blade server enclosures, each side of the power enclosure must be able to power up all the blade servers and interconnects in that enclosure.
* Switches and patch panels - The patch panels and GbE2/CGESM switches are compatible with the original and the enhanced blade server enclosures, depending on which servers are installed. Blades other than the BL20p and BL20p G2 require the enhanced enclosure to access additional NICs.

Upgrading the blade server enclosure

The Server Blade Enclosure Upgrade Kit (354100-B21) contains enhanced backplane components; it enables a field upgrade of a blade server enclosure to support ProLiant BL p-Class blades and provides a single, physical iLO port for the enclosure. The kit contains three pieces of hardware:
* Power backplane
* Signal backplane
* Management module

Because the brackets are attached with thumbscrews, the upgrade process requires no tools. After you have access to the backplane, the upgrade can be completed quickly.

Removing existing components
* Power down the enclosure.
* Slide all servers and interconnects out 1 inch.
* Remove:
  - Management module (hot-pluggable)
  - Signal backplane
  - Power backplane

Installing enhanced components
* Install the new management module with a single iLO port, the signal backplane, and the power backplane.
* Push the blade servers and interconnects back in and power on the enclosure.
* Update the firmware in the power enclosure.

When upgrading the original blade server enclosure to the enhanced blade server enclosure, the redundant power requirements change. The enhanced blade server enclosure requires two power enclosures for redundant power instead of one, which was the case with the original blade server enclosure. Further details can be found in the HP BladeSystem Power Subsystem module.

Important: After replacing the management module, you must connect to the new management module using the serial port on the back of the management module and manually enter the serial number of the old management module. Further details can be found in the Accessing the Blade Server Enclosure Management Module lab.

Blade server enclosure management module

Attached to the back of each blade server enclosure is a blade server enclosure management module. It is a self-contained microcontroller that communicates with the iLO management device on each blade server. The server management module also communicates with the power management module attached to the rear of the power supply enclosure.

The blade server enclosure management module simplifies setup and management by providing:
* A single physical iLO port for all installed blade servers that provides up to 16:1 management cabling consolidation
* Static IP bay configuration for automated configuration of iLO addresses

Note: All management link connector LEDs flash on the server management modules and power management modules when the management modules are cabled improperly.

The management module reports thermal, power, and protection fuse events to all blade servers in the blade server enclosure. The management module also facilitates power sharing across enclosures and provides asset and inventory information. On enhanced blade server enclosures, the server management module provides a single RJ-45 connector for accessing all installed server iLO interfaces. This feature greatly reduces the number of network cables needed for management. The management module is hot-pluggable and can easily be replaced without interruption to blade servers or interconnects. Rack rails are common between the blade server enclosure and the power enclosure.

If the management module fails (regardless of whether in the original or enhanced blade server enclosure), the blade servers continue operating normally and access to their iLO becomes possible through the front I/O port only.
Blade sleeve

A sleeve is required to support all half-height (BL30p and BL35p) blade servers in an enclosure. One sleeve fits a single BL p-Class enclosure bay and holds two half-height blade servers. The sleeve enables 16 double-dense blades per enclosure, up to 96 per rack. The sleeve also protects the internal components of the blade servers.

Both the sleeves and servers have separate connectors and are hot-pluggable. One or two servers hot-plug into a sleeve board, and the sleeve hot-plugs into the enclosure.

Note: You can change a defective sleeve board easily by removing the sleeve access panel on the rear near the connector. You can also use the access panel to view the internal connectors and LED operation of an installed blade.

Summary

The HP BladeSystem blade server enclosure houses up to eight full-height or up to 16 half-height blade servers, a pair of redundant network interconnects, and supporting components such as the signal and power backplanes and the management module. The enclosure is 6U high and ten 1U bays wide.

The original blade server enclosure has a shared power backplane and dedicated iLO ports. The blade server enclosure with enhanced backplanes differs from the original blade server enclosure in these five aspects:
* The power backplane is now split, requiring two power enclosures to provide power redundancy for the blade servers in the blade server enclosure.
* The enhanced blade server enclosure supports half-height blade servers, which create a greater power consumption (the reason for the split power backplane).
* A single iLO port services all blade servers in the enclosure.
* Advanced management capabilities include the Static IP Bay Configuration Utility, which automatically assigns IP addresses to the blade server iLOs.
* Signal routing now supports four NICs per blade server, two of which are routed to side A and two are routed to side B.

Upgrading the original blade server enclosure to the enhanced blade server enclosure involves replacing the management module, the signal backplane, and the power backplane. The replacement requires no tools.

The blade server sleeve is required for all half-height blade servers. It fits in a single bay, holds two blade servers, and is hot-pluggable.

HP BladeSystem c-Class Power and Cooling
Module 4
Objectives

After completing this module, you should be able to:
* Discuss the impact of growing power demand and power density
* Describe the technologies that supply power to the HP BladeSystem c-Class systems
* Describe Thermal Logic technology
* Describe the cooling technologies designed for the HP BladeSystem c-Class systems

A limited power budget and growing power demand

More performance and density draws more power and generates more heat, which in turn requires more cooling to maintain reliability at an acceptable cost.

Aging data centers carry significant power and cooling constraints because of their limited power availability and cooling capacity. As data centers age, their cooling efficiencies average between 40-50%, and an increasing percentage of data center costs goes to power and cooling. Overcoming these constraints demands active power management.

Compounding the problem is the fact that the methods of measuring loads within the data center are outdated. For example, watts per square foot or meter are no longer useful in deciding server deployment strategies. The first step toward power management planning is to understand the actual power consumption in the data center. This information enables you to:
* Maximize performance for a given power and cooling envelope/footprint
* Reduce the cost to power and plan for availability
* Simplify up-front planning
* Live within the existing data center power budget

Modern data center design

The modern data center is based on the following components:
* Raised floor
* Forced air cooling through perforated panels
* Power and network wiring, usually under the floor but sometimes overhead

Most data centers are designed for 5 to 10 year lifecycles, but are being pushed beyond that because of budgetary and other constraints. Given the trend toward increased power consumption and heat density, data centers should be designed for scalability and upgrades. Plans for the data center should be based on a modular design that provides sufficient headroom for increasing power and cooling needs. A modular design provides the flexibility to scale capacity in the future when planned and unplanned changes become necessary.

HP provides a Site Installation Preparation Utility to assist you in approximating the power and heat load per rack for facilities planning. The Site Installation Preparation Utility is a Microsoft Excel spreadsheet that uses individual platform power calculators and enables you to calculate the full environmental impact of racks with varying configurations and loads.

Note: This utility provides calculations for data centers with ProLiant p-Class server blades; data pertinent to c-Class server blades will be added in the future.

You can download this utility from:
http://h18001.www1.hp.com/partners/microsoft/utilities/power.html
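For the kind of back-of-the-envelope facility planning this utility automates, the basic conversions are fixed physics: 1W of IT load dissipates about 3.412 BTU/hr of heat, and 12,000 BTU/hr equals one ton of cooling. A quick sketch; the 13,500W input is an illustrative figure (six 2250W supplies at full output, per the enclosure numbers later in this module), not a measured load:

```python
# Sketch: convert a rack or enclosure electrical load into cooling terms.
WATTS_TO_BTU_HR = 3.412   # 1 watt dissipates ~3.412 BTU/hr
BTU_HR_PER_TON = 12_000   # 1 ton of cooling = 12,000 BTU/hr

def cooling_for_load(watts: float):
    btu_hr = watts * WATTS_TO_BTU_HR
    return btu_hr, btu_hr / BTU_HR_PER_TON

btu, tons = cooling_for_load(13_500)  # illustrative: six 2250W supplies
print(f"{btu:,.0f} BTU/hr, about {tons:.1f} tons of cooling")
```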
Why enclosure power in c-Class?

The BladeSystem c-Class is designed to match what the customer already has in the data center. It uses standard power cords:
* IEC C19 - 16A 1Φ 208V = 3328VA
* NEMA L15-30P - 24A 3Φ 208V = 8646VA
* IEC 309 5-pin - 16A 3Φ 230V = 11040VA

Compared with the p-Class (which shipped in 2002), one L15-30P line cord could power four to five enclosures populated with eight BL20p server blades (1.4GHz Intel Pentium III - 40W CPU). In the BladeSystem c-Class, one L15-30P line cord can power one enclosure populated with 16 half-height blades (130W Dempsey CPU). As a point of comparison, if it had been designed for rack-based power, c-Class enclosures would require 60 to 100A three-phase power.

Power supplies

Features of the c-Class power supplies include:
* 2250W output
* Self-cooled
* Self-monitoring
  - AC input detection
  - DC output measurement

The power supplies convert single-phase AC to 12V DC current and feed the power backplane.

AC Line Input Modules

The AC Line Input Modules are available in single-phase and three-phase configurations and are interchangeable.

Power Supply Redundant mode

The most basic power configuration has two power supplies. Based on the power supply placement rules, these power supplies would populate slots 1 and 4. To reach power supply redundancy, you would add another power supply in slot 2. As long as there are not more devices in the enclosure than two power supplies can support, the system is power supply redundant.

With the Power Supply Redundant configuration, a minimum of two power supplies is required. Up to six power supplies can be installed in an enclosure. One power supply is always reserved to provide redundancy. In the event of a single power supply failure, the redundant power supply takes over the load.

This N+1 Power Mode configuration is cost-sensitive but provides minimal redundancy. It is most often selected by small and medium-sized businesses (SMBs) that purchase three or four power supplies and one PDU, or that have the capability to connect only a single line cord. It could also be selected by customers with high-performance computing applications where redundancy is less important than low cost.
In this mode, total available power is the total power available less one power supply. A 5+1 configuration = 11250W.

AC Redundant mode

In N+N AC Redundant power mode, a minimum of two power supplies is required. N power supplies provide power and N provide redundancy, where N can equal 1, 2, or 3. The Onboard Administrator reserves sufficient power so that any number of power supplies from 1 to 3 can fail, and the enclosure will continue to operate at full performance on the remaining line feed.

When correctly wired with redundant AC line feeds, the AC Redundant mode ensures that an AC line feed failure will not cause the enclosure to power off. The AC Redundant mode provides full redundancy and is the configuration requested by large enterprise customers because it ensures full performance on one power line feed. Total available power is the amount from the A or B side with the lesser number of supplies. A 3+3 configuration = 6750W.

Total available power

Total power available to the enclosure, assuming 2250W is available from each power supply, can be calculated according to the mode that is configured (see the sketch below):
* If no power redundancy is configured, total power available is defined as the power available from all supplies installed
  - 6 power supplies installed = 13500W
* If N+1 Power Mode is configured, total power available is defined as total power available less one power supply
  - 5+1 configuration = 11250W
* If N+N AC Redundant mode is configured, total power available is the amount from the A or B side with the lesser number of supplies
  - 3+3 configuration = 6750W
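Those three rules reduce to a few lines of arithmetic. A sketch with a hypothetical helper, not HP firmware logic (the Onboard Administrator performs the real calculation):

```python
# Sketch: total available enclosure power under each c-Class power mode.
SUPPLY_W = 2250  # rated output per power supply

def available_power(installed: int, mode: str) -> int:
    if mode == "none":       # no redundancy: all supplies contribute
        usable = installed
    elif mode == "n+1":      # one supply held in reserve
        usable = installed - 1
    elif mode == "n+n":      # only the smaller A/B side counts
        usable = min(installed - installed // 2, installed // 2)
    else:
        raise ValueError(mode)
    return usable * SUPPLY_W

print(available_power(6, "none"))  # 13500
print(available_power(6, "n+1"))   # 11250
print(available_power(6, "n+n"))   # 6750
```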
Single-phase AC power supply placement

Server blades in the enclosure are divided into four zones; power supplies in each zone provide power for blades in that zone, in addition to redundant power for other blades. The placement rules are enforced by the Onboard Administrator. When the power supplies are placed incorrectly, the Insight Display shows an error.

Note: Three-phase AC power requires that all six power supplies are installed.

Power supply population

In single-phase configurations, you can use fewer than six power supplies. When using fewer than six power supplies, install units as described:
* Two power supplies: bays 1 and 4
* Three power supplies: bays 1, 2, and 4
* Four power supplies: bays 1, 2, 4, and 5
* Five power supplies: bays 1, 2, 3, 4, and 5

Note: The Insight Display panel slides left or right to allow access to power supply bays 3 and 4.

Using the 40A 1Φ PDU - NA/JPN

A 40A 1Φ PDU - NA/JPN configuration requires:
* 8.3KVA PDU - 252663-D75
* 2 x 50A 1Φ connections per c-Class enclosure

In this configuration, one enclosure contains up to six 2250W power supplies.

Using the 24A 1Φ PDU - NA/JPN

A 24A 1Φ PDU - NA/JPN configuration requires:
* 5KVA PDU - 252663-D74
* 2 x 30A 1Φ connections per c-Class server blade enclosure

One enclosure contains four 2250W power supplies. Enclosure load is limited to 5KVA. The Onboard Administrator manages power to the server blades and will not power them on until it calculates the total enclosure load.

Using the 60A 3Φ PDU - NA/JPN

A 60A 3Φ PDU - NA/JPN configuration requires:
* S348 PDU - AF916A
* 17.2KVA N+N redundant power
* 2 x 60A 3Φ connections
* Three c-Class server blade enclosures

Each enclosure contains six 2250W power supplies; maximum power per enclosure is 5.4KVA. Four 8KVA enclosures with four PDUs is also possible.

Using the 32A 3Φ PDU - International

A 32A 3Φ PDU - International configuration requires:
* S332 PDU - AF917A
* 22KVA N+N redundant power
* 2 x 32A 3Φ connections
* Four c-Class server blade enclosures

Each power enclosure contains six 2250W power supplies. Enclosure load is less than 5.5KVA. Four 8KVA enclosures supported by four PDUs is also possible.
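The VA capacities quoted for these line cords and PDUs follow from the standard apparent-power formulas: VA = V x A for single-phase, VA = V x A x sqrt(3) for a three-phase line-to-line feed, and 3 x V x A for a three-phase feed rated line-to-neutral. A sketch that reproduces the line-cord figures used earlier in this module:

```python
# Sketch: apparent-power capacity of the line cords quoted in this module.
import math

def single_phase_va(volts: float, amps: float) -> float:
    return volts * amps

def three_phase_va(volts_line_to_line: float, amps: float) -> float:
    return volts_line_to_line * amps * math.sqrt(3)

print(round(single_phase_va(208, 16)))      # 3328  - IEC C19, 16A 1-phase 208V
print(round(three_phase_va(208, 24)))       # 8647  - NEMA L15-30P (text: 8646)
print(round(3 * single_phase_va(230, 16)))  # 11040 - IEC 309 5-pin, 16A 230V
```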
HP Dynamic Power Saver

Example: an enclosure drawing 1800W
* 6 power supplies at 300W each, 75% efficiency: 2400W in for 1800W out; 600W wasted
* 2 power supplies at 900W each, 91% efficiency: 1978W in for 1800W out
* Saving of 422W per enclosure; 20 enclosures at $0.075 per kWh can save about $5,545 per year

The HP Dynamic Power Saver takes advantage of the fact that most power supplies operate inefficiently when lightly loaded and more efficiently when heavily loaded. A typical power supply running at 20% load could have an efficiency as low as 60%, but at 50% load it could be 90% efficient.

In the slide graphic, the top example shows the power demand spread inefficiently across six power supplies. The second example demonstrates that with Dynamic Power Saver, the power load is shifted to two power supplies for more efficient operation.

When the Dynamic Power Saver feature is enabled, the total enclosure power consumption is monitored in real time, and automatic adjustments are tied to changes in demand. Power supplies are placed in a standby condition when the power demand from the server enclosure is low. When power demand increases, the standby power supplies instantaneously deliver the required power. This enables the enclosure to operate at optimum efficiency, with no impact on redundancy.

Dynamic Power Saver is supported on the HP 1U Power Enclosure and the c-Class enclosure only. It is enabled by an interconnect on the management board. When the power supplies are placed in standby mode, their LEDs flash.
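The savings math in the example above checks out directly. A sketch; the 75% and 91% efficiencies and the $0.075/kWh rate are the example's own assumptions:

```python
# Sketch: verify the Dynamic Power Saver example figures.
output_w = 1800                  # enclosure DC load
spread_in = output_w / 0.75      # six lightly loaded supplies -> 2400 W in
pooled_in = output_w / 0.91      # two heavily loaded supplies -> ~1978 W in
saved_w = spread_in - pooled_in  # ~422 W per enclosure

enclosures = 20
rate_per_kwh = 0.075             # dollars
hours_per_year = 24 * 365
annual = saved_w / 1000 * enclosures * hours_per_year * rate_per_kwh

print(f"{saved_w:.0f} W saved per enclosure; ${annual:,.0f} per year")
# 422 W saved per enclosure; $5,545 per year (matches the figures above)
```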
HP ProLiant Power Regulator

The HP Power Regulator for ProLiant improves server energy efficiency by giving CPUs full power for applications when they need it and power savings when they do not. It enables ProLiant servers with policy-based power management to control the CPU power state (CPU frequency and voltage) based on a static setting, or automatically based on application demand. Power Regulator can be configured for continuous Static Low Power Mode or for Dynamic Power Savings Mode, which automatically adjusts power to match application CPU utilization demand.

Because Power Regulator resides in the ProLiant BIOS, it is independent of the operating system and can be deployed on any supported ProLiant server without waiting for an operating system upgrade. HP has also made deployment easy by supporting Power Regulator settings in the iLO scripting interface. Control settings are available from RBSU or iLO, and historical data is displayed through the iLO Advanced interface.

Power Regulator leverages power management support in both AMD and Intel processors. Most current-generation processors have exposed p-state registers, allowing frequency and voltage to change in a static or dynamic mode.

Power Regulator allows you to reduce power consumption and generate less data center heat, resulting in compounded cost savings. You save first by using less power in the racks, and second by producing less work for the air cooling systems. These savings do not necessarily result in a loss of system performance. Together, these factors save on operational expenses and enable greater density in the data center environment.

ProLiant BIOS updates are available to support Power Regulator on any Xeon-based ProLiant dual-processor G4/G4p or multiprocessor G3 server. BIOS support for Opteron-based servers is scheduled for availability in early July 2006.

ProLiant Power Regulator - Advanced power management

* Monitor and manage individual servers and groups of servers by physical or logical location (power domain)
* Monitor vital power information
  - Power usage in watts
  - BTU/hr output
  - Ambient air temperature
* Policy-based management
  - Power cap policy: Set a maximum BTU/hr or wattage threshold (capped on a server-by-server basis)
  - Temporary conservation policy: Set a time of day to drop selected lower-priority servers into a lower power state
  - Severe facilities issue: Drop lower-priority servers into a lower power state when a severe facilities issue occurs
  - Energy efficiency policy: Set all servers in the power domain to dynamic power regulation
Thermal Logic technology

[Slide graphic: Thermal Logic spans efficient components and efficient systems - ProLiant Power Regulator, enterprise chipsets, small drives, efficient power supplies, HP Modular Cooling System, HP 10000 G2 racks, Integrity Virtual Server Environment, HP PDU Management, ProLiant Essentials, Data Center Services]

HP Thermal Logic technology is a holistic approach to meeting each customer's power budget. It increases efficiency in power and cooling in the following areas:
* Processor level
  - Processor choice - AMD, Intel, low voltage, dual core
* Intelligent power management
  - Managed power prevents system power-on if insufficient power is available
  - Reporting of available and consumed power
* ProLiant Power Regulator
  - iLO-controlled speed stepping
  - PowerNow
* Application balancing
  - Virtualization and automation
  - ProLiant Essentials
* Dynamic Power Saver
  - Power load shifting for maximum efficiency and reliability
* Instant thermal monitoring
  - Real-time power, heat, and cooling data
* Availability
  - Dual, independent power subsystems from the branch circuit to the server blades
  - N+N redundancy (sustains multiple power supply failures)
  - Fewer power supplies reduces the likelihood of failure
* Flexibility and ease of scalability
  - Fewer power supplies per rack for lower inrush and current leakage
* Investment protection
  - Power headroom to support future generations of server blades

Benefits of the technology

Benefits of Thermal Logic technology include:
* Simplifies up-front planning
* Enables management based upon power
  - Measures your actual power consumption
* Maximizes the performance for a given power and cooling envelope/footprint
* Enables near real-time environmental awareness
  - Enables you to respond quickly to changing data center capabilities
* Extends the life of the data center
  - Forestalls expensive power and cooling upgrades

Thermal Logic management

[Slide graphic: Thermal Logic tracks performance, power, and cooling capacity to maximize the power budget - power usage and distribution, airflow and acoustics, performance, workload; rack-level BTU/hr, enclosure inflow and outflow temperatures, actual power utilization, maximum power available]

Each customer has a unique thermal budget, with a finite capacity for power and cooling. Most customers find it difficult to understand their budget and the exact impact their systems have on it. Thermal Logic makes that knowledge, and the tools to optimize the environment, accessible.

HP Thermal Logic offers an instant view of power usage and temperature at the component, enclosure, and rack level. You can use this information to optimize performance, power, and cooling capacity to maximize the power budget and ensure availability. Thermal Logic displays the following values in real time:
* Power usage and distribution
* Fan speeds (rpm)
* Performance
* Workload

Thermal Logic management dynamically adapts thermal controls to optimize performance, power, and cooling capacity to maximize the power budget and ensure availability.

Thermal Logic components

The core of the Thermal Logic environment is a control algorithm that is designed to optimize any configuration based on customer parameters for airflow, acoustics, power, and performance. It provides a real-time view of power usage and temperature at the server, enclosure, and rack level. This offers customers the intelligence to adapt, redundancy to protect, and scalability to grow.

The components that comprise Thermal Logic automatically adjust and shift power load, workload, and thermal controls to maximize performance and power and cooling capacity for each unique environment.
These components include:
* Data Center Planning Services
— Smart Cooling Consultancy
* Rack cooling technologies
— Monitored power distribution unit (PDU)
— Data center focused infrastructure solution
— Modular Cooling System
* Active Cool fans that are supported by:
— An air distribution manifold
— Backflow preventers on all fans
— Shut-off doors on all servers
* PARSEC architecture
— Parallel, redundant, and scalable airflow design
* Power workload balancing
— Virtualization to maximize performance/watt
* Pooled power
— Enables installation of just the right amount of power
— N+N power redundancy
— Three-phase power eliminates the need for PDUs or bundles of power cables

PARSEC architecture
* c7000 enclosure
— Parallel, redundant, and scalable cooling and airflow design
* Active Cool fans
— Adaptive flow for maximum power efficiency, air movement, and acoustics
* Interconnect bays
— The interconnects are powered and cooled as part of the PARSEC architecture
* Onboard Administrator
— Remote administration view
— Robust, multi-enclosure control
— Power management
* Single- or three-phase enclosures and N+N or N+1 redundancy
— Best performance per watt

PARSEC architecture optimizes thermal design to support all customer configurations from 1 to 16 servers, with one to 10 fans. The c7000 enclosure features a relatively airtight manifold. The servers seal into the front section when in use; doors seal off bays when servers are not in use. The rear section has backflow preventers that seal when a fan does not rotate or is not installed. The middle section wraps around the complex power and signal distribution midplanes to ensure that air is properly metered from the 10 parallel fans to the 16 parallel servers, whether all are installed or only the minimums are installed. These are three large snap-together plastic, metal, and gasket subassemblies.
Cooling is managed by Thermal Logic technology, which features Active Cool fans. These fans provide adaptive flow for maximum power efficiency, air movement, and acoustics. The BladeSystem c-Class cooling system features improved cooling efficiency and reduced power consumption compared with the p-Class and the competition.
The PARSEC architecture is designed to draw air through the interconnect bays. This allows the interconnects to be smaller and less complex. The power supplies are designed to be highly efficient and self-cooling. Single- or three-phase enclosures and N+N or N+1 redundancy yield the best performance per watt.

Thermal Logic management — Single enclosure
Thermal Logic monitors actual power and cooling in the data center. It provides an instant view of power usage and temperature at the server, enclosure, and rack level. When monitoring a single enclosure through the Onboard Administrator, you can view the following data in real time:
* Enclosure temperatures
* Actual power used
* Maximum power available to the enclosure
Thermal Logic management — Multi-enclosure
In addition to the information available in the single enclosure view, the multi-enclosure Onboard Administrator view adds the rack BTU/hr requirements information.

c7000 enclosure design challenges
Challenges faced by the c-Class design engineers included:
* Small apertures in the backplane assembly; getting sufficient air from the blades requires high pressure with the apertures open
* The dual-core Xeon processor in the BL460c server blade requires ~30 CFM to cool and therefore requires high airflow
* Up to 16 Dempsey half-height blades per chassis require large air volumes to be moved (a rough airflow estimate appears at the end of this section)
Thermal Logic and Active Cool fans are the answers to these challenges.

BladeSystem c-Class solutions
The BladeSystem c-Class introduces Active Cool fans, a new fan technology:
* High volume
* High pressure
* Best in class acoustics
* Best in class power consumption
A new cooling architecture:
* A bladed system — Beyond just a server blade
* Combines the best of both worlds
— Provides centralized cooling
— Scales with growth

HP Active Cool Fans

LED color      | Fan status
Solid green    | Fan working normally
Solid amber    | Fan failure
Blinking amber | Check the Insight Display for information

Custom fan design delivers better than industry performance:
* High air flow
* High pressure
* Best in class reliability
* Superior acoustics across the entire operating range
Cool facts:
* Four Active Cool fans could cool an IBM BladeCenter with N+1 redundancy
* One Active Cool fan could cool five ProLiant DL360 G4 servers

BladeSystem c7000 enclosure airflow
Thermal Logic uses a control algorithm to optimize for any configuration based on the following customer parameters:
* Airflow
* Acoustics
* Power
* Performance

Self-sealing enclosure
Fan louvers automatically open when a fan is installed and automatically close when the fan is removed.
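The airflow estimate promised above is simple arithmetic from the design-challenge numbers: roughly 30 CFM per dual-core Xeon half-height blade and up to 16 blades per chassis. This is a back-of-the-envelope figure, not HP's thermal model.

# Rough worst-case airflow estimate from the figures in the text.
CFM_PER_BLADE = 30   # approximate, per dual-core Xeon half-height blade
BLADES = 16          # maximum half-height blades per c7000 chassis

total_cfm = CFM_PER_BLADE * BLADES
print(f"Worst-case blade airflow: ~{total_cfm} CFM per enclosure")  # ~480 CFM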
When a fan is installed into the enclosure, the server blade in the enclosure activates a lever that opens a door on the fan assembly to allow air to flow through the server blade.

Fan zones
Server blades in the enclosure are grouped into four zones. As with power supply placement, the fans in each of the four zones provide cooling for the blades in that zone, in addition to redundant cooling for other blades.

Fan location rules (Firmware Release 40)

Zones:           One          One and two      All                 All
Server blades:   Up to 2      Up to 8          Up to 16            Populate any bays
Number of fans:  4            6                8                   10
Fan locations:   Bays 4, 5,   Bays 3, 4, 5,    Bays 1, 2, 4, 5,    Populate all bays
                 9, and 10    8, 9, 10         6, 7, 9, 10

* Always install the first full-height server blade in device bay 1
* Placement rules are enforced by the Onboard Administrator

Important! If the number of fans in the chassis is 4, 6, 8, or 10, the fans must be placed in accordance with the fan rules. If they are not in those exact locations, the thermal subsystem is degraded and no newly inserted server is allowed to power up. (A small placement-check sketch follows this section.)
* Four fan rule — Fans in bays 4, 5, 9, 10
— The four fan rule allows only two servers to power up, in device bays 1, 2, 9, or 10.
* Six fan rule — Fans in bays 3, 4, 5, 8, 9, 10
— The six fan rule allows up to 8 servers to power up in device bays 1, 2, 3, 4, 9, 10, 11, or 12.
— If there is a server powered up in device bays 3, 4, 11, or 12 and a fan is removed, the thermal subsystem remains in the six fan rule and does not degrade to the four fan rule.
— If there is a server powered up in device bays 1, 2, 9, or 10 and a fan is removed, the thermal subsystem degrades to the four fan rule.
* Eight fan rule — Fans in bays 1, 2, 4, 5, 6, 7, 9, 10
— The eight fan rule allows 16 servers to power up.
— If there is a server powered up in device bays 5, 6, 7, 8, 13, 14, 15, or 16 and a fan is removed, the thermal subsystem remains in the eight fan rule and does not degrade to the six fan rule.
* Ten fan rule — Fans in all bays
— Optimum power usage
— Optimum acoustics
— If a fan is removed, the thermal subsystem degrades to the eight fan rule.
Important! If you install extra fans in bays in such a way that they do not meet the next higher fan rule, those fans are marked with location errors.
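The placement-check sketch promised above encodes the bay sets straight from the fan location table. The function itself is illustrative only — the Onboard Administrator enforces the real rules.

# Minimal sketch of the Firmware Release 40 fan-placement rules.
FAN_RULES = {
    4:  {4, 5, 9, 10},
    6:  {3, 4, 5, 8, 9, 10},
    8:  {1, 2, 4, 5, 6, 7, 9, 10},
    10: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
}

def fan_placement_ok(installed_bays: set) -> bool:
    """True if the installed fans exactly match one of the location rules."""
    return installed_bays in FAN_RULES.values()

print(fan_placement_ok({4, 5, 9, 10}))   # True  — valid four-fan rule
print(fan_placement_ok({1, 4, 5, 9}))    # False — wrong bays, thermal subsystem degraded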
Cooling multiple enclosures
[Slide graphic — rack power levels up to 32kW; lower-density racks operate with standard forced air cooling, higher densities use the HP Modular Cooling System]
Smaller data centers can operate with standard forced air cooling. The HP Modular Cooling System (MCS) is designed for large and medium size enterprises with advanced data center environments. It supports a full load of server blades, which, at their maximum configuration, require a power load of 30kW. The MCS unit provides over 2700 CFM of cool air distributed along the full height of the rack.

HP Modular Cooling System
The HP Modular Cooling System (MCS) is an innovative self-cooled rack for high-density deployments in the data center. New cooling technology makes possible the deployment of up to 30kW in a single rack, supporting hardware densities and power consumption levels that have been difficult, if not impossible, to cool previously. The MCS is designed to complement existing conventional data center cooling by adding computing power without adding to the current heat load in the data center. In addition, by packing three times the kW capacity of a standard rack, the MCS considerably extends the life of the data center.
The HP Modular Cooling System features:
* Hot-swappable components — Fans and heat exchangers can be replaced while the unit is operating, allowing for great flexibility in servicing the MCS.
* Environmental control — The MCS features full environmental control of the cooled air temperature being placed in front of the servers. The heat exchanger unit features safety quick-disconnect valves that prevent accidental chilled water leakage. A water leak detector ships standard with the MCS. Alarms can be sent using SNMP traps to HP SIM.
* 42U height, industry-standard rack — Maintains the height of your data center rows and does not change the way your servers are racked. Because it is half the width of a standard rack, the MCS also maintains data center tiling.
Additional features include:
* Uniform air flow across the front of the servers
* Redundant power source
* Adjustable temperature set point
* Polycarbonate front door reduces noise by up to 50%
* Server blade deployment unaffected by design

Lab activity — Using the HP ProLiant BL p-Class Power Calculator
Objectives
After completing this lab, you should be able to:
* Access the HP ProLiant BL p-Class Power Calculator
* Use the Power Calculator graphical user interface (GUI)
* Configure the ProLiant BL p-Class blade server enclosures
* Configure the blade servers
* Configure the rack-centralized power subsystem
* Obtain the equipment list summary
* Explain HP 1U Power Enclosure powering
* Reset the Power Calculator
* Determine the maximum rack density

Power Subsystem — Module 5, he646s a.01

Objectives
After completing this module, you should be able to:
* Identify the components of an HP BladeSystem power subsystem
* Describe the power distribution options available in a BladeSystem
* Discuss the power configurations possible within the BladeSystem infrastructure
* Identify the power distribution units (PDUs) used in a BladeSystem
Introduction
The HP BladeSystem uses a unique, rack-centralized power subsystem that provides redundant, scalable power to all blade servers in a rack. The two key benefits of this centralized design are that it:
* Eliminates the cost and cables of PDUs
* Provides redundant power for current and future generation ProLiant BL p-Class blade servers
The power subsystem is external to, and shared by, all the server enclosures in a rack. At the core of the power infrastructure is a 3U power enclosure that holds up to six hot-plug power supplies. Additional power supplies and power enclosures can be added to a system for redundancy.
HP designed management modules for the blade server enclosures that monitor power and overall enclosure health. In addition, blade server management modules are connected to a power management module in each power enclosure that monitors the power available from the power enclosure and from each power supply it contains.
To further protect the data center, HP strongly recommends establishing DC redundancy and surge protection through a UPS.

Power subsystem components
The main components of the ProLiant BL p-Class power subsystem are:
* Power input
— Single-phase — 200V AC to 240V AC
— Three-phase — 200V AC to 240V AC; supports more blades than single-phase
— Direct current (DC) — -48V DC common building power, if available
* Hot-plug redundant power supplies — Convert AC input to -48V DC power for all blade servers in the rack. The power supplies support redundant AC inputs for high availability. The power supplies are front-accessible, hot-plug, and can be installed in various redundant configurations. The same power supplies can be used in either the single-phase or three-phase power enclosure.
* Power enclosure — Holds the power supplies; ships in both single-phase and three-phase power enclosure models. AC input redundancy is available with redundant enclosures. Power enclosures must be placed below the server enclosures in the rack.
* Hot-plug power management module — Monitors the power subsystem components and regulates the power-up sequence of newly installed blades and interconnect switches. Power management modules communicate management information, such as:
— Blade server and interconnect locations
— Power supply loading
— Health status
Single- and three-phase powering

              | North America     | International
Single-phase  | 208V, 30A, 60Hz   | 220/240V, 30A, 50Hz
Three-phase   | 208V, 60Hz        | 380/415V, 50Hz

Single-phase power
Power is generally transported over very high-voltage transmission lines from a nearby power company substation to a local step-down transformer on or near the building that feeds the blade server infrastructure.
The step-down transformer's secondary windings, the side that feeds the building, are wound in three separate windings or phases, each producing (approximately) 220/240V AC phase to phase. There are three pins on a single-phase plug — two carry the phase-to-phase 220V AC, and the third (keyed) pin is neutral (ground). The third pin is used as a common or neutral polarity if the equipment also requires 110V AC — phase-to-neutral (sometimes called phase-to-ground). Phase-to-ground produces 110V AC and phase-to-phase produces 220V AC. One of these single 220V AC phases is used to power the single-phase enclosure.

Three-phase power
With three-phase power, all three of the separate phases of 220V AC are used to power three power supplies (side A). For redundancy, another connector and power cable is used to power the remaining three power supplies in the enclosure (side B).
Important! One of the connectors (the keyed one) is the neutral connector, so the plugs must be keyed so that the conductors are wired properly. The three-phase plugs have more than three pins; the extra pins are used for the additional phases.
Note: Only three connectors plus the ground are needed in the North American three-phase plug. The three phases are wired across each of the power connectors and the negative side is neutral. The international red plugs contain an additional ground pin as well as a neutral pin, but operate in the same manner.

Power supplies
HP BladeSystem power supplies convert 200–240V AC to -48V DC to power blade servers and interconnect switches. The power supply options available for the BladeSystem are:
* A 3U power enclosure with four or six 3U power supplies powering the blade enclosures through a common high-current delivery system called bus bars
* A 1U power enclosure with up to six 1U power supplies that power a single blade enclosure

Power supply features
HP BladeSystem power supplies are designed to power all current and future HP BladeSystem p-Class blade servers and interconnect options, including blade server enclosures with enhanced backplane components. The power supplies support the following high-availability power configurations:
* Hot-plug redundant power supplies
* Redundant AC inputs
* Hot-plug power management module
3U Power Enclosure — Three-phase
HP recommends using three-phase power to maximize the lifecycle of the HP enhanced server enclosure. Only three-phase power can support the maximum number of double-dense blades in all configurations. The three-phase power enclosure:
* Supports the original (standard) and enhanced blade server enclosures — full width, 3U height
* Converts 200/240V AC to -48V DC
— AC enters the rack into the power subsystem
— -48V DC leaves the power subsystem and feeds the servers in the 6U enclosures through a cabling or bus bar system
* Holds up to six hot-plug power supplies
* Provides a maximum power rating of 9kW per bus bar
* Is accessibly located at the bottom or center of the rack
* Supports power supply and AC input redundancy
* Includes an intelligent power management module

3U Power Enclosure — Single-phase
In single-phase power enclosures, bays 3 and 6 are not used; power supply blanks are installed in bays 3 and 6 and are permanently secured with screws. Single-phase enclosures can only power up to one half of a rack, or three blade server enclosures — half the capability of three-phase power enclosures. This leaves little room for expansion to a full-height rack, especially with the release of the ProLiant BL30p and BL35p blade servers. The single-phase power enclosure:
* Holds up to four power supplies
* Provides a maximum power rating of 6kW per bus bar
Although HP does not recommend single-phase power, this configuration is currently shipped for special applications and smaller configurations.

Combining single-phase and three-phase power enclosures
You can use both single-phase and three-phase power supply enclosures in a single power zone of a rack. If multiple power enclosures are combined in a rack in the same power zone, the power management firmware selects one of the power management modules to manage all the power redundancy calculations for that power zone. The power firmware selects which is the "master" power management module for a specific power zone. The master power management module keeps track of all power-related calculations for both power enclosures, such as the amount of power available and redundancy requirements.
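The per-bus-bar ratings above (6kW single-phase versus 9kW three-phase) translate directly into how many enclosures a bus bar can feed. The sketch below is illustrative only — the per-enclosure draw is a hypothetical example value, and real sizing should always use the HP power calculator.

# Sketch comparing single- and three-phase per-bus-bar capacity.
BUS_BAR_KW = {"single-phase": 6.0, "three-phase": 9.0}

def enclosures_per_bus_bar(input_type: str, kw_per_enclosure: float) -> int:
    """Server enclosures one bus bar could feed, by power budget alone."""
    return int(BUS_BAR_KW[input_type] // kw_per_enclosure)

# Assuming a hypothetical 2.8 kW draw per blade enclosure side:
print(enclosures_per_bus_bar("single-phase", 2.8))  # 2
print(enclosures_per_bus_bar("three-phase", 2.8))   # 3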
3U Power Enclosure — Rear view
The single-phase and three-phase power enclosures share the same input and output structure. The power enclosures, bus bars, and cabling are right/left redundant. The right side, or Side A, is a mirror image of the left side, or Side B, when facing the rear of the rack. Each side has its own AC feed and DC distribution path. Half of the power supplies in the 3U power enclosure provide power through the Side B bus bar and the other power supplies in the power enclosure provide power through the Side A bus bar.
Note: The ground strap is not required in EMEA (Europe, Middle East, and Africa) because the plugs themselves are grounded.

1U Power Enclosure
The BladeSystem 1U Power Enclosure is designed for 1:1 infrastructure solutions with server enclosures; both original and enhanced server enclosures are supported, with no changes to existing server enclosures.
The BladeSystem 1U Power Enclosure ships with two integrated coupler cables for attaching to one server enclosure. These cables are similar to bus bar cables, but they eliminate the need for bus bars.
The BladeSystem 1U Power Enclosure:
* Provides a high-availability power subsystem for a single enclosure of blade servers
* Brings the benefits of blades to remote sites and small deployments
* Reduces power distribution costs and provides power capacity for current and future blade servers
* Houses up to six 1U hot-pluggable 2000W power supplies
* Includes power supplies that are centralized to provide scalable, redundant power for a single enclosure of blades or an entire rack
Specifications for the BladeSystem 1U Power Enclosure include:
* Total power output per chassis — 7000W with AC line cord and power supply redundancy
* AC line cords
— One C19 (208V–230V and 16A) plug per power supply
— Total of six C19 plugs per 1U Power Enclosure
* Internal bus connection within the power enclosure provides DC redundancy

1U Power Enclosure — Features and benefits
The BladeSystem 1U Power Enclosure is optimized for single server enclosure power connectivity; it provides redundant power for a single enclosure of blades from standard single-phase AC inputs. It houses up to six hot-pluggable power supplies and is supported for use with all ProLiant blade servers. The 1U Power Enclosure features:
* 2000 watt power supplies
* 7000 total watts per chassis
* DC and AC redundancy from a single enclosure
* 3+3 configuration
* Six C19 cords connected
Benefits of the 1U Power Enclosure include:
* Provides protection if one power supply fails
* Powers one blade server enclosure, fully populated with ProLiant BL p-Class blade servers and interconnects
* Allows users to select the following enclosures based on their current blade server configuration:
— 1U power enclosure with six power supplies (PN 378284-B21)
— 1U power enclosure with two power supplies (PN 380314-B21)
Note: Use four power supplies to power an enclosure filled with full-height blade servers; use six power supplies to power an enclosure filled with half-height blade servers.
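The 3+3 layout above (up to three 2000W supplies per side, 7000W total output with redundancy) lends itself to a small capacity sketch. This is an illustration only, using the figures from the text; the function and its name are invented for the example.

# Sketch of redundant capacity for a 1U Power Enclosure in a 3+3 layout.
SUPPLY_W = 2000        # per-supply output, from the text
CHASSIS_LIMIT_W = 7000 # total chassis output with redundancy, from the text

def redundant_capacity_w(supplies_side_a: int, supplies_side_b: int) -> int:
    """Power that stays available if either AC feed (side) is lost."""
    surviving = min(supplies_side_a, supplies_side_b)
    return min(surviving * SUPPLY_W, CHASSIS_LIMIT_W)

print(redundant_capacity_w(3, 3))  # 6000 W survives the loss of one side
print(redundant_capacity_w(1, 1))  # 2000 W — enough only for a light load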
1U Power Enclosure — Rear view
The 1U Power Enclosure provides hot-pluggable, redundant power for a single enclosure of blades from standard single-phase AC inputs. 220/240V AC connectors for each single-phase power cord provide AC power for each power supply.
Although each cable is rated to handle 75A, you cannot continuously draw that much current. A single blade enclosure is rated to support a maximum of 6000W (3000W per side and thus 3000W per cable). At the lowest possible power supply voltage of -48.8V, the maximum rated sustained output current would be 123A, or 61.5A per cable. (This arithmetic is worked through in the short sketch at the end of this section.)
From the rear of the rack, the Side A bus is on the right side of the power enclosure, and the Side B bus is a mirror image on the left side. Each side has its own AC feed and DC distribution path. Half of the power supplies in the 1U power enclosure provide power through the Side B bus and the other power supplies in the enclosure provide power through the Side A bus.

1U Power Enclosure block diagram
The 1U Power Enclosure has special features that differ from the older 3U power enclosure. In the 1U enclosure, an electronic switch in the middle connects the -48V DC outputs on Side A to the -48V DC outputs on Side B. The 1U Power Enclosure connects any power supply on either Side A or Side B to power both Side A and Side B buses in the server enclosure, provided the 75A bidirectional electronic switch, shown in the block diagram, is closed.
With AC power applied to each connector of a 1U Power Enclosure, when the first hot-plug power supply is installed in any bay on any side, that side of the enclosure immediately powers on. About three seconds later, the electronic switch closes automatically and both Side A and Side B are powered through the electronic switch.
The only time the electronic bidirectional switch in the 1U Power Enclosure opens is if there is a short in one of the blades or in the blade enclosure. In this case, the switch opens and immediately separates Side A from Side B. Power is removed from the half of the enclosure that has the short, but the other side continues normal operation (provided there are operating power supplies installed on that side). When the short is cleared, the switch closes, and both sides of the blade enclosure have power again.
While power is being removed from the defective bus, no power is supplied to the associated (enhanced) blade enclosure bus. Because the enhanced blade enclosure buses are separate buses, the blades continue operating normally on the (still) operating bus.
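Here is the worked arithmetic promised in the rear-view notes: dividing the 6000W enclosure rating by the lowest supply voltage magnitude of 48.8V gives the maximum sustained output current, split evenly across the two cables.

# Current-draw arithmetic from the rear-view figures in the text.
ENCLOSURE_W = 6000.0  # maximum rated load for a single blade enclosure
MIN_VOLTAGE = 48.8    # magnitude of the lowest -48V DC rail voltage

total_amps = ENCLOSURE_W / MIN_VOLTAGE
print(f"Total: {total_amps:.0f} A, per cable: {total_amps / 2:.1f} A")
# Total: 123 A, per cable: 61.5 A — matching the figures in the text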
1U Power Supply operation
To service power to the blade enclosure properly, the power supplies on both buses (A and B) of the 1U Power Enclosure are microprocessor-controlled and hot-pluggable. One power supply can operate the entire power enclosure (but must be able to carry the blade enclosure load).
As soon as one power supply is powered on, the bus on that side (either A or B) is immediately powered. Three seconds later, the e-fuse closes (the e-fuse is similar to a relay and closes to allow current to flow); both A and B buses are powered up regardless of how many power supplies are installed. The only time the e-fuse opens is if there is an overload or a short on the A or B bus, or if all six power supplies lose power (causing a system reset).
If too much load exists for the number of power supplies present, the supplies automatically detect the maximum current condition and shut down (latch off). If there are fewer than three power supplies installed on a bus, the blade enclosure might not be able to carry the load if the opposite bus fails. In this case, the remaining power supplies try to operate the entire enclosure. If the load is too great, the supplies exceed their maximum current limit, shut down, and latch off. The amber (fault) LED on each supply illuminates. In this case, each power supply must be reset by cycling the AC power or by hot-plugging each latched power supply again. After power is restored on all power supplies, power up the blade servers.
Note: Two power supplies, one in bus A and the other in bus B, provide complete redundancy; six power supplies provide full redundancy under all normal operating conditions. For complete redundancy, bus A AC input power must be supplied from a different AC building power source phase than bus B.

1U Power Enclosure deployment matrix
The values in the deployment matrix are an approximate guide only. Always use the power calculator for a specific configuration. You can download the ProLiant power calculators from:
http://h30099.www3.hp.com/configurator/calc/BL%20p-Class.xls
Note: Some BL35p dual-core configurations might require six power supplies to support 16 servers.

1U Power Enclosure Dynamic Power Saver
* Allows power monitoring in real time
* Reduces power requirements
* Power supplies are automatically placed in a standby condition when the power demand is low
* Power is automatically increased as demand for power increases
* Must be activated
The Dynamic Power Saver is a feature of the HP BladeSystem p-Class 1U Power Supply Enclosure. The Dynamic Power Saver configures the available power supplies to operate at maximum efficiency, exploiting the fact that most power supplies operate inefficiently when lightly loaded and more efficiently when heavily loaded.
A typical power supply running at a 20% load could have an efficiency factor as low as 60%. At a 50% load, the same supply could be 90% efficient, providing a significant saving in power consumption.
When the Dynamic Power Saver feature is activated, the total enclosure power consumption is monitored in real time. Power supplies are placed in a standby condition when the power demand from the server enclosure is low. When power demand increases, the standby power supplies instantaneously deliver the required power. This enables the enclosure to operate at optimum efficiency.
A minimum of two power supplies are always active; the maximum load that can be reached on any power supply is 50%. After the 50% loading is exceeded, another two power supplies are activated and the load is shared across all four power supplies, ensuring that redundancy is maintained at all times.
Note: Dynamic Power Saver is not enabled when the 1U Power Enclosure is shipped and must be activated.
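The behavior just described — supplies engaged in pairs, at least two always active, no supply loaded past 50% — can be modeled in a few lines. This is a toy illustration of the idea, not the firmware algorithm; the supply rating reuses the 2000W figure from the text.

# Toy model of the Dynamic Power Saver pairing behavior.
import math

SUPPLY_W = 2000
MAX_LOAD = 0.5  # keep each active supply at or below 50% load

def active_supplies(demand_w: float, installed: int = 6) -> int:
    """Supplies kept active: pairs sized so no supply exceeds 50% load."""
    pairs_needed = math.ceil(demand_w / (2 * SUPPLY_W * MAX_LOAD))
    return min(max(2, 2 * pairs_needed), installed)

for demand in (500, 2000, 2500, 5000):
    n = active_supplies(demand)
    print(f"{demand} W demand -> {n} active supplies "
          f"({demand / n / SUPPLY_W:.0%} load each)")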
1U Power Supply LEDs
[Slide graphic — 1U power supply LED states table]
* Note 1: This condition occurs when the Dynamic Power Saver is enabled (the Dynamic Power Saver configures the available power supplies to operate at maximum efficiency).
* Note 2: To reset the power supply fault condition warning, perform one of the following actions:
— Remove and reconnect AC power from the power enclosure.
— Remove and replace the power supply as a hot-plug procedure if more than one power supply is delivering power to the server enclosure.

Power management module
The power management module, mounted on the rear of the power enclosure, performs the following functions:
* Facilitates communication of management information, such as:
— Blade server and interconnect locations
— Power supply loading and health status
* Monitors the operation of the power supplies and power enclosure
— Regulates the voltages to match all voltages in other power enclosures
— Regulates the power-up sequence of newly installed blade servers and interconnect switches
— Stores system faults and other data that can be viewed
— Reports thermal, power, and protection events by communicating with the iLO application-specific integrated circuit (ASIC)
— Communicates blade location, power supply budget, and status information
* Connects to the enclosures above and below it with RJ-45 cables
* Provides a power enclosure service port for troubleshooting and diagnostics
The power management module is hot-pluggable; it can be removed and replaced at any time without loss of service. It is cabled to the blade server management module; its operation is independent of the operation of the blade servers and interconnect switches.
The 1U and the 3U power management modules are similar, but are not interchangeable. The 1U power management module has power saving circuitry and options that are different from the 3U power management module. However, the 1U power management module is compatible with the 3U power management modules. Although they cannot be substituted for each other, both can be operated in the same rack.
Important! When deploying the HP BladeSystem 1U power enclosure and its attached server enclosure into an existing installation, you must upgrade the management module firmware on all enclosures to the latest version. When replacing a management module, you must enter the serial number of the old management module into the new management module.
The connectors on the power management module are covered when the product ships. When you install it, remove the protective covers and connect cables as necessary.
Power management module LEDs
[Slide graphic — power management module LEDs: identification, power status, and management link activity]
The power management module has LEDs for identification, power status, and management activity. The location of the power management module switches, connectors, and LEDs on 1U and 3U enclosures is the same.
Note: The power zone switch is not used with the 1U power enclosure.
Important! If the power configuration switch is set improperly, all power management module management link connector LEDs flash. If management modules are cabled improperly, all management link connector LEDs flash on all management modules.

Power backplane
The power backplane supplies power to all the equipment in the enclosure. Power is connected to the -48V DC power coupler pins mounted on the power backplane. Power is distributed to the A-side components with the A-side coupler and the B-side components with the B-side coupler.
Both electrical and mechanical fuses in the backplane power feed to each bay protect the power backplane from possible electrical damage. The power backplane and data backplane are serviceable without tools and without removing the blade servers and interconnects from the enclosure or removing the blade enclosure from the rack.

Power distribution system
The power distribution system carries the DC power from the power enclosures to the server enclosures. In the HP BladeSystem power subsystem, DC power is distributed from the power supplies in the power enclosures to the server enclosures through one of three power distribution options:
* Scalable bus bar
— Five server enclosures
— Two power enclosures
— 36U rack space
* Mini bus bar
— Three server enclosures
— Two power enclosures (requires the Dual Power Input Kit option)
— 24U rack space (a full 42U rack of blades requires two pairs of mini bus bars)
* Bus box
— One server enclosure
— One power enclosure
— 9U rack space
Hinges attach the bus bars to the rack rails to enable easy rear access to the blade server enclosure, network cables, and management modules. Two mini bus bar configurations can be stacked to fill a standard 42U rack.
This power distribution system offers several advantages, including:
* Serviceability — Circuit breakers on the bus bars enable you to shut off the power to individual blade server enclosures for safe serviceability.
* Flexibility — Power distribution options offer a variety of deployment sizes, from single-enclosure test or evaluation configurations through full-rack blade deployments.
* Cable reduction — An important advantage of blade servers is the reduced number of cables necessary. In this example, one pair of mini bus bars consolidates the power for up to 48 servers into four output cables from the power enclosure.
A simple selection sketch for these options follows.
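The selection sketch below maps the three distribution options to deployment sizes using the capacities stated above. The decision logic itself is illustrative — real deployments should be sized with the HP power calculator and rack planning tools.

# Sketch: choosing a p-Class power distribution option by enclosure count.
def pick_distribution(server_enclosures: int) -> str:
    if server_enclosures <= 1:
        return "bus box (1 server enclosure, 1 power enclosure, 9U)"
    if server_enclosures <= 3:
        return "mini bus bar pair (up to 3 server enclosures, 24U)"
    if server_enclosures <= 5:
        return "scalable bus bar (up to 5 server enclosures, 36U)"
    return "two mini bus bar pairs (full 42U rack)"

for n in (1, 3, 5, 6):
    print(n, "->", pick_distribution(n))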
Scalable bus bars
One scalable bus bar supports one or two 3U power enclosures (up to 12 power supplies) and up to five 6U blade server enclosures. This solution enables future growth and flexibility in two ways:
* Customers can deploy this solution initially with less than the maximum supportable configuration, and then add ProLiant BL p-Class blade servers and blade server enclosures as their computing needs grow.
* Customers can mount other devices (such as switching hardware) in the same rack above the 6U blade server enclosures.

Mini bus bars
Mini bus bars ship in pairs; each pair supports one 3U power enclosure and up to three 6U blade server enclosures. This solution offers flexibility because other devices can be mounted above it in a rack. By adding a Dual Power Input Kit plus a second power enclosure, you can install a second pair of mini bus bars above the lower set to fill a 42U rack.
A full 42U rack of ProLiant BL p-Class blade servers requires:
* Two pairs of mini bus bars
* Two power enclosures
* A Dual Power Input Kit
Note: Depending on the types of servers installed, a 42U rack can hold up to six ProLiant BL p-Class blade server enclosures and two power enclosures using two pairs of mini bus bars.

Scalable and mini bus bar examples
A sample scalable bus bar configuration is shown in the left graphic; a configuration with two mini bus bars is shown on the right.
Caution! Always use blanking panels to fill empty vertical spaces in the rack. This arrangement ensures proper airflow. Using a rack without blanking panels results in improper cooling that can lead to thermal damage.

Dual Power Input Kit
Deploying a full 42U rack of ProLiant BL p-Class blade servers requires two pairs of mini bus bars. In addition, when using an enhanced blade server enclosure, you need two power enclosures for power redundancy. The dual-power boxes provided in the Dual Power Input Kit enable two power enclosures (instead of one) to be connected to a pair of mini bus bars, providing additional power redundancy.
The Dual Power Input Kit contains:
* Two dual-power boxes
* Installation instructions

Server enclosure DC power distribution
To accommodate the higher power load, the enhanced server enclosure is split in half across the backplane between Side A and Side B, with three power supplies providing -48V DC to the Side A bus bar and three other power supplies feeding the Side B bus bar.
Half the blades in a server enclosure are powered by Side A (bays 2–5) and half by Side B (bays 6–9). The interconnects, which reside in bays 1 and 10, and the enclosure management module continue to share power from both sides (A and B) of the power subsystem. This allows both interconnect modules, as well as internal chassis communication, to continue in the event of a power failure on a single AC input.
Note: To get AC input redundancy, a second power enclosure is always needed. See the redundant power delivery discussion below for the full redundancy solution.
Note: The upgraded enclosure was introduced because of power loading problems with 16 dual-processor BL30p servers. A fully loaded enclosure can have as many as 32 processors and 32 disk drives operating, plus two interconnect switches. All shipped enclosures (as of September 2004) are the upgraded type.

Nonredundant power configurations
In the original blade server enclosure, a single power enclosure with three power supplies installed in bays 1 through 3 powered the entire blade server enclosure.
The double-dense ProLiant BL30p blade servers drove the development of the enhanced blade server enclosure. In this enclosure, the same power supplies deliver power to blade servers in bays 2 through 5 only. Three additional power supplies, installed in power enclosure bays 4 through 6, are required to power blade servers in bays 6 through 9. The enhanced server enclosure powers both networking slots (A and B) and the management module redundantly at all times.

Redundant power delivery
All ProLiant BL p-Class blade server enclosures support redundant A and B power feeds.
Mini bus bars
The enhanced server enclosure requires individual power to the A and B sides. As the diagram on the slide shows, one enhanced server enclosure requires one power enclosure for nonredundant power; you must install two power enclosures for redundancy. HP offers a Dual Power Input Kit to add a second power enclosure. A Power Enclosure Connectivity Kit is also available to upgrade a mini bus bar system to redundantly supply more than three enhanced blade server enclosures.
Scalable bus bars
By design, a scalable bus bar supports two power enclosures and dual A and dual B feeds. Thus, it provides full redundancy without modification. However, two power enclosures are still required for redundant power.
A quick feed-loss check appears in the sketch below.
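The redundancy idea above reduces to a simple check: after losing one AC feed, the surviving side's supplies must still carry the whole load. The sketch below is illustrative only — the supply rating reuses the 2000W figure from the 1U enclosure discussion, and the loads are hypothetical example values.

# Illustrative A/B feed-loss check.
SUPPLY_W = 2000

def survives_feed_loss(load_w: float, supplies_a: int, supplies_b: int) -> bool:
    """True if the weaker side alone can carry the full load."""
    surviving_capacity = min(supplies_a, supplies_b) * SUPPLY_W
    return load_w <= surviving_capacity

print(survives_feed_loss(5500.0, 3, 3))  # True  — 6000 W survives either feed loss
print(survives_feed_loss(5500.0, 3, 2))  # False — only 4000 W if the 3-supply side is lost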
Redundant power configurations
[Slide graphic — 42U and 21U racks with redundant power supplies and redundant AC; assumes three-phase power, maximum blade configuration, and GbE2 interconnect switches]
The enhanced blade server enclosure does not allow 96 blade servers in one 42U cabinet with redundancy. The maximum 42U and 21U rack configurations are based on either physical constraints or maximum power loads.
Certain limitations apply, given the following assumptions:
* All blade servers in a rack are the same model
* All models are configured with the maximum number of processors, memory, and two disk drives
* All configurations use the GbE2 interconnect switches
* All server blades are running at or near 100% average utilization rates using three-phase power
The maximum number of blade servers per rack is limited because:
* Dual Power Input Kits, which occupy 6U of rack space, are required in a rack using mini bus bars.
* When scalable bus bars are used, the two power enclosures cannot supply sufficient power for full redundancy at the maximum configuration.
Note: Customers can build larger rack configurations if they do not require AC redundancy. Refer to the HP BladeSystem p-Class Sizing Utility to determine specific requirements.

Summary
The power subsystem in an HP BladeSystem features power supplies in a 3U or 1U power enclosure. Three-phase power is required to support the maximum number of double-dense blades in all configurations; single-phase power is typically sufficient for smaller configurations. Power is distributed using scalable bus bars, mini bus bars, or bus boxes, depending on the number of servers and the type of power enclosures and power supplies installed in the rack. You can configure the system for power redundancy using HP components.
The HP BladeSystem p-Class infrastructure offers two flexible power schemes to accommodate a range of requirements, from small offices and remote sites to the largest data centers:
* 1U power enclosure, supplying power to a single blade server enclosure
* 3U power enclosure, providing rack-centralized power
1U all-in-one power enclosure
The 1U all-in-one power enclosure provides hot-plug, redundant power for a single enclosure of blades from standard single-phase AC inputs, making it ideal for small office and remote site blade deployments.
Rack-centralized power subsystem
For multi-enclosure and multi-rack blade installations typically found in data centers, HP provides an efficient rack-centralized power subsystem that provides redundant, scalable power to all blade servers in a rack. The rack-centralized power subsystem includes:
* 3U power enclosures — Hold the hot-plug power supplies
* Hot-plug power supplies — Convert single-phase or three-phase AC inputs to -48 volt DC power
* Power distribution — Carries the DC power from the power enclosures to the blade server enclosures in the rack
Power distribution units (PDUs)
PDUs are not required, but are recommended. HP offers a variety of PDUs to meet every need.
Uninterruptible power supplies (UPSs)
HP offers two HP-branded UPSs for the BladeSystem — the HP R5500XR and HP R12000XR. HP also supports the PowerWare 9305 UPS.

HP BladeSystem c-Class Server Blades — Module 6, he646s a.01

Objectives
After completing this module, you should be able to:
* List the features and benefits of BladeSystem c-Class server blades
* Identify the BladeSystem c-Class server blade models
c-Class versus p-Class server blades
The HP BladeSystem portfolio evolved to address the changing needs of the enterprise computing environment. Compared to the p-Class server blades, the BladeSystem c-Class enables easier, faster, and less expensive deployment and operation with a more flexible infrastructure, meeting current and future IT requirements. Features of the c-Class include:
* Support for a variety of processors
— AMD Opteron
— Intel Xeon
— Intel Itanium
* Equivalence to ProLiant 2P and 4P server designs
* iLO 2 Standard Blade Edition included with every blade; iLO 2 Advanced is supported
Advantages of c-Class server blades over the p-Class include:
* Twice the memory capacity compared to current Intel blades
* Twice the I/O expansion compared to current blades
* Support for future switching technologies
— 8Gb Fibre Channel
— 10GbE
— InfiniBand
* First to deliver the HP ProLiant DL380-class feature set in a blade form factor
* Hot-plug SAS hard drive support on every blade

Features and benefits
Features and benefits of the BladeSystem c-Class include:
* Dual-core processors — Two execution cores per processor socket (Xeon or Opteron) enable greater system scalability. As the next generation of software applications is developed to take advantage of dual- and multicore processor technology, customers will benefit from support for more users and bigger applications.
* Serial attached storage — As clock frequencies increased to keep pace with the bandwidth requirements of attached storage devices, serial drive interfaces replaced U320 buses. The point-to-point interface of Serial Attached SCSI (SAS) and Serial ATA (SATA) minimizes bottlenecks, enabling rapid data transfer between RAID controllers and drives.
* Integrated HP Smart Array RAID controllers with battery-backed write cache (BBWC) option — An integrated SAS RAID controller in each c-Class server blade enables RAID levels 0, 1, and 5. Because the BladeSystem c-Class features a common array controller strategy with ProLiant ML and DL servers, customers do not need to learn a new architecture. An optional battery-backed write cache enabler provides a means for storing and saving data in the event of an unexpected system shutdown.
Note: RAID 5 is available on the ProLiant BL480c server blade only.
* Multifunction NICs — HP multifunction NICs provide a high-performance network interface with support for TCP/IP Offload Engine (TOE), iSCSI, and Remote Direct Memory Access (RDMA) over a single network connection. Previously, the typical server environment required separate connectivity products for networking, storage, interconnects, and infrastructure management. HP multifunction NICs present a single connection supporting multiple functions, enabling you to manage an entire infrastructure as a single, unified fabric. These NICs provide high network performance with options for additional upgrades to enhance memory and storage utilization.
Dual-core processors
[Slide graphic — single-core versus dual-core processor diagram]
* Dual-core Xeon processors feature Hyper-Threading Technology
* Dual-core Opteron processors feature Direct Connect Architecture

Dual-core technology is designed to make server processors perform more efficiently in multithreaded environments. It provides an additional degree of processor design flexibility that offers increased performance while addressing power and timing issues, without being cost-prohibitive.
A dual-core processor is a single physical package that contains two full processor cores per socket. The two cores share the same functional execution units and cache hierarchy; however, the operating system recognizes each execution core as an independent processor.
The performance improvement of a dual-core processor is in addition to the improvement that can be achieved with Hyper-Threading Technology. When enabled, Hyper-Threading Technology makes a single processor look like two processors to the operating system. BladeSystem c-Class server blades that feature Intel dual-core processors with Hyper-Threading Technology can run two threads on each execution core, enabling these processors to run up to four threads simultaneously. The added capacity provided by the second execution core reduces competition for processor resources and enables greater processor utilization.
Like the Intel processors, dual-core AMD Opteron processors connect two CPUs on one die for reduced latencies, improving system efficiency and application performance for computers running multiple applications at the same time or compute-intensive multithreaded applications. AMD Direct Connect Architecture reduces the bottlenecks of CPU design by eliminating the glue logic normally required for peripheral subsystems such as memory and I/O. The processor die integrates the DDR memory controller and L1/L2 caches, reducing the memory latency associated with separate-chip designs. The integrated memory controller also means that memory bandwidth scales with additional processors, reducing the need for large on-die caches.
Direct Connect Architecture with HyperTransport technology provides balanced and expandable throughput for a variety of I/O interconnects such as Fibre Channel, Gigabit/10Gb Ethernet (when available), PCI-X, and Serial ATA. With a throughput of up to 8GB/s per link, the HyperTransport bus scales as processors are added, providing a peak throughput rate of 16GB/s per processor in a two-way system.

Serial storage technology
* Features and benefits
— Point-to-point topology
— Reliable, scalable throughput
— Performance beyond parallel ATA and SCSI
— Improved performance — 1.5Gb, 3.0Gb, 6.0Gb, and faster
— Growth roadmap — investment protection
— Thin cables — improved chassis airflow enables smaller form factors and consolidation
— Smaller connectors — enable 2.5" dual-ported drive consolidation

SAS integrates two established technologies, combining the proven utility, reliability, and performance attributes of the SCSI protocol with the performance advantages of a serial architecture. SAS solutions accommodate both low-cost bulk storage (SATA) and performance and reliability in mission-critical environments.

[…] 425 server ports, compared to HP switches handling 16 server ports each.
The customer can get a Cisco experience without having to manage up to 26x the number of switches in the network. Cisco will offer stacking in a switch next year, and Blade Network Technologies plans to offer stacking. Pass-through sales have not dropped as much as expected, probably because customers have chosen to run pass-through to their standalone Cisco 6500 switches.

Key Virtual Connect components

* Ethernet modules
* Fibre Channel modules
* Virtual Connect Manager

Like Ethernet and SAN switches, the Virtual Connect modules slide into the switch bays of the enclosure. The Ethernet module supports eight 1Gb uplinks and two 10Gb uplinks. The 10Gb uplinks can be used for stacking to other Virtual Connect switches in neighboring enclosures to create a single networked domain for the server administrator.

The Virtual Connect Manager runs on the Ethernet module and supports both the Ethernet and Fibre Channel environments. Configuration of the virtual NICs and HBAs is supported through the Onboard Administrator.

Installing and Configuring c-Class Connectivity Options

Express Setup — Obtain an IP address for the fa0 interface through the Onboard Administrator

For the switch module to obtain an IP address for its fa0 interface through the Onboard Administrator, these conditions must be met:
* The BladeSystem c-Class enclosure is powered on and connected to the network.
* Basic configuration of the Onboard Administrator is completed, and you have the username and password for the Onboard Administrator.
* A DHCP server is configured on the network segment on which the server blade resides, or the Onboard Administrator is configured to run as a DHCP server.

Install the switch module in the interconnect module bay; after approximately two minutes, the switch module automatically obtains an IP address for its fa0 interface through the Onboard Administrator.

After you have installed the switch module, it powers on. As it powers on, the switch module begins POST, a series of tests that runs automatically to ensure that the switch module functions properly. Wait for the switch module to complete POST; it might take several minutes. Verify that POST has completed by confirming that the system and status LEDs remain green. If the switch module fails POST, the system LED turns amber. POST errors are usually fatal; call Cisco Systems immediately if your switch module fails POST.

Wait approximately two minutes for the switch module to get the software image from its flash memory and begin autoinstallation. Then follow these steps:
1. Using a PC, access the Onboard Administrator in a browser window.
2. Open the Interconnect Bay Summary window, where you can find the assigned IP address of the switch module fa0 interface in the Management URL column.
3. Click the IP address hyperlink for the switch module in the Management URL column to open a new browser window. The Device Manager window for the switch module displays.
4. On the left side of the Device Manager GUI, click Configuration -> Express Setup. The Express Setup home page displays.
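If you prefer the Onboard Administrator CLI to the browser for this first check, you can list the interconnect bays and their assigned management addresses before opening Device Manager. The following is a minimal sketch; show interconnect list and connect interconnect are standard Onboard Administrator CLI commands, but the output layout and the values shown are illustrative assumptions, not captured output:

    OA> show interconnect list

    Bay  Interconnect Type  Manufacturer    Power  Health  Management IP
    ---  -----------------  --------------  -----  ------  --------------
    1    Ethernet           Cisco Systems   On     OK      192.168.1.41

    OA> connect interconnect 1

The address reported for the bay should match the one shown in the Management URL column of the Interconnect Bay Summary window.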
Completing the Express Setup (network settings)

Enter this information in the Network Settings fields:
* In the Management Interface (VLAN ID) field, the default is 1. Enter a new VLAN ID only if you want to change the management interface through which you manage the switch module and to which you assign IP information. The VLAN ID range is 1 to 1001.
* In the IP Address field, enter the IP address of the switch module.
* In the IP Subnet Mask field, click the drop-down arrow and select an IP subnet mask.
* In the Default Gateway field, enter the IP address for the default gateway (router).
* In the Switch Password field, enter your password. The password:
— Can be from 1 to 25 alphanumeric characters
— Can start with a number
— Is case sensitive
— Allows embedded spaces, but does not allow spaces at the beginning or end
* In the Confirm Switch Password field, enter your password again.

Completing the Express Setup (optional settings)

You can enter the Optional Settings information now or enter it later by using the device manager interface:
* In the Host Name field, enter a name for the switch module. The host name is limited to 31 characters; embedded spaces are not allowed.
* In the System Contact field, enter the name of the person who is responsible for the switch module.
* In the System Location field, enter the room, enclosure, and rack where the switch module is located.
* In the Telnet Access field, click Enable if you are going to use Telnet to manage the switch module by using the CLI. If you enable Telnet access, you must enter a Telnet password.
* In the Telnet Password field, enter a password. The Telnet password can be from 1 to 25 alphanumeric characters, is case sensitive, and allows embedded spaces, but does not allow spaces at the beginning or end.
* In the Confirm Telnet Password field, reenter the Telnet password.
* In the SNMP field, click Enable to enable Simple Network Management Protocol (SNMP). Enable SNMP only if you plan to manage the switch module by using CiscoWorks 2000 or another SNMP-based network-management system.
* If you enable SNMP, you must enter a community string in the SNMP Read Community field, the SNMP Write Community field, or both. SNMP community strings authenticate access to MIB objects. Embedded spaces are not allowed in SNMP community strings. When you set the SNMP read community, you can access SNMP information but cannot modify it; when you set the SNMP write community, you can both access and modify SNMP information.
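Express Setup writes these values into the switch running configuration. For reference, a hedged sketch of roughly equivalent Cisco IOS commands follows; the host name, password, and community strings are placeholder values, and the exact configuration Express Setup generates may differ:

    Switch> enable
    Switch# configure terminal
    Switch(config)# hostname blade-sw1
    ! Telnet access with a line password
    blade-sw1(config)# line vty 0 15
    blade-sw1(config-line)# password TelnetPass1
    blade-sw1(config-line)# login
    blade-sw1(config-line)# exit
    ! SNMP read and write community strings
    blade-sw1(config)# snmp-server community public RO
    blade-sw1(config)# snmp-server community private RW
    blade-sw1(config)# end
    blade-sw1# copy running-config startup-config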
Click Submit to save your settings, or click Cancel to clear your settings. When you click Submit, the switch module is configured and exits Express Setup mode. The PC displays a warning message and then attempts to connect with the new switch module IP address. If you configured the switch module with an IP address that is in a different subnet from the PC, connectivity between the PC and the switch module is lost.

Disconnect the switch module from the PC, and connect the switch module to your network. After you complete Express Setup, you should refresh the PC IP address:
* For a dynamically assigned IP address, disconnect the PC from the switch module, and reconnect the PC to the network. The network DHCP server assigns a new IP address to the PC.
* For a statically assigned IP address, change it to the previously configured IP address.

Installing a Pass-Thru Module

To install a Pass-Thru Module, follow these steps:
1. Open the handle.
2. Insert the Pass-Thru Module into the bay (1).
3. Close the handle (2).

Pass-Thru Module connectors

The ports on the rear panel of the HP 1Gb Ethernet Pass-Thru Module map to the server blades in the enclosure. Refer to the HP BladeSystem Enclosure Setup and Installation Guide for more information on BladeSystem port mapping.

HP BladeSystem p-Class Network Connectivity Options
Module 13
Objectives

After completing this module, you should be able to:
* Discuss general networking technology concepts, including:
— Layer 2 switching
— Virtual LAN (VLAN)
— Spanning Tree Protocol (STP)
— Port trunking and load balancing
— Teaming and port mirroring
* Explain the enclosure network and HP integrated Lights-Out (iLO) signal routing
* Describe the blade server signal routing
* Describe and position the HP BladeSystem network interconnect options
* Explain the technical aspects of HP BladeSystem patch panels
* Explain the technical aspects of HP BladeSystem network switches
* List and describe the HP BladeSystem multifunction network adapters

Introduction

Every HP BladeSystem server enclosure requires an interconnect option to route network signals from the server bays. ProLiant BL p-Class servers use either interconnect switches or patch panels to collect the NIC signals from the servers and send them out to the network. Advanced network interconnect options allow the creation of virtual LANs (VLANs) that separate available network bandwidth into multiple independent and secure channels. Redundant features such as redundant NIC capability and redundant patch panels and network switches provide high availability at the front end.

Industry-standard switch technology for networks

Because BladeSystems are standards-based, they provide advantages such as easy installation, configuration, and interoperability with existing network infrastructures and management. The NICs are on the blade servers, but the network signals for each NIC are routed through the signal backplane to the interconnect options, which plug into the outside bays of the blade server enclosure, one on each side. The interconnect switches or patch panels provide pass-through of Ethernet network and storage signals to the external network infrastructure.

Network and iLO signal routing — Enhanced enclosure

In the enhanced blade server enclosure, each blade server bay has four NIC signals and one iLO signal. All iLO signals are routed to the dedicated iLO port on the rear of the enclosure (on the blade server enclosure management module). This change supports an additional NIC in each blade server bay: each bay now has four NICs, of which two are routed to each interconnect bay.
This enclosure also supports the half-height blade servers (ProLiant BL30p and BL35p). Each half-height blade server has two NICs and one iLO, and connects to the corresponding enclosure signals.

The blade server enclosure also contains two crosslink signals, connecting ports 17 and 18 of each interconnect bay. The patch panel and network switch configuration is the same as in the original blade server enclosure, with one exception: interconnect bay B no longer routes any iLO signals.

Overview of the network interconnect options — Types of p-Class interconnects

HP offers five ProLiant BL p-Class interconnect options to provide signal routing and redundancy for the entire enclosure. The two patch panels provide pass-through of network signals (RJ-45 Patch Panel) or network and storage signals (RJ-45 Patch Panel 2), giving you the flexibility to choose the options you prefer. Alternatively, the interconnect switch options provide different levels of Ethernet switching functionality and Fibre Channel signal pass-through. The switch options are:
* GbE2 Interconnect Switch
* Cisco Gigabit Ethernet Switch Module (CGESM)

The CGESM was developed and manufactured by Cisco for the HP BladeSystem to provide Fibre Channel SAN pass-through support in a 100% Cisco switch. The CGESM is consistent with Cisco technology standards, including Cisco's latest management, Application Specific Integrated Circuit (ASIC), and architecture standards.

The RJ-45 Patch Panel, RJ-45 Patch Panel 2, GbE2 Interconnect Switches, and the CGESM may be mixed within the rack, but not within the same blade server enclosure. The corresponding interconnect modules may also be mixed within the rack, but not within the same blade server enclosure.

Note: iLO typically operates at 100Mb/s and is routed to the integrated iLO port on the management module. With the updated enclosures, iLO is routed from the blade servers to the integrated iLO port on the enclosure management module, not to the interconnect switches. The actual negotiated speed of any port depends on the capability of the device to which it is attached.

RJ-45 Patch Panels

* Enable Ethernet LAN signals to pass through to third-party LAN devices
* Provide all network data connections for one blade server enclosure
* RJ-45 Patch Panel 2 provides eight Fibre Channel front panel ports per patch panel for Fibre Channel pass-through

The RJ-45 Patch Panel and RJ-45 Patch Panel 2 enable Ethernet LAN signals to pass through to third-party LAN devices, giving you flexibility in choosing a network switch, hub, or router. One pair of RJ-45 patch panels provides all network data connections for one blade server enclosure. Each patch panel gathers the NIC connections from all installed blade servers that are routed to Side A or Side B of the server enclosure.

Both the RJ-45 Patch Panel and RJ-45 Patch Panel 2 pass all 32 Ethernet signals as separate RJ-45 connections through two rear-mounted LAN interconnect modules per patch panel:
* Ten-connector module
* Six-connector module

Both patch panels support a combination of ProLiant BL p-Class servers. Only one NIC at a time may be enabled for the Preboot eXecution Environment (PXE). A NIC on each server is preselected as the default PXE NIC, which results in all the PXE-enabled NICs being routed to the same interconnect.
However, you can use the ROM-Based Setup Utility (RBSU) to designate any NIC as the default PXE NIC. Thus, system availability can be enhanced by selecting PXE-enabled NICs that are routed to different interconnect blades.

In addition to Ethernet signal pass-through, the RJ-45 Patch Panel 2 provides eight Fibre Channel front panel ports to support signal pass-through for up to eight blade servers with two Fibre Channel ports each.

Note: The ProLiant BL40p blade server does not require Fibre Channel signals to be routed to the interconnect bays.

The HP RJ-45 Patch Panel and RJ-45 Patch Panel 2 kits each contain two patch panel interconnects.

Important! When using the upgraded enhanced backplane enclosure, four NICs are provided for data. iLO data from all blades flows through the iLO port on the rear of the enclosure.

ProLiant p-Class GbE2 Interconnect Switch

The ProLiant BL p-Class enclosure provides eight blade server bays, each supporting up to four Ethernet NICs. Thus, a fully configured enclosure can have up to 32 Ethernet cables, and a fully configured 42U rack can have up to 192 Ethernet cables. The ProLiant GbE2 Interconnect Switch addresses the need for network cable reduction: the two Ethernet switches in the enclosure consolidate the NIC signals onto a small number of 1000Mb/s uplinks.

Note: The GbE2 switch is manufactured by Nortel.

Cisco Gigabit Ethernet Switch Module

* 24-port gigabit Ethernet switch
* SFP and RJ-45 uplinks for fiber and copper flexibility
* Enhanced Cisco IOS images
* Proven Catalyst 2970 Layer 2+ design
* SAN pass-through

A co-branded Cisco and HP product, the Cisco Gigabit Ethernet Switch Module (CGESM) was developed and manufactured by Cisco for the HP BladeSystem to provide flexible Fibre Channel SAN pass-through support and improved management efficiency. The CGESM is a 24-port gigabit Ethernet switch featuring SFP and RJ-45 uplinks for fiber and copper flexibility. HP BladeSystem enclosures with integrated Cisco switches reduce the number of cables required to connect servers to the network, reducing networking costs per port and the costs of cabling and equipment. Additional features include:
* Enhanced Cisco IOS images
* Proven Catalyst 2970 Layer 2+ design

CGESM IOS CLI support provides a common user interface and command set with all Cisco routers and Cisco Catalyst desktop switches.

Benefits
* Internal switch-to-switch cross-connects build in redundancy and flexibility
— Disabled by default, so there are no configuration surprises
* Fiber and copper Ethernet options
— Mix and match uplink ports at the rear
* Front panel Cisco LED and serial port management
* Front panel ports for maintenance and network connections

Note: The Cisco Gigabit Ethernet Switch Module ships as a single unit and should be ordered in quantities of two. The base unit consists of 16 GbE downlink ports to servers, two GbE cross-connect ports that are internal to the enclosure, two front panel GbE ports, and four SFP ports in the rear panel.

CGESM service and support

* Support provided by HP
— HP provides first- and second-level customer support
— Cisco provides Level 3 support to the HP service and support organizations
— Cisco is partnering with HP for product training
— 3-3-3 product warranty
* Firmware downloads will be available from HP only
— Cisco Enhanced IOS
— Cisco Enhanced IOS Crypto Version
* IOS versions will follow Cisco releases, twice yearly on average

Support provided by HP includes:
* HP provides first- and second-level customer support
* Cisco provides Level 3 support to the HP service and support organizations
* Three years parts, three years labor, and three years on-site product warranty

Cisco is partnering with HP for product training. Training will be integrated into a formal Advanced Blade Training Module, which is offered by the ISS Training organization on a regular basis.

Firmware downloads will be available from HP only:
* Cisco Enhanced IOS
* Cisco Enhanced IOS Crypto Version

IOS versions will follow Cisco releases, twice yearly on average.

BladeSystem switch generational comparison

Basic similarities among the switches include:
* 16 down ports
* Layer 2 switches
* Copper or fiber media

Similarities between the GbE2 and the CGESM include:
* All ports are Gigabit in both switches
* Both offer six Gigabit uplinks
* Both support SAN

In addition, both the GbE2 and the CGESM switches support Network Time Protocol, which enables the switch to send a request to a primary or secondary NTP server in each polling period asking for Greenwich Mean Time (GMT). Current date and time information can be set manually or can be obtained through Network Time Protocol. The current date and time display on the management interfaces and are used to record the date and time of switch events.

Important differences between the GbE2 and CGESM switches include:
* GbE2 is IEEE standards-based; CGESM is 100% compatible with Cisco systems
* GbE2 uses Broadcom ASICs; CGESM uses Cisco ASICs for better connection at speeds less than 1Gb
* GbE2 has a hot-plug network cable release; the CGESM cable cube is built into the switch
— To remove the CGESM, unclip the SFP modules from the rear of the enclosure
— When you remove the GbE2 switch, all cables are automatically released and held in the cube until the switch is reinserted
* GbE2 switch default = BOOTP; GbE switch default = DHCP (Windows must enable BOOTP in DHCP settings)

The GbE2 Interconnect Switch is priced lower than the CGESM.

GbE2 modules

GbE2 modules, or cubes, for the SAN attach to the upper connectors of the GbE2 Interconnect Switch. Cubes for the LAN attach to the lower connectors of the GbE2 Interconnect Switch. The GbE2 modules are:
* Quad SX
* Quad T2
* Fiber LC
Gigabit Ethernet switch best practices

* Factory default configuration settings
— User accounts
— Default VLANs
— Remote management IP interfaces
* Access the GbE switch using the serial management port
* The GbE2 switch can be configured with up to 256 IP interfaces

Consider the following best practices when implementing the GbE or GbE2 Interconnect Switch:
* Before you configure the interconnect switches, ensure that you understand the factory default configuration settings for user accounts, default VLANs, and remote management IP interfaces on the switches.
— The interconnect switch does not have any initial user names or passwords set. HP recommends that after logging on, you create at least one root-level user as the switch administrator.
— The interconnect switch ships with a default configuration with all ports (of both Switch A and Switch B) enabled and assigned the same VLAN. By default, this default VLAN has a VLAN ID equal to 1, is mapped to Spanning Tree Group 1, and has STP enabled.
— Each switch module must be assigned its own IP address, which is used for communication with an SNMP network manager or other TCP/IP application (for example, web or TFTP). The factory default is set for the switch module to automatically obtain the IP address using the DHCP service from a DHCP server on the attached network. You can also manually change the default switch IP address to meet the specification of your networking address scheme.
— The GbE2 Interconnect Switch can be configured with up to 256 IP interfaces. Each IP interface represents the GbE2 Interconnect Switch on an IP subnet on your network. The IP Interface option is disabled by default.

To enhance ProLiant BL p-Class GbE2 Interconnect Switch management and user accountability, three levels, or classes, of user access have been implemented on the interconnect switch: Root, User+, and User. Some menu selections available to users with Root privileges may not be available to those with User+ and User privileges. The following table summarizes user access rights.

Privilege                                 Root   User+       User
Configuration                             Yes    Read-only   Read-only
Network Monitoring                        Yes    Read-only   Read-only
Community Strings and Trap Stations       Yes    Read-only   Read-only
Update Firmware and Configuration Files   Yes    No          No
System Utilities                          Yes    Ping-only   Ping-only
Factory Reset                             Yes    No          No
Reboot Switch                             Yes    Yes         No

You can access the GbE Interconnect Switch using the serial (DB-9) management port.

HP Dual NC370i Multifunction network adapter

* Supports accelerated iSCSI
* Roles include:
— iSCSI device — Combines SCSI and Ethernet technologies
— TOE NIC — Shifts processing of the communications protocol stack from the server processor to the NIC
— RDMA device — Moves data from the memory of one computer directly into the memory of another with minimal overhead

The HP Dual NC370i Multifunction network adapters for BladeSystems are cost-effective networking adapters that support accelerated iSCSI. The NC370i will also support TCP/IP Offload Engine (TOE) and Remote Direct Memory Access (RDMA) as those technologies become available.
Customers deploying the NC370i can expect improved network communications, lowered blade server processor utilization, and a simplified IT infrastructure. Because the NC370i is a single card running over a single cable supporting multiple functions, it can assume any defined networking connection role, functioning as an iSCSI device, a TOE NIC, or an RDMA device.

* Simplifying the IT infrastructure — As an iSCSI HBA, the NC370i combines SCSI and Ethernet technologies to give ProLiant servers ready access to storage solutions over the same wire used for networking. This flexibility leads to a simplified infrastructure and reduced costs. When the NC370i is deployed as an iSCSI initiator, it functions as an HBA with accelerated performance for access to individual storage devices and SANs.
* Performance networking — As a TOE device, the NC370i shifts processing of the communications protocol stack (TCP/IP) from the server processor to the NIC, thereby freeing CPU cycles for other work. With TOE, the NC370i reduces the CPU cycles required to process incoming network traffic by 50% in Windows environments.
* RDMA — As a high-speed, low-latency network adapter, the NC370i provides the fastest communication between two RDMA-capable systems by moving data from the memory of one computer directly into the memory of another with minimal CPU and memory overhead. The RDMA protocol is supported as an add-on capability to the NC370i for direct memory-to-memory transfer of data. Applications currently constrained by memory, processor, or networking throughput will gain the most from the RDMA-enabled NC370i because it makes application scaling a reality. Clustered servers will benefit from the optimized performance of RDMA when the NC370i is used as a cluster interconnect.

Summary

* General networking concepts
— Layer 2 switching, VLANs, and STP
— Port trunking and load balancing
— Teaming and port mirroring
* Enclosure signal routing
— 16 network signals are routed to each network interconnect bay
— iLO is routed to side B (original enclosure) or to the enclosure management module (enhanced enclosure)
* Use the Ethernet Connectivity Mapper for blade server NIC and iLO routing
* Patch panels — Patch Panel and Patch Panel 2
* Ethernet switches — GbE, GbE2, and CGESM
* Dual NC370i Multifunction network adapter

To successfully integrate the HP BladeSystem into an existing network infrastructure, you must recognize the potential pitfalls of doing so and collaborate with the network administrator to avoid any issues. In particular, you must demonstrate foundation knowledge in these general networking areas:
* Layer 2 switching, VLANs, and STP
* Port trunking and load balancing
* Teaming and port mirroring

Both the original and enhanced blade server enclosures provide signal routing for all blade servers with the signal backplane. The enclosure routes 16 network signals to each network interconnect bay (for a total of 32 network signals). Two additional signals interconnect both bays; these are used as crosslinks by network switches, but are not used by patch panels. The original enclosure routes the iLO signal to the side B interconnect bay; the enhanced enclosure routes the iLO signal to the dedicated iLO port of the blade server enclosure management module. Each blade server NIC and iLO is routed to a specific port. This port assignment varies with different blade servers.
The HP Ethernet Connectivity Mapper utility maps the general-purpose NICs and iLO of the blade servers to their corresponding interconnect bay ports.

HP offers two classes of network interconnects:
* Patch panels — provide pass-through routing of the blade server NICs to external ports on the patch panel
— Patch Panel — 10/100/1000Mb/s speeds, no Fibre Channel support
— Patch Panel 2 — 10/100/1000Mb/s speeds, pass-through Fibre Channel support
* Ethernet switches — provide Ethernet network switch capabilities such as VLANs, port trunking, and Spanning Tree Protocol
— GbE (D-Link) — 10/100Mb/s speeds, no Fibre Channel support
— GbE2 (Nortel) — 10/100/1000Mb/s speeds, pass-through Fibre Channel support
— CGESM (Cisco) — 10/100/1000Mb/s speeds, pass-through Fibre Channel support

The Dual NC370i Multifunction network adapter enables accelerated iSCSI support for ProLiant BL20p G3, BL25p, BL35p, and BL45p blade servers.

SAN Connectivity Options Installation and Configuration
Module 14

Objectives

When you have completed this module, you should be able to:
* Configure a Brocade 4Gb SAN switch
* Install a Brocade 4Gb SAN switch

Configuring the Brocade 4Gb SAN Switch

The Brocade Switch Explorer GUI and the Fabric OS CLI offer a complete fabric management toolset for HP StorageWorks SANs that enables you to:
* Access the full range of Fabric OS features, based on license keys
* Configure, monitor, dynamically provision, and manage every aspect of the SAN
* Configure and manage the HP StorageWorks fabric on multiple, efficient levels
* Identify, isolate, and manage SAN events across every switch in the fabric
* Manage switch licenses
* Perform fabric stamping

To manage a switch using Telnet, Simple Network Management Protocol (SNMP), or Advanced Web Tools, the switch must be connected to a network through the switch Ethernet port (out of band) or through Fibre Channel (in band). The switch must be configured with an IP address to allow for the network connection.

You can access switches from different connections, such as Advanced Web Tools, the CLI, and the API. When these connections are simultaneous, changes from one connection might not be updated to the other, and some modifications might be lost. When simultaneous connections are used, make sure that you do not overwrite the work of another connection.

In a mixed fabric containing switches running various Fabric OS versions, you should use the latest-model switches running the most recent release for the primary management tasks. The principal management access should be set to the core switches in the fabric.
For example, to run Secure Fabric OS, use the latest-model switch as the primary Fabric Configuration Server (FCS), the location from which to perform zoning tasks, and the time server.

A number of management tasks are designed to make fabric-level changes; for example, zoning commands make changes that affect the entire fabric. When executing fabric-level configuration tasks, allow time for the changes to propagate across the fabric before executing any subsequent tasks. For a large fabric, this might take a few minutes.

Items required for configuring the switch

To install and configure the switch, you will need:
* The switch installed in an enclosure
* An IP address, subnet mask, and gateway address
* An Ethernet cable
* SFP transceivers and compatible optical cables
* Access to an FTP server (optional), used to back up the switch configuration

Planning the switch installation

The switch is a hot-pluggable device, so the enclosure can be either powered on or off during switch installation. Before installing the switch in the enclosure, you must review the following:
* Which interconnect bay pair should receive the switch
* The Fibre Channel mezzanine card location and enclosure port mapping
— All Fibre Channel mezzanine cards connecting to the switch must be in the same server mezzanine position

Important: Populate all interconnect bays with a switch, a pass-through module, or one of the blanks provided with the enclosure. Populate and set up only one Fibre Channel switch at a time to avoid an IP address conflict between duplicate default addresses.

Important: Populate all external switch ports with either an SFP or an SFP dust plug. Keep optical port dust plugs in the SFPs until an optical cable is connected. This improves cooling and prevents contamination.

Installing a Brocade 4Gb SAN Switch

Caution! Use appropriate electrostatic discharge (ESD) techniques when handling the switch, and handle it in the closed position. Install one Brocade SAN switch at a time; installing two switches at the same time will result in an IP addressing conflict.
Important! Before installing the Brocade 4Gb SAN Switch, make a record of the Media Access Control (MAC) address, which is printed on the MAC address label attached to the interconnect module. You might need this address when you configure the switch.

Caution: Do not cable the switch until after configuration.

Important! Make sure that the server NIC configuration matches the switch bay selected.

Note: There are two switch interconnect ports between adjacent bays. These ports (17 and 18) are disabled by default and must be manually enabled.

To install the switch, follow these steps:
1. Move the handle latch to the right to release the installation handle.
2. Insert the switch into the enclosure I/O bay and push it firmly into place.
3. Press the installation handle into the latch to lock the switch in place.

Important! Populate all enclosure I/O bays with a switch, a Pass-Thru Module, or one of the blanks provided with the enclosure. Populate all external switch ports with either an SFP or an SFP dust plug. Keep optical port dust plugs in the SFPs until an optical cable is connected. This improves cooling and prevents contamination.

See the HP BladeSystem Enclosure Setup and Installation Guide for more information on the association between the mezzanine bays and the interconnect bays. Where you install the mezzanine card determines where you need to install the interconnect modules.

Verifying the installation

When the switch is installed, the Onboard Administrator verifies that the switch type matches the mezzanine cards in the servers:
* If the devices match, the Onboard Administrator permits the switch to power up.
* If the devices do not match, the Onboard Administrator does not allow the switch to power up.
* If the switch does not power up, check the enclosure and switch status in the Onboard Administrator GUI.

Verify that the following LED indications are present:
* UID — Off
* Health ID — Steady green light
* Module Status — Steady green light
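The same check can be made from the Onboard Administrator CLI. A brief sketch follows; show interconnect status is a standard Onboard Administrator CLI command, but the bay number and the exact field layout shown here are assumptions for illustration:

    OA> show interconnect status 3

    Interconnect Module #3 Status:
        Status:   OK
        Thermal:  OK
        Power:    On
        UID:      Off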
Connecting to the Onboard Administrator

To establish a connection with the switch through the Onboard Administrator, the workstation/server must have a terminal application installed and a null modem serial cable (for connection to the Onboard Administrator module):
* HyperTerminal in a Windows environment
* TERM in a UNIX environment

Establishing an Ethernet connection
1. Identify the active Onboard Administrator module.
2. Locate the Ethernet port on the Onboard Administrator module.
3. Connect the Ethernet cable to the Onboard Administrator module and the workstation/server, or to the network containing the workstation.

Note: The Onboard Administrator interconnect cannot support multiple simultaneous remote console sessions, so you will not be able to open a session if another user is already connected.

Important: Verify that the switch is not being modified from any other connections during the remaining steps.

Telnet to the Onboard Administrator
1. Connect the serial cable to the workstation/server and the active Onboard Administrator module.
2. Configure the terminal application settings as follows:
— Windows environment:
+ Bits per second — 9600
+ Data bits — 8
+ Parity — None
+ Stop bits — 1
+ Flow control — None
— UNIX environment, enter: tip /dev/ttyb -9600
3. Open a Telnet connection using the IP address of the Onboard Administrator (the login prompt displays when the connection locates the switch in the network).

Accessing the Brocade 4Gb SAN Switch (1 of 2)
1. Log on to the Onboard Administrator.
— User: administrator
— Password: password
2. Identify the bay number for the installed switch.
3. At the command line, enter connect interconnect x, where x is the interconnect bay number of the switch.
— User: admin
— Password: password

Note: Some commands are case sensitive, so they should be entered as shown.

Accessing the Brocade 4Gb SAN Switch (2 of 2)

For security, use the password features of the integrated switch. At connect, you will see:
    Connecting to integrated switch 3 at 9600,N81...
    Escape character is '_'
    Press [Enter] to display the switch console:

    Command: D)isconnect, C)hange settings, R)eboot Switch, E)xit command mode > C
    Change settings for: L)ocal Session, R)emote Port [I/O Bay 3], E)xit > R
    Settings: B)audrate; flow control: N)one H)ardware S)oftware; E)xit > B
    Baud: A)1200 B)2400 C)4800 D)9600 F)19200 G)38400 H)57600 I)115200; E)xit >

Choose the outgoing baud rate and flow control, and exit.

Note: This pass-thru connection to the integrated I/O console is provided for convenience and does not supply additional access control.

Setting the IP address

Caution: Do not connect the switch to the SAN network until the IP address is set.

1. At the command line, enter ipaddrset.
2. When prompted, enter the remaining IP addressing information.
3. Optionally, verify that the IP address is correct by entering ipaddrshow at the command prompt.
4. Record the IP addressing information and store it in a safe place.
5. At this point, you can continue using the CLI to configure the switch, or exit from the Telnet session and access the Switch Explorer to complete the configuration.
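The ipaddrset dialog from steps 1 through 3 looks roughly like the following console sketch; the prompt wording and defaults vary by Fabric OS release, and all addresses here are example values:

    switch:admin> ipaddrset
    Ethernet IP Address [10.77.77.77]: 192.168.10.31
    Ethernet Subnetmask [255.255.255.0]: 255.255.255.0
    Gateway IP Address [0.0.0.0]: 192.168.10.1
    switch:admin> ipaddrshow
    SWITCH
    Ethernet IP Address: 192.168.10.31
    Ethernet Subnetmask: 255.255.255.0
    Gateway Address:     192.168.10.1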
Changing the passwords

If the passwords have not been changed from their default values, you are prompted to change them. Perform one of the following:
* Enter new system passwords.
* Press Ctrl+C to bypass the password prompts.

Verify that the login was successful; a successful login displays the switch name and the user ID to which you are connected.

Note: Up to two simultaneous administrative sessions and four user sessions can be created.

Setting the time

The date and time are used for logging events. Switch operation does not depend on the date and time setting; a switch with an incorrect time still operates properly. To set the date and time:
1. Connect to the switch and log in as admin.
2. Enter the CLI date command using the following syntax:

    date mmddHHMMyy

where:
* mm is the month; valid values are 01 through 12
* dd is the date; valid values are 01 through 31
* HH is the hour using the 24-hour clock; valid values are 00 through 23
* MM is minutes; valid values are 00 through 59
* yy is the year; valid values are 00 through 99
— Values greater than 69 are interpreted as 1970 through 1999
— Values less than 70 are interpreted as 2000 through 2069

For example, the CLI command date 0227123003 sets the time to February 27 12:30:00 UTC 2003.

To switch time zones, refer to the tsTimeZone command in the HP StorageWorks Fabric OS 5.x command reference. Links to the HP StorageWorks Fabric OS 5.x command reference guide and other Fabric OS 5.x documents can be found at:
http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&lang=en&cc=us&docIndexId=179111&taskId=101&prodTypeId=12169&prodSeriesId=A71548

Changing the Fibre Channel domain ID

To modify the Fibre Channel domain ID, follow these steps:
1. Disable the switch by entering switchDisable.
2. Enter configure; then enter a new value at each prompt, or press Enter to accept the default value.
3. At the fabric parameters prompt, enter y and press Enter.
4. Enter a unique domain ID (1 through 239).
5. Complete the remaining prompts, or press Ctrl+D to accept the remaining default values.
6. Re-enable the switch by entering switchEnable.
7. Confirm the changes made to the domain ID by entering fabricShow.
8. Optionally, verify the switch policy settings and specify any custom status policies that need to change:
— Enter switchStatusPolicyShow to verify the current policy settings.
— If desired, change policy settings by entering switchStatusPolicySet at the prompt. Refer to the HP StorageWorks Fabric OS 5.x command reference guide for available parameters.
— Customize the status policies, as required.
9. To deactivate the alarm for a particular condition, enter 0 at the prompt for that condition.
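Steps 1 through 7 of the domain ID change look roughly like the following console session; the prompt text is paraphrased rather than quoted from a live switch, and the domain value 5 is an example:

    switch:admin> switchDisable
    switch:admin> configure
    Configure...
      Fabric parameters (yes, y, no, n): [no] y
        Domain: (1..239) [1] 5
      (press Ctrl+D here to accept the remaining defaults)
    switch:admin> switchEnable
    switch:admin> fabricShow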
Disabling and enabling a switch

By default, a switch is enabled after it completes POST. The switch can be disabled or enabled, as required.

To disable the switch:
1. Connect to the switch and log in as admin.
2. Enter the command switchDisable.

All Fibre Channel ports on the switch are taken offline. If the switch was part of a fabric, the fabric reconfigures.

To enable the switch:
1. Connect to the switch and log in as admin.
2. Enter the command switchEnable.

All Fibre Channel ports that pass POST are enabled. If the switch has Inter-Switch Links (ISLs) to a fabric, it joins the fabric.

Enabling and disabling a port

To enable a port:
1. Connect to the switch and log in as admin.
2. Enter the command portEnable portnumber, where portnumber is the number of the port to be enabled.

To disable a port:
1. Connect to the switch and log in as admin.
2. Enter the command portDisable portnumber, where portnumber is the number of the port to be disabled.

Verifying the configuration

Verify the configuration by performing the following steps:
1. Check the LEDs to verify that all components are functional.
2. Enter switchShow to get information about switch and port status.
3. Enter fabricShow to get information about the fabric.

Backing up the configuration

HP recommends regular backups, which ensure that a recent configuration is available for download to replacement switches. Back up the configuration to an FTP server by entering configUpload and following the prompts.
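Taken together, the verification and backup steps form a short console session. A hedged sketch follows; the configUpload prompts vary by Fabric OS release, and the server address, user name, and file name are placeholder values:

    switch:admin> switchShow
    switch:admin> fabricShow
    switch:admin> configUpload
    Protocol (scp or ftp) [ftp]:
    Server Name or IP Address [host]: 192.168.10.50
    User Name [user]: ftpuser
    File Name [config.txt]: bay3_config.txt
    Password: ********
    configUpload complete

Store the uploaded file with the switch's recorded MAC and IP addressing information so a replacement switch can be restored with configDownload.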
Documentation (1 of 2)

* HP StorageWorks Brocade 4Gb SAN Switch for HP c-Class BladeSystem user guide, June 2006, PN AA-RWEBA-TE
* Brocade 4Gb SAN Switch for HP c-Class BladeSystem installation instructions, June 2006, PN 5697-5678
* Brocade Fabric OS 5.x administrator guide
* Brocade Fabric OS 5.x Advanced Web Tools administrator guide
* Pass-Thru installation instructions for HP c-Class BladeSystem, April 2006, PN 413280-021

Documentation (2 of 2)

* Brocade Fabric OS 5.x command reference guide
* Brocade Fabric OS 5.x diagnostics and system error messages reference guide
* Brocade Fabric OS 5.x Fabric Watch administrator guide
* Brocade Fabric OS 5.x MIB reference guide
* Brocade Secure Fabric OS administrator guide

Web sites

* A link to the Brocade 4Gb SAN Switch documents is found by going to www.hp.com, clicking Servers, and then clicking, under ProLiant, c-Class BladeSystem.
* A link to the Brocade 4Gb SAN Switch can also be found by going to www.hp.com, clicking Storage, clicking SAN Infrastructure, clicking Fibre Channel Switches, and then clicking, under B-Series Fabric Entry Level, Brocade 4Gb SAN Switch for c-Class BladeSystem.

Learning check

1. List four things you will need to configure a Brocade 4Gb SAN Switch.
2. What should you record before installing a Brocade 4Gb SAN Switch?
3. What determines where you need to install the interconnect modules?
