
FCoE - Design, Operations and Management Best Practices

BRKSAN-2047

Before We Get Started


Intermediate-level session focused on Unified Data Centre design using Fibre Channel over Ethernet
Prerequisites:
  Basic understanding of Fibre Channel and storage design
  Basic understanding of Ethernet and LAN design
  Basic understanding of the FCoE protocol and terminology

Other recommended sessions


Agenda
Why are we here?
Background Information: DCB Standard, FCoE Protocol Information, FCoE Building Blocks and Terminology
Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
Single Hop Designs
Multi-Hop Designs
FCoE Deployment Considerations
Questions


The Evolving Data Centre Access



The Access Layer is becoming more than just a port aggregator
Edge of the growing Layer 2 topology:
  Scaling of STP edge ports
  Virtual embedded switches
  vPC and loop-free designs
  Layer 2 multi-pathing (future)

Foundational element for Unified I/O and Unified Wire


DCB and multi-hop FCoE support
Enhanced multi-hop FCoE with E-NPV

Single Point for Access Management


VN-Tag and Port Extension: Nexus 2000 (current)
VSM and VN-Link (future)



[Figure: the Consolidated Nexus Edge Layer, a virtualized edge/access layer feeding the core/aggregation layer]


Why are we here?


Session Objectives:
  Understand the design requirements of a Unified Network
  Be able to design single-hop Unified Networks, available today, which meet the demands of both SAN and LAN networks
  Start the conversation between Network and Storage teams regarding consolidation and FCoE beyond the access layer
  Understand the operations and management aspects of a Unified Network


Agenda
Why are we here?
Background Information: DCB Standard, FCoE Protocol Information, FCoE Building Blocks and Terminology
Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
Single Hop Designs
Multi-Hop Designs
FCoE Deployment Considerations
Questions


Data Centre Evolution


IEEE DCB (Data Centre Bridging)

Feature / Standard                                   Standards Status
Priority Flow Control, IEEE 802.1Qbb (PFC)           PAR approved; editor Claudio DeSanti (Cisco); draft 1.0 published
Bandwidth Management, IEEE 802.1Qaz (ETS)            PAR approved; editor Craig Carlson (QLogic); draft 0.2 published
Data Center Bridging Exchange Protocol (DCBX)        Part of Bandwidth Management, IEEE 802.1Qaz

CEE (Converged Enhanced Ethernet) is an informal group of companies that submitted initial inputs to the DCB WGs.
** Nexus 5000 supports CEE-DCBX as well as previous generations (CIN-DCBX)

Priority Flow Control


FCoE Flow Control Mechanism
Enables lossless Ethernet using PAUSE based on a CoS as defined in 802.1p
When the link is congested, CoS values assigned to no-drop will be PAUSED
Traffic assigned to other CoS values continues to transmit, relying on upper-layer protocols for retransmission
Not only for FCoE traffic
[Figure: PFC on an Ethernet link compared with Fibre Channel B2B credits (R_RDY): the receiver PAUSEs one of eight virtual lanes, i.e. per-CoS transmit queues and receive buffers, while the other lanes keep forwarding]

Priority Flow Control


Operations: Switch-Level Configuration
Once feature fcoe is configured, two classes are created by default
class-fcoe is configured to be no-drop with an MTU of 2158
Best practice: use the default CoS value of 3 for FCoE/no-drop traffic
Can be changed through QoS class-map configuration
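For reference, a minimal Nexus 5000 sketch that mirrors the defaults created by feature fcoe (the policy name fcoe-nq is illustrative, and exact syntax varies by NX-OS release):

feature fcoe
!
policy-map type network-qos fcoe-nq
  class type network-qos class-fcoe
    pause no-drop        ! lossless (PFC) treatment for the FCoE class
    mtu 2158             ! sized for a full FCoE frame
  class type network-qos class-default
system qos
  service-policy type network-qos fcoe-nq

The result can be checked with show policy-map type network-qos.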

Priority Flow Control


Verifying Configurations
Checking the PFC settings on an interface: show interface priority-flow-control
Shows the ports where PFC is configured, the CoS value associated with PFC, and the PAUSE packets received and sent on each port
VL bmap = the bitmap of CoS values set for PFC:

CoS   Binary      VL bmap
0     00000001    1
1     00000010    2
2     00000100    4
3     00001000    8
4     00010000    16
5     00100000    32
6     01000000    64
7     10000000    128
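An illustrative (made-up) example of the output; per the table above, a VL bmap of (8) means PFC is operating on CoS 3:

switch# show interface priority-flow-control
============================================================
Port               Mode   Oper(VL bmap)   RxPPP      TxPPP
============================================================
Ethernet1/1        Auto   On  (8)         0          312
Ethernet1/2        Auto   On  (8)         0          0

RxPPP/TxPPP count the per-priority PAUSE frames received and sent on the port.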

Enhanced Transmission Selection


Bandwidth Management
Prevents a single traffic class from hogging all the bandwidth and starving other classes
When a given load doesn't fully utilize its allocated bandwidth, that bandwidth is available to other classes
Helps accommodate classes of a bursty nature

[Figure: ETS on a 10GE link, offered traffic vs realized utilization at t1/t2/t3: HPC, storage and LAN classes each offer roughly 3 Gb/s; when the HPC load drops to 2 Gb/s, its unused allocation is redistributed, e.g. storage realizing 4 Gb/s and LAN 5-6 Gb/s]

Enhanced Transmission Selection


Bandwidth Management
Once feature fcoe is configured, two classes are created by default
By default, each class is given 50% of the available bandwidth
A typical traditional server (e.g. 1Gig FC HBAs plus 1Gig Ethernet NICs) has equal bandwidth per traffic type
Best practice: FCoE and Ethernet each receive 50%
Can be changed through QoS settings when higher demands for certain traffic exist (e.g. HPC traffic, more Ethernet NICs)
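A minimal sketch of shifting the default 50/50 split on a Nexus 5000; the 60/40 numbers and the policy name are illustrative:

policy-map type queuing fcoe-queuing
  class type queuing class-fcoe
    bandwidth percent 60    ! guarantee for FCoE/no-drop traffic
  class type queuing class-default
    bandwidth percent 40    ! guarantee for the remaining LAN traffic
system qos
  service-policy type queuing output fcoe-queuing

Because ETS guarantees are minimums rather than caps, a class that under-uses its share leaves the remainder available to the other classes.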

Data Center Bridging eXchange


Control Protocol
Discovers the DCB capabilities of peer devices
Negotiates Ethernet capabilities: PFC, ETS, CoS values
Simplifies management of DCB nodes: allows configuration and distribution of parameters from one node to another
Responsible for logical link up/down signaling of Ethernet and Fibre Channel
Uses the Link Layer Discovery Protocol (LLDP), defined by IEEE 802.1AB, to exchange and discover DCB capabilities
DCBX negotiation failures result in:
  per-priority pause not enabled on CoS values
  the vfc not coming up when DCBX is used in an FCoE environment
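When a vfc stays down, two checks on the Nexus 5000 can help narrow the problem to DCBX (the second is an internal troubleshooting command whose availability and output vary by release, so treat it as an assumption):

switch# show lldp neighbors                      ! is the CNA seen via LLDP at all?
switch# show system internal dcbx info interface ethernet 1/1
                                                 ! inspect the DCBX TLVs (PFC/ETS) exchanged with the peer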


Agenda
Why are we here?
Background Information: DCB Standard, FCoE Protocol Information, FCoE Building Blocks and Terminology
Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
Single Hop Designs
Multi-Hop Designs
FCoE Deployment Considerations
Questions


Fibre Channel over Ethernet


FCoE Benefits
Mapping of FC frames over Ethernet
Enables FC to run on a lossless Data Center Ethernet network
Wire servers once
Fewer cables, adapters and switches
Software provisioning of I/O
Interoperates with existing SANs
No gateway (stateless)
Standardized June 3, 2009


Fibre Channel over Ethernet

Protocol Mapping
From a Fibre Channel standpoint it's FC connectivity over a new type of cable called Ethernet
From an Ethernet standpoint it's yet another ULP (Upper Layer Protocol) to be transported

Fibre Channel stack                FCoE stack
FC-4 ULP Mapping                   FC-4 ULP Mapping
FC-3 Generic Services              FC-3 Generic Services
FC-2 Framing & Flow Control        FC-2 Framing & Flow Control
FC-1 Encoding                      FCoE Logical End Point
FC-0 Physical Interface            Ethernet Media Access Control
                                   Ethernet Physical Layer


Unified Fabric
Fibre Channel over Ethernet (FCoE)
FCoE is Fibre Channel at the host and switch level
Easy to understand: same operational model
Aligned with the FC-BB-4 model, standardized in FC-BB-5
Same techniques of traffic management
Same management and security models
Completely based on the FC model:
Same host-to-switch and switch-to-switch behavior as FC (e.g. in-order delivery, FSPF load balancing)
Same constructs: WWNs, FCIDs, hard/soft zoning, DNS, RSCN


Fibre Channel over Ethernet

Data and Control Plane

FCoE itself:
Is the data plane protocol
Used to carry most of the FC frames and all the SCSI traffic
Uses a fabric-assigned MAC address (dynamic)

FIP (FCoE Initialization Protocol):
Is the control plane protocol
Used to discover the FC entities connected to an Ethernet cloud
Also used to log in to and log out from the FC fabric

The two protocols have:
Two different Ethertypes
Two different frame formats
Both are defined in FC-BB-5
http://www.cisco.biz/en/US/prod/collateral/switches/ps9441/ps9670/white_paper_c11-560403.html

Fibre Channel over Ethernet Protocol


FCoE Initialization Protocol (FIP)
The protocol used by FCoE-capable devices to discover other FCoE-capable devices within the Ethernet cloud
Enables FCoE adapters (CNAs) to discover FCoE switches (FCFs) on a VLAN (the FCoE VLAN)
Establishes a virtual link between the adapter and the FCF, or between two FCFs (VE_Ports), accomplished with a FLOGI
FIP frames use a different Ethertype from FCoE frames, enabling FIP snooping by DCB-capable Ethernet bridges
Builds the foundation for future multi-hop FCoE topologies:
Multi-hop refers to FCoE extending beyond a single hop or access switch
Today, multi-hop is achievable with a Nexus 4000 (FIP Snooping Bridge) connected to a Nexus 5000 (FCF)


Fibre Channel over Ethernet Protocol

FCoE Initialization Protocol (FIP)

Step 1: FCoE VLAN Discovery
The ENode (initiator) sends a multicast to the ALL_FCF_MAC address looking for the FCoE VLAN
FIP VLAN discovery frames use the native VLAN

Step 2: FCF Discovery
FIP sends a multicast solicitation to the ALL_FCF_MAC address on the FCoE VLAN to find the FCFs answering for that FCoE VLAN
FCFs respond with advertisements carrying their MAC address

Step 3: Fabric Login
FIP sends a FLOGI request to the FCF_MAC found in Step 2; the FCF answers with a FLOGI/FDISC accept
This establishes a virtual link between the host and the FCF; FC commands and their responses then flow as FCoE

** FIP does not carry any Fibre Channel frames



Fibre Channel over Ethernet Protocol

FCoE Initialization Protocol (FIP)
The FCoE VLAN is manually configured on the Nexus 5000
The FCF-MAC address is configured on the Nexus 5000 by default once feature fcoe has been configured
This is the MAC address returned in Step 2 of the FIP exchange
This MAC is used by the host to log in to the FCoE fabric

** FIP does not carry any Fibre Channel frames
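Putting the pieces together, a minimal Nexus 5000 sketch (VLAN, VSAN and interface numbers are illustrative):

feature fcoe                        ! also instantiates the default FCF-MAC
vlan 20
  fcoe vsan 2                       ! dedicate VLAN 20 to VSAN 2 FCoE traffic
interface ethernet 1/10
  switchport mode trunk
  switchport trunk allowed vlan 10,20
  spanning-tree port type edge trunk
interface vfc10
  bind interface ethernet 1/10      ! virtual FC interface riding the unified wire
  no shutdown
vsan database
  vsan 2 interface vfc10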



Fibre Channel over Ethernet Protocol

FCoE Initialization Protocol (FIP)
Verifying Step 3, the login process: show flogi database and show fcoe database show the logins and the associated FCIDs, xWWNs and FCoE MAC addresses
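Illustrative output (the WWNs, FCIDs and MACs below are made up):

switch# show flogi database
---------------------------------------------------------------------------
INTERFACE  VSAN   FCID       PORT NAME                NODE NAME
---------------------------------------------------------------------------
vfc10      2      0x0f0001   21:00:00:c0:dd:11:22:33  20:00:00:c0:dd:11:22:33

switch# show fcoe database
---------------------------------------------------------------------------
INTERFACE  FCID       PORT NAME                MAC ADDRESS
---------------------------------------------------------------------------
vfc10      0x0f0001   21:00:00:c0:dd:11:22:33  0e:fc:00:0f:00:01

Note how the FCoE MAC (FPMA) is simply the FC-MAP (0e:fc:00) prepended to the FCID (0f:00:01).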


Agenda
Why are we here?
Background Information: DCB Standard, FCoE Protocol Information, FCoE Building Blocks and Terminology
Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
Single Hop Designs
Multi-Hop Designs
FCoE Deployment Considerations
Questions


FCoE Building Blocks


The Acronyms Defined
FCF: Fibre Channel Forwarder (Nexus 5000, Nexus 7000, MDS 9000)
FPMA: Fabric Provided MAC Address, a unique MAC address assigned by an FCF to a single ENode
ENode: a Fibre Channel node that is able to transmit FCoE frames using one or more ENode MACs
FCoE Pass-Through: a DCB device capable of passing FCoE frames to an FCF (e.g. a FIP Snooping Bridge or an Ethernet N-Port Virtualizer)
Single-hop FCoE: running FCoE between the host and the first-hop access switch
Multi-hop FCoE: the extension of FCoE beyond a single hop, into the aggregation and core layers of the Data Centre network

ENode MAC Address

Fibre Channel over Ethernet Addressing Scheme
An ENode MAC is assigned for each FCID
The ENode MAC is composed of an FC-MAP and an FCID:
The FC-MAP (0E-FC-xx) forms the upper 24 bits of the ENode's MAC
The FCID (which carries the fabric's Domain ID) forms the lower 24 bits of the ENode's MAC
FCoE forwarding decisions are still made based on FSPF and the FCID within the ENode MAC
Example: FC-MAP 0E-FC-00 plus FCID 10.00.01 yields the FC-MAC 0E-FC-00-10-00-01

FCoE Building Blocks


Converged Network Adapter
Replaces multiple adapters per server, consolidating both Ethernet and FC on a single interface
Appears to the operating system as individual interfaces (NICs and HBAs): the FC driver is bound to the FC HBA PCI address, and the Ethernet driver is bound to the Ethernet NIC PCI address
First-generation CNAs support PFC and CIN-DCBX
Second-generation CNAs support PFC, CEE-DCBX and FIP, in a single-chip implementation

FCoE Building Blocks


Fibre Channel Forwarder
The FCF (Fibre Channel Forwarder) is the Fibre Channel forwarding element inside an FCoE switch
Fibre Channel logins (FLOGIs) happen at the FCF
Each FCF consumes a Domain ID
FCoE encapsulation/de-encapsulation happens within the FCF
Forwarding is based on FC information
An FCoE switch combines an Ethernet bridge and an FCF (e.g. FC Domain ID 15) behind its Ethernet and FC ports


FCoE Building Blocks


FCoE Port Types
Available NOW: the FCoE switch (FCF), with VF_Ports facing end-node VN_Ports
EagleHawk timeframe: FCF switches interconnected with VE_Ports (FCF-to-FCF ISLs)
EagleHawk+ timeframe: the E_NPV switch, with VF_Ports facing VN_Ports and a VNP_Port uplink toward an upstream FCF VF_Port



FCoE Building Blocks


The New Buzzword: Unified
Unified I/O: using Ethernet as the transport medium in all network environments, no longer needing separate cabling options for LAN and SAN networks
Unified Wire: a single DCB Ethernet link actively carrying both LAN and storage (FC/FCoE/NAS/iSCSI) traffic simultaneously
Unified Dedicated Wire: a single DCB Ethernet link capable of carrying all traffic types but actively dedicated to a single traffic type for traffic-engineering purposes
Unified Fabric: an Ethernet network made up of Unified Wires everywhere; all protocols, network and storage, traverse all links simultaneously


FCoE Building Blocks


Unified Wire vs Unified Dedicated Wire
Unified Wire to the access switch:
  Cost savings from the reduction in required equipment
  Cable once for all servers to have access to both LAN and SAN networks
Unified Dedicated Wire from access to aggregation:
  Separate links for SAN and LAN traffic; both links use the same I/O (10GE)
  Advanced Ethernet features can be applied to the LAN links
  Maintains fabric isolation


FCoE Building Blocks


The Unified Fabric: Definition
A single network: all links carry all types of traffic simultaneously, any storage and network protocols
Possible reduction of equipment, leading to cost savings
Abolition of Fabric A and Fabric B: a single SAN fabric with redundant fabric services
Ethernet and storage traffic everywhere, across access, aggregation (vPC) and core


Unified Technology
LAN and SAN networks share the same Unified I/O building blocks, switches and cabling; this maintains existing operations, management and troubleshooting practices
Native Ethernet LAN: access/aggregation/core with vPC, and NIC/CNA EtherChannel at the edge
Fibre Channel over Ethernet SAN: Fabric A / Fabric B, with SAN multi-pathing at the edge

Agenda
Why are we here?
Background Information: DCB Standard, FCoE Protocol Information, FCoE Building Blocks
Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
Single Hop Designs
Multi-Hop Designs
FCoE Deployment Considerations
Questions


The Design Requirements


Ethernet vs Fibre Channel
Ethernet is non-deterministic:
  Flow control is destination-based
  Relies on TCP drop-retransmission / sliding window
Fibre Channel is deterministic:
  Flow control is source-based (B2B credits)
  Services are fabric-integrated (no loop concept)


The Design Requirements


Classical Ethernet
Ethernet/IP:
  The goal is to provide any-to-any connectivity
  Unaware of packet loss; relies on ULPs for retransmission and windowing
  Provides the transport without worrying about the services
  East-west vs north-south traffic ratios are undefined
  Services are provided by upper layers
Fabric topology and traffic flows are highly flexible; client/server relationships are not pre-defined
Network design has been optimized for:
  Control-protocol interaction (STP, OSPF, EIGRP, the L2/L3 boundary, ...)
  High availability from a transport perspective, by connecting nodes in mesh architectures
  High availability for services, implemented separately

The Design Requirements


LAN Design: Access/Aggregation/Core
Servers typically dual-homed to two or more access switches
LAN switches have redundant connections to the next layer
Distribution and core can be collapsed into a single box
The L2/L3 boundary is typically deployed in the aggregation layer:
  Spanning tree or advanced L2 technologies (vPC) are used to prevent loops within the L2 boundary
  L3 routes are summarized to the core
Services are deployed at the L2/L3 boundary of the network (load balancing, firewall, NAM, etc.)

The Design Requirements


Classical Fibre Channel
Fibre Channel SAN:
  Transport and services are on the same layer, in the same devices; every switch runs the fabric services (DNS, zoning, FSPF, RSCN)
  Well-defined end-device relationships (initiators and targets); client/server relationships are pre-defined
  Does not tolerate packet drop; requires lossless transport
  Only north-south traffic; east-west traffic is mostly irrelevant
Fabric topology, services and traffic flows are structured
Network designs optimized for scale and availability:
  High availability of network services provided through a dual-fabric architecture
  SAN A and SAN B: physically separate and redundant fabrics
  Strict change isolation; end-to-end driver certification

The Design Requirements


SAN Design: Two-Tier Topology
Edge-core topology:
  Servers connect to the edge switches
  Storage devices connect to one or more core switches
  Core switches provide storage services to one or more edge switches, thus servicing more servers in the fabric
  ISLs have to be designed so that the overall fan-in ratio of servers to storage and the overall end-to-end oversubscription are maintained
  HA achieved with two physically separate, but identical, redundant SAN fabrics

The Design Requirements


SAN Design: Three-Tier Topology
Edge-core-edge topology:
  For environments where future growth of the network has the number of storage devices exceeding the number of ports available at the core switches
  One set of edge switches is dedicated to server connectivity and another set to storage devices
  The extra edge can also be a services edge for advanced network services
  The core is for transport only and rarely accommodates end nodes
  HA achieved with dual fabrics


The Design Requirements


Classical Ethernet + Classical Fibre Channel == ??
Question: do we build an FC network on top of an Ethernet cloud, or an Ethernet network on top of a Fibre Channel fabric?
A Unified Fabric design has to incorporate the superset of requirements:
  Network: lossless and lossful topologies
  Transport: undefined (any-to-any) and defined (one-to-one) flows
  High availability: redundant network topology (mesh/full mesh) and physically separate redundant fabrics
  Bandwidth: FC fan-in and oversubscription ratios, and Ethernet oversubscription
  Security: FC controls (zoning, port security, ...) and IP controls (CISF, ACLs, ...)
  Manageability and visibility: hop-by-hop visibility for FC, and the cloud for Ethernet


The Design Requirements


Classical Ethernet + Classical Fibre Channel == ?? Can't we just fold down the dotted line??
[Figure: the LAN access/aggregation/core topology and the dual SAN core fabrics drawn side by side, with "fold here" lines suggesting collapsing the two designs onto each other]

The Design Requirements


FCoE Design Objectives
Expand the reach of Unified I/O:
  Blade server environments (e.g. Nexus 4000 / UCS)
  Core and backbone links and devices
SAN scalability:
  Build up the edge, from a 20% attach rate up to 100%
  Allow LAN and SAN to scale independently
Introduce support for native FCoE storage arrays
Preserve SAN design best practices:
  Oversubscription, fan-in ratio and hop-count practices honored
Preserve SAN and LAN management models:
  Deterministic management of FC flows through all devices; no opaque LAN clouds transporting SAN traffic


Agenda
Why are we here?
Background Information: DCB Standard, FCoE Protocol Information, FCoE Building Blocks
Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
Single Hop Designs
Multi-Hop Designs
FCoE Deployment Considerations
Questions


Single Hop Design


Today's Solution
Host connected over a unified wire to the first-hop access switch
The access switch (Nexus 5000) is a DCB-capable switch acting as the FCF
Fibre Channel ports on the access switch can be in NPV or switch mode for native FC traffic
DCBX is used to negotiate the enhanced Ethernet capabilities
FIP is used to negotiate the FCoE capabilities as well as the host login process
FCoE runs from the host (ENode with CNA) to the access-switch FCF; native Ethernet and native FC break off at the access layer toward the Ethernet fabric and the FC fabric with its FC targets

Single Hop Design


Unified Wire at the Access
The first phase of the Unified Fabric evolution: design focused on the fabric edge
Unifies the LAN access and the SAN edge by using FCoE
Consolidates adapters, cabling and switching at the first hop in the fabrics
The Unified Edge supports multiple LAN and SAN topology options:
  Virtualized Data Center LAN designs
  Fibre Channel edge with direct-attached initiators and targets
  Fibre Channel edge-core and edge-core-edge designs
  Fibre Channel NPV edge designs

Single Hop Design


The CNA Point of View
The Converged Network Adapter (CNA) presents two PCI addresses to the operating system (OS): the FC driver is bound to the FC HBA PCI address, and the Ethernet driver to the Ethernet NIC PCI address
The OS loads two unique sets of drivers and manages two unique application topologies
The server participates in both topologies separately: two stacks, and thus two views of the same unified wire
SAN multi-pathing provides failover between the two fabrics (SAN A and SAN B)
NIC teaming provides failover within the same fabric (VLAN)
The Nexus unified edge participates in both distinct FC and IP core topologies; the unified wire is shared by both

Single Hop Design


The CNA Point of View
In this first phase we are limited to direct-attached CNAs at the access layer
Generation 1 CNAs:
  Utilize the Cisco, Intel, Nuova Data Center Bridging Exchange protocol (CIN-DCBX)
  Only support direct attachment of a VN_Port to a VF_Port over the unified wire
Generation 2 CNAs:
  Utilize the Converged Enhanced Ethernet Data Center Bridging Exchange protocol (CEE-DCBX)
  Utilize the FCoE Initialization Protocol (FIP) as defined by the T11 FC-BB-5 specification
  Support both direct and multi-hop attachment

Single Hop Design


Attaching an Initiator
The physical link is brought up (today this requires 10GE)
DCBX negotiation discovers DCB-capable devices and negotiates lossless Ethernet capabilities/configurations
The FIP process discovers and negotiates FCoE devices and characteristics:
  FCoE VLAN discovery
  FCF discovery on the specific FCoE VLAN
  Fabric login, which builds the logical wire from the end node to the FCF (VN_Port to VF_Port)
FCoE traffic then flows from host to target; LAN traffic flows over the same unified wire

Single Hop Design


Two Distinct Topologies
Maintaining the two distinct edge/access topologies:
  Isolated SAN edge switches: SAN A and SAN B
  LAN access switches connected to the same LAN fabric and carrying the same VLANs
The server participates in both topologies but may have different high-availability approaches:
  SAN multi-pathing provides failover between the two fabrics
  NIC teaming provides failover within the same fabric (VLAN)
Three concurrent topologies: LAN, SAN A and SAN B (e.g. VLAN 10 everywhere for LAN, VLAN 20/VSAN 2 only toward Fabric A, VLAN 30/VSAN 3 only toward Fabric B)

Single Hop Design


The FCoE VLAN
A VLAN is dedicated to every VSAN in the fabric
FIP is used to discover the FCoE VLAN and signal it to the hosts
Trunking is not required in the host driver; all FCoE frames are tagged by the CNA
FCoE VLANs must not be configured on Ethernet links that are not designated for FCoE
Maintains isolated edge switches for SAN A and B, and separate LAN switches for NIC 1 and NIC 2 (standard NIC teaming)
! VLAN 20 is dedicated for VSAN 2 FCoE traffic
(config)# vlan 20
(config-vlan)# fcoe vsan 2


Single Hop Design


The FCoE VLAN
To maintain the integrity of FC forwarding over FCoE, FCoE VLANs are treated differently than LAN VLANs: no flooding, no MAC learning, no broadcasts, etc.
The FCoE VLAN must not be configured as a native VLAN (FIP uses the native VLAN)
Separate FCoE VLANs must be used for FCoE in SAN A and SAN B
Unified Wires must be configured as trunk ports and STP edge ports

! VLAN 20 is dedicated for VSAN 2 FCoE traffic
(config)# vlan 20
(config-vlan)# fcoe vsan 2


Single Hop Design


The FCoE VLAN and STP
FCoE Fabric A will have a different VLAN topology than FCoE Fabric B, and both differ from the LAN fabric
PVRST+ allows a unique topology per VLAN
MST requires that all switches in the same region have the same mapping of VLANs to instances; MST does not require that all VLANs be defined in all switches
A separate MST instance must be used for the FCoE VLANs
Recommended: three separate instances for native Ethernet VLANs, SAN A VLANs and SAN B VLANs, as in the configuration below
spanning-tree mst configuration
  name FCoE-Fabric
  revision 5
  instance 5 vlan 1-19,40-3967,4048-4093
  instance 10 vlan 20-29
  instance 15 vlan 30-39

Single Hop Design


Unified Wires and MCEC
Optimal Layer 2 LAN design often leverages Multi-Chassis EtherChannel (MCEC)
Nexus utilizes virtual PortChannel (vPC) to enable MCEC, either between switches or to 802.3ad-attached servers
MCEC provides network-based load sharing and redundancy without introducing Layer 2 loops in the topology
MCEC results in diverging LAN and SAN high-availability topologies:
  FC maintains separate SAN A and SAN B topologies
  The LAN utilizes a single logical topology

Single Hop Design


Unified Wires and MCEC
In vPC-enabled topologies, specific design and forwarding rules must be followed to ensure correct forwarding behavior for SAN traffic:
  With the NX-OS 4.1(3) releases, a vfc interface can only be associated with a vPC EtherChannel that has one (1) CNA port attached to each edge Nexus 5000 switch
  While the port channel is the same on N5K-1 and N5K-2, the FCoE VLANs are different (e.g. VLAN 10,20 on FCF-A and VLAN 10,30 on FCF-B; only VLAN 10 is common)
  vPC configuration works with Gen-2, FIP-enabled CNAs ONLY
  FCoE VLANs are not carried on the vPC peer link
  FCoE and FIP Ethertypes are not forwarded over the vPC peer link



Single Hop Design


Unsupported Topologies
A dual-port CNA (FC initiator) connected via an EtherChannel to a single edge switch is unsupported:
  A vfc interface can only be bound to a port channel with one local interface
  Not consistent with Fibre Channel high-availability design requirements (no isolation of SAN A and SAN B)
If SAN design evolves to a shared physical fabric with only VSAN isolation for SAN A and B this could change (currently this appears to be a big "if")
ISLs between the Nexus 5000 access switches also break the SAN HA requirements

Single Hop Design


Introduction of the 10Gig/FCoE Fabric Extender (Nexus 2232)
32 server-facing 10Gig/FCoE ports
8 10Gig/FCoE uplink ports for connections to the Nexus 5000
T11-standards-based FIP/FCoE support on all ports
A remote line card of the Nexus 5000 (FEX-2232): management and configuration are handled by the Nexus 5000
Support for Converged Enhanced Ethernet, including PFC
Part of the Cisco Nexus 2000 Fabric Extender family

Single Hop Design


Introduction of the 10Gig/FCoE Fabric Extender
Parent switch: acts as the combined supervisor and switching fabric for the virtual switch
Fabric links, the Network Interface Ports (NIFs): extend the switching fabric to the remote line card (they connect the Nexus 5000 to the Fabric Extender)
Host Interfaces (HIFs): the server-facing ports on the FEX
Fabric connectivity between the Nexus 5000 and the Nexus 2000 (FEX) can leverage either pinning or port channels
dc11-5020-1# show interface fex-fabric
     Fabric     Fabric       Fex                        FEX
Fex  Port       Port State   Uplink  Model              Serial
------------------------------------------------------------------
100  Eth1/17    Active       1       N2K-C2148T-1GE     JAF1311AFLL
100  Eth1/18    Active       2       N2K-C2148T-1GE     JAF1311AFLL
100  Eth1/19    Active       3       N2K-C2148T-1GE     JAF1311AFLL
100  Eth1/20    Active       4       N2K-C2148T-1GE     JAF1311AFLL
101  Eth1/21    Active       1       N2K-C2148T-1GE     JAF1311AFMT
101  Eth1/22    Active       2       N2K-C2148T-1GE     JAF1311AFMT
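For context, a minimal sketch of associating a FEX with its parent Nexus 5000 over a fabric port channel (FEX and interface numbers are illustrative):

feature fex
fex 100
  pinning max-links 1            ! one logical fabric link when port-channeled
interface ethernet 1/17-20
  switchport mode fex-fabric
  fex associate 100
  channel-group 100
interface port-channel 100
  switchport mode fex-fabric
  fex associate 100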

Single Hop Design


Extending the Unified Access
The FEX-2232 extends the reach of 10Gig Ethernet/FCoE as a distributed line card (ToR)
Support for up to 384 10Gig/FCoE-attached hosts managed by a single Nexus 5000 (the number at FCS may vary)
The Nexus 5000 is the FCF, or can be in an FCoE pass-through mode (when supported)
In the presence of FCoE, the Nexus 2232 needs to be single-homed to the upstream Nexus 5000 (straight-through N2K) to ensure SAN A and SAN B isolation
Requires FIP-enabled CNAs



Single Hop Design


Extending the FCoE Edge: Nexus 2232
The server Ethernet driver connects to the FEX in NIC teaming (AFT, TLB) or with vPC (802.3ad)
FCoE runs over a vPC member port with a single link from server to FEX
The FEX is single-homed to the upstream Nexus 5000
FEX fabric links can be connected to the Nexus 5000 either as a single-homed port channel (option 1) or as individual links with static pinning (option 2); oversubscribed 4:1
Server option 1: FCoE on individual links, with Ethernet traffic active/standby
Server option 2: FCoE on a vPC member port channel with a single link
Consistent with separate LAN access and SAN edge topologies
Requires FIP-enabled CNAs



Single Hop Design


Extending the FCoE Edge: Nexus 2232
The Nexus 2232 cannot be configured in a dual-homed configuration (vPC between two N5Ks) when configured to support FCoE-attached servers:
  An MCEC port channel would not keep SAN A and SAN B traffic isolated
  The Nexus 2000 is not supported with dedicated FCoE and dedicated IP upstream fabric links
The Nexus 2232 can currently only be connected to the Nexus 5000 when configured to support FCoE-attached servers
The Nexus 7000 will support the Nexus 2000 in Ethernet-only mode in CY2010 (support for FCoE on FEX is targeted for CY2011 on next-generation N7K line cards)

Agenda
Why are we here?
Background Information: DCB Standard, FCoE Protocol Information, FCoE Building Blocks
Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
Single Hop Designs
Multi-Hop Designs
FCoE Deployment Considerations
Questions


What is NPIV?
N-Port ID Virtualization (NPIV) provides a means to assign multiple FCIDs to a single N_Port:
  Allows multiple applications (e.g. email, web, file services) to share the same Fibre Channel adapter port, each I/O stream getting its own N_Port_ID
  Different pWWNs allow access control, zoning and port security to be implemented at the application level
  Usage applies to applications such as VMware, MS Virtual Server and Citrix
The application server's single N_Port logs in through an F_Port on the NPIV-enabled core switch

What is NPV?
N-Port Virtualizer (NPV) utilizes NPIV functionality to allow a switch to act like a server, performing multiple logins through a single physical link
Physical servers connected to the NPV switch log in to the upstream NPIV core switch:
  The physical uplink (NP_Port) from the NPV switch to the FC NPIV core switch performs the actual FLOGI
  Subsequent logins are converted (proxied) to FDISCs to log in to the upstream FC switch
No local switching is done on an FC switch in NPV mode
An FC edge switch in NPV mode does not take up a Domain ID
Scalability is dependent on the FC login limitation (~10K per fabric on MDS)
NPV mode is available on the Nexus 5000, MDS 91xx, MDS blade switches and the UCS Fabric Interconnect
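A minimal sketch of the two sides (interface numbers illustrative); note that enabling feature npv write-erases the configuration and reloads the switch:

! Core switch (e.g. MDS, or Nexus 5000 in switch mode)
feature npiv                 ! accept multiple logins (FLOGI + FDISCs) per F_Port

! Edge switch in NPV mode (e.g. Nexus 5000)
feature npv                  ! triggers a configuration erase and reload
interface fc2/1
  switchport mode NP         ! proxy uplink toward the core F_Port
  no shutdown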

Multi-Hop Design

Considerations for FCoE Multi-hop
What design considerations do we have when extending FCoE beyond the Unified Edge?
  High availability for both LAN and SAN
  Oversubscription for SAN and LAN
  Ethernet Layer 2 and STP design
Where does Unified Wire make sense over Unified Dedicated Wire?
  Unified Wire provides for sharing of a single link for both FC and Ethernet traffic


Multi-Hop Design

The Need for FCoE Pass-Through
Growing FCoE fabrics is achieved by connecting multiple FCoE-capable devices together
Every FCF consumes a Domain ID, so a fabric built only from FCFs runs into Domain ID limits; pass-through devices (DCB bridges with FIP snooping) extend the fabric without adding FCFs


Multi-Hop Design

FCoE Pass-Through Options
Multi-hop FCoE networks allow FCoE traffic to extend past the access layer (first hop)
In multi-hop FCoE, the role of the transit DCB-capable Ethernet bridge needs to be evaluated:
  Avoid Domain ID exhaustion
  Ease management
FIP Snooping is the minimum requirement suggested in FC-BB-5
Ethernet NPV (E-NPV) is a new capability intended to solve a number of design and management challenges


Multi-Hop Design

FIP Snooping
What is FIP Snooping?
  Efficient, automatic configuration of the ACLs used to lock down the forwarding path, accomplished by snooping the FIP packets going from the CNA to the FCF
Why FIP Snooping?
  Security: protection from MAC address spoofing of FCoE end devices (ENodes)
  Fibre Channel links are point-to-point; Ethernet bridges can utilize ACLs to provide the equivalent path control (the equivalent of point-to-point)
Support for FIP Snooping?
  Nexus 4000 (blade switch for IBM BladeCenter H)

Multi-Hop Design

Extending FCoE with FIP Snooping
FIP Snooping on the Nexus 4000:
  Security: protection from MAC address spoofing of FCoE end devices (ENodes)
  Fibre Channel links are point-to-point; Ethernet bridges can utilize ACLs to provide the equivalent path control
  The FIP protocol allows for efficient, automatic configuration of the ACLs necessary to lock down the forwarding path (FIP Snooping)
  Example: a host spoofing the FCF MAC 0E.FC.00.DD.EE.FF is blocked, while the legitimate ENode MAC 0E.FC.00.07.08.09 is allowed through
Ethernet NPV (E-NPV), future:
  Intelligent proxying of FIP functions between a CNA and an FCF
  Added control over FCF logins/mappings and load balancing



Multi-Hop Design

Ethernet NPV Bridge
On the control plane (FIP Ethertype), an Ethernet NPV bridge improves over a "FIP Snooping Bridge" by intelligently proxying FIP functions between a CNA and an FCF:
  It takes control of how a live network will build FCoE connectivity
  It makes the connectivity very predictable, without the need for an FCF at the next hop from the CNA
On the data plane (FCoE Ethertype), an Ethernet NPV bridge offers more ways to engineer traffic between CNA-facing ports and FCF-facing ports
An Ethernet NPV bridge knows nothing about Fibre Channel and can't parse packets with the FCoE Ethertype


Multi-Hop Design

Ethernet NPV Bridge
Proxies FIP functions between a CNA and an FCF:
  FCoE VLAN configuration and assignment
  FCF assignment
E-NPV load-balances logins from the CNAs evenly across the available FCF uplink ports
E-NPV will take the VSAN into account when mapping or pinning logins from a CNA to an FCF uplink
Operations and management processes are in line with today's SAN-admin practices, similar to NPV in a native Fibre Channel network
** Name subject to change



Multi-Hop Design

Ethernet NPV: ENode Login Process
E-NPV is an FCoE pass-through device:
  All FCoE switching is performed at the upstream FCF
  Addressing is passed out by the upstream FCF: the Domain ID and FC-MAP come from the FCF
The host's FLOGI enters the E-NPV bridge on a VF_Port and is relayed out the VNP_Port to the FCF's VF_Port
E-NPV does not consume a Domain ID, so it adds FCoE connectivity to more hosts without running into the Domain ID issue
Less expensive; consistent management
E-NPV is "FIP Snooping plus"

Multi-Hop Design

Extending FCoE with FIP Snooping
The Nexus 4000 is a Unified Fabric-capable blade switch: a DCB-enabled FIP Snooping Bridge
Dual-topology requirements remain for FCoE multi-hop:
  The server's IP connection to the Nexus 4000 is active/standby; MCEC is not currently supported from the blade server to the Nexus 4000
  Option 1: Unified Dedicated Wires from the Nexus 4000 to the Nexus 5000
  Option 2: a single-homed Unified Wire port channel from the Nexus 4000 to the Nexus 5000
The Nexus 5000s remain the FCFs for SAN A and SAN B; the blade servers attach through mezzanine Converged Network Adapters

Multi-Hop Design

Extending FCoE with VE_Ports
Extending FCoE Fibre Channel fabrics beyond direct-attached initiators can be achieved in two basic ways:
  Extend the Unified Edge (stage 1): add DCB-enabled Ethernet switches (DCB + FIP Snooping Bridges) between the VN and VF ports, stretching the link between the VN_Port and the VF_Port
  Extend Unified Fabric capabilities into the SAN core: leverage FCoE wires between Fibre Channel switches (VE_Ports), e.g. using FCoE for the ISL between a Nexus 5000 FCF and an MDS 9000 FCF


Multi-Hop Design

Extending FCoE with Ethernet NPV
Two basic design options are possible when we deploy any FCoE multi-hop configuration:
  Option 1, Unified Dedicated Wire: allows MCEC for IP/Ethernet, with dedicated FCoE links for storage
  Option 2, Unified Wire: leverages server-side failover mechanisms for both SAN and LAN, and allows for Unified Wire beyond the server-to-first-device link
In both options, the E-NPV devices sit between the servers and the Nexus 5000 FCFs for SAN A and SAN B


Multi-Hop Design

Unsupported Topologies
SAN and LAN high-availability design requirements are not always identical
An optimal Layer 2 LAN design may not meet FC high-availability and operational design requirements
Features such as vPC and MCEC are not viable, and not supported, beyond the direct-attached server: FIP and FCoE frames would be load-shared over the MCEC on a per-flow basis, giving no SAN A / SAN B isolation
The server has two stacks and manages two topologies; the Layer 2 network has a single topology
L2MP and TRILL provide options to change the design paradigm and come up with potential solutions


Agenda
Why are we here?
Background Information: DCB Standard, FCoE Protocol Information, FCoE Building Blocks
Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
Single Hop Designs
Multi-Hop Designs
FCoE Deployment Considerations
Questions


FCoE Deployment Considerations


Dedicated Aggregation/Core Devices
Where is it efficient to leverage unified wire, i.e. links shared by both SAN and LAN traffic?
  At the edge of the fabric, the volume of end nodes allows for a greater degree of sharing for LAN and SAN
  In the core we will not reduce the number of links: we will maintain either separate FC or FCoE links to the SAN core, and Ethernet links to the LAN core
LAN and SAN HA models are very different (and not fully compatible)
FC and FCoE are prone to head-of-line blocking (HOLB) in the network, and therefore we are limited in the physical topologies we can build
  e.g. 10 x 10G uplinks to the LAN aggregation would require 10 x 10G links to a next-hop SAN core (with targets attached): no savings, actually spending more than direct uplinks to the SAN core
Targets are attached to the SAN core (the LAN aggregation and the SAN core have different topology functions)
Where is it more beneficial to deploy two cores, SAN and LAN, over a unified core topology?

FCoE Deployment Considerations


Migration Strategy for FCoE
Migration to 10G FCoE in place of 4/8G FC links (Ethernet price-per-bit economics)
Edge switch running as an FCF, with VE_Ports connecting to the FCF on the core switch (e.g. Nexus 5000 FCF to MDS 9000 FCF)
Must be careful of Domain ID creep
FSPF forwarding for FCoE traffic is end to end
Hosts log in to the FCF to which they are attached (the access FCF)
Storage devices log in to the FCF at the core/storage edge
Maintains the HA requirements from both the LAN and SAN perspectives

FCoE Deployment Considerations


Migration Strategy for FCoE
Migration to 10G FCoE in place of 4/8G FC links (Ethernet price-per-bit economics)
Edge switch running either as an FCF in NPV mode or in E-NPV mode, with the FCF migrating to the SAN core
Ethernet NPV (E-NPV) is a new construct intended to solve a number of system-management problems
Using E-NPV alleviates the Domain ID issue
HA planning is required on the SAN side: does losing a core switch mean the loss of a whole fabric?


FCoE Deployment Considerations


Shared Aggregation/Core Devices
Does passing FCoE traffic through a larger aggregation point make sense?
Multiple links are required to support the HA models
A 1:1 ratio of access-to-aggregation links to aggregation-to-SAN-core links is required unless the E-NPV FCoE uplink is overprovisioned
The SAN is more vulnerable to HOLB, so plan for appropriate capacity in any core ISL: congestion on the aggregation-to-core links will head-of-line block all attached edge devices
When are direct edge-to-core links for FCoE more cost effective than adding another hop?
A smaller edge device is more likely to be able to use under-provisioned uplinks


FCoE Deployment Considerations


Shared Aggregation/Core Devices
LAN and SAN network designs have different requirements; factors that will influence this use case:
  Port density
  Operational roles and change management
  Storage device types (e.g. direct-attach FCoE targets; multiple VDCs separating the FCoE SAN from the LAN aggregation and LAN core)
Potentially viable for smaller environments
Larger environments will need dedicated FCoE SAN devices providing target ports:
  Use connections to a SAN
  Use a storage edge of other FCoE/DCB-capable devices

FCoE Deployment Considerations


Dedicated Aggregation/Core Devices
Topology will vary based on scale (single vs multiple tiers)
The architecture as defined for product development has a dual core: dedicated SAN and LAN cores
Question: where is the demarcation between Unified Wire and Unified Fabric?
As the topology grows there is less Unified Wire; in all practical terms the edge is the unified point for LAN and SAN (not the core/aggregation)
In smaller topologies, where core and edge merge, everything collapses, but the essential design elements remain


Virtualized Access Switch - FCoE


FCoE: Unified Wires at the Edge
MCEC results in diverging LAN and SAN high-availability topologies:
  FC maintains separate SAN A and SAN B topologies
  The LAN utilizes a single logical topology
In vPC-enabled topologies, specific design and forwarding rules must be followed to ensure correct forwarding behavior for SAN traffic:
  While the port channel is the same on N5K-1 and N5K-2, the FCoE VLANs are different; the MCEC carries IP only (e.g. VLAN 10), while each FCoE VLAN stays on its own peer
  vPC configuration works with Gen-2, FIP-enabled CNAs ONLY
  FCoE VLANs are not carried on the vPC peer link
  FCoE and FIP Ethertypes are not forwarded over the vPC peer link



Virtualized Access Switch - FCoE


Extending FCoE: Nexus 2232
The FEX-2232 extends the reach of 10Gig Ethernet/FCoE as a distributed line card (ToR)
Support for up to 384 10Gig/FCoE-attached hosts managed by a single Nexus 5000, acting as the FCF or as an E-NPV device
The Nexus 5000 is the FCF, or can be in "FIP Snooping plus" (E-NPV) mode when supported
Currently the Nexus 2232 needs to be single-homed to the upstream Nexus 5000 (straight-through N2K) to ensure SAN A and SAN B isolation
The server Ethernet driver connects to the FEX in NIC teaming (AFT, TLB) or with vPC (802.3ad)
Fabric links: option 1, a single-homed port channel; option 2, static pinning
Server option 1: FCoE on individual links, with Ethernet traffic active/standby; server option 2: FCoE on a vPC member port channel with a single link
Requires FIP-enabled CNAs



FCoE Multi-Tier Fabric Design


Extending FCoE past the Unified Edge
Extending FCoE Fibre Channel fabrics beyond direct-attached initiators can be achieved in two basic ways:
  Extend the Unified Edge: add DCB-enabled Ethernet switches (DCB + FIP Snooping Bridges) between the VN and VF ports, stretching the link between the VN_Port and the VF_Port
  Extend Unified Fabric capabilities into the SAN core: leverage FCoE wires between Fibre Channel switches in switch mode (VE_Ports), using FCoE for the ISLs
What design considerations do we have when extending FCoE beyond the edge?
  High availability
  Oversubscription for SAN and LAN
  Ethernet Layer 2 and STP design
Please see session BRKSAN-2047, Storage and the Unified Fabric, for more information on FCoE

Unified Fabric and HA Design


Extending FCoE past the Edge: Current State
The Nexus 4000 is a Unified Fabric-capable blade switch: a DCB-enabled FIP Snooping Bridge
Dual-topology requirements remain for FCoE multi-hop:
  The server's IP connection to the Nexus 4000 is active/standby; vPC is not currently supported from the blade server to the Nexus 4000
  Option 1: separate, dedicated FCoE links run from the Nexus 4000 to the Nexus 5000 FCFs
  Option 2: a single-homed port channel is supported for N4K-to-N5K FCoE uplinks (single-homed Unified Wire)
Blade servers attach through mezzanine Converged Network Adapters


Virtualized Access Switch - FCoE


Larger Fabric Multi-Hop Topologies
Multi-hop edge/core/edge topology
Core SAN switches supporting FCoE:
  N7K with DCB/FCoE line cards
  MDS with FCoE line cards (Sup2A)
Edge FC switches supporting either:
  N5K in E-NPV mode with FCoE uplinks to the FCoE-enabled core (VNP to VF)
  N5K or N7K as an FC switch in switch mode with FCoE ISL uplinks (VE to VE)
Scaling of the fabric (FLOGIs, ...) will most likely drive the selection of which mode to deploy
Servers, FCoE-attached storage and FC-attached storage hang off the FCoE-enabled fabric switches
Please see session BRKSAN-2047, Storage and the Unified Fabric, for more information on FCoE

88

Evolution of the Data Centre Architecture


Virtualized Access Layer
The Evolving Data Centre Access: where is the edge?
Phase 1: Physical Virtualization (Nexus 2000)
Decoupling Layer 1 and Layer 2
vPC redundancy in the access
Design considerations
Phase 2: Hypervisor Virtualized Switching (Nexus 1000v)
Components of Nexus 1000v
Design considerations
Phase 3: Unifying the Fabric (Nexus & FCoE)
Integrating the Unified Compute Fabric
The Virtualized Access Layer

Evolution of the DC Access Architecture


UCS 6100 End Host Mode
The UCS Fabric Interconnect supports two modes of operation:
Switch Mode
End Host Mode (recommended)
In End Host Mode the Fabric Interconnects don't function like regular LAN switches:
They don't forward frames based on destination MAC addresses
They don't run spanning tree!
They don't learn MAC addresses from external LAN switches
Forwarding is based on server-to-uplink pinning
The Fabric Interconnect acts as a true Layer 2 stub device and never reflects traffic back upstream
Loop-free topology without STP

[Diagram: Aggregation/Core and Access/Edge layers running Spanning Tree (Rapid PVST+ or MST), with spanning tree edge ports facing the 6100s in End Host Mode and the blade chassis below them.]

FCoE Multi-Tier
Potential Migration for UCS
FCoE design will follow the same evolution
Should the 6100 become a fabric switch? Technically viable, but does it add too much operational complexity (e.g. software certification cycles, operational change management, ...)?
Migration from NPV to E-NPV mode
Again the key question is where the demarcation sits between Unified Wire and Unified Fabric
Note: 6100 Fabric HA is required due to the lack of vPC uplinks when running a Unified Wire from the 6100 to the next hop
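For reference, a minimal sketch of an edge device in E-NPV mode with an FCoE uplink (VNP_Port) toward the core FCF; the feature name and numbers follow Nexus 5000 NX-OS conventions and are illustrative assumptions for other platforms:

feature fcoe-npv                   ! E-NPV: FCoE NPV mode, no local fabric services

vlan 1100
  fcoe vsan 100

interface vfc 130
  bind interface ethernet 1/30     ! DCB uplink toward the core FCF
  switchport mode NP               ! VNP_Port proxying fabric logins upstream
  no shutdown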
[Diagram: a dedicated SAN (SAN A/SAN B) and LAN core. Core FCFs carry VLANs 10,20 and 10,30 down to E-NPV edge devices in front of the blade chassis.]


FCoE Multi-Tier
Potential Migration for UCS
Smaller scale: what do we collapse to?
What is the migration path to FCoE targets?
Again the key question is where the demarcation sits between Unified Wire and Unified Fabric
If we direct attach targets, do we need to support the 6100 as a switch in a multi-switch fabric?
Is Option 1 viable only for standalone implementations?
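A minimal sketch of option 1, direct attaching an FCoE target to the FCF; assuming NX-OS-style syntax with hypothetical numbers, the target-facing vfc is simply a VF_Port, just as for an initiator:

interface ethernet 1/35            ! 10GE DCB port to the FCoE storage array
  switchport mode trunk
  switchport trunk allowed vlan 1,1100

interface vfc 135
  bind interface ethernet 1/35
  switchport mode F                ! VF_Port facing the target's VN_Port
  no shutdown

vsan database
  vsan 100 interface vfc 135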
[Diagram: CORE and SAN B shown. Option 1: FC/FCoE targets attached directly to fabric interconnects in FC switch mode; Option 2: fabric interconnects in E-NPV mode uplinked to switch mode FCFs. Blade chassis below.]


Agenda
Why are we here?
Background Information
DCB Standard
FCoE Protocol Information
FCoE Building Blocks
Design Requirements
Classical Ethernet + Classical Fibre Channel = ??
Single Hop Designs
Multi-Hop Designs
FCoE Deployment Considerations
Future??
Questions


FCoE in the Future


What about L2MP?
L2MP provides the potential to change the design of the Data Centre Fabric
L2MP is based on an edge/spine topology
Multiple forwarding topologies can be supported:
Edge/spine architectures
Unique topologies for different forwarding groups (VLANs) are possible
High Availability design rules change: ECMP and a routed design
Traffic capacity planning can change as well, due to new load sharing capabilities
May still find design value in multiple cores and dedicated links
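As a rough sketch only: the FabricPath-style syntax below is an assumption about how L2MP could be expressed on NX-OS, not a committed CLI; names and numbers are illustrative.

install feature-set fabricpath     ! assumed FabricPath-style L2MP syntax
feature-set fabricpath

vlan 10
  mode fabricpath                  ! VLAN carried by the L2MP core

interface ethernet 1/1
  switchport mode fabricpath       ! core-facing link: ECMP forwarding, no STP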
[Diagram: L2MP edge switches connected through an L2MP spine; multi-topology L2MP supports SAN A, SAN B and a scalable LAN, with FCFs and E-NPV devices at the edge.]


FCoE in the Future


L2MP: Unified Wire vs. Unified Dedicated Wire
Does L2MP with multi-topology support change this?
We still have two oversubscription models and traffic patterns to analyze
Multi-topology L2MP fixes the MCEC problem by allowing per-VLAN topologies (unique SAN A and SAN B plus LAN)
Traffic capacity planning becomes far more complex, as measured load varies amongst ECMP links
Unified Dedicated Wires are still the recommendation: they provide better traffic isolation than VSANs/VLANs, with different ports and different protocols!
[Diagram: SAN A and SAN B cores (FCFs) carrying VLANs 10,20 and 10,30 toward E-NPV edge devices. The two links shared by VLAN 10 are more prone to congestion; E-NPV is required only at the edge.]


FCoE Multi-Tier Fabric Design


How will future LAN capabilities change the HA design options?
L2MP is based on an edge/spine/edge topology; larger SAN topologies utilize an edge/core/edge design
High Availability design rules change: ECMP and a routed design
With L2MP, multiple forwarding topologies can be supported
Traffic capacity planning can change as well, due to new load sharing capabilities
Dedicated or shared links for storage, IP, vMotion, backup, ...
Common edge/core/edge for both NAS and FC/FCoE storage
Consistent low latency from any server to any storage (cut-through)
L2MP provides the potential for very large capacity designs

[Diagram: multi-topology L2MP edge/core/edge supporting SAN A, SAN B and LAN. Attached: NAS and FCoE attached storage, servers leveraging block and file based storage, and FC attached storage.]


FCoE in the Future


What about FC-BB-6?
Redefinition of the Fabric Services and the associated design requirements
This was discussed for FC-BB-5; it was too much work at the time and was deferred, but it is a good idea (it may happen in FC-BB-6, which is probably two years away)
Before that, operational changes may allow a different design approach that still meets HA requirements (VSAN/VLAN isolation, L2MP and LAN design evolution, a changing SAN support matrix, ...)
Inside the cloud the data plane is pure Ethernet
FC services can run on any server

[Diagram: initiators and targets connected directly to an Ethernet cloud; zoning is enforced at the edge of the cloud.]


Why Nexus Edge Layer


Uniform Network Fabric supporting Heterogeneous Compute Environments
Consistent Architecture for Heterogeneous Environments

[Diagram: a Core/Aggregation layer above a consolidated, virtualized Nexus edge/access layer connected to SAN A and SAN B. The Unified Access Layer runs 10G DCB FCoE with multi-hop FCoE upstream and spanning tree edge ports toward the servers. Attached compute: 1G and 10GE rack mount servers; 1G and 10GE blade servers with pass-thru (HP/IBM/Dell); N4K DCB blade switches (IBM/Dell); 10GE blades (HP); and UCS compute pods, with the Nexus 1000v running under the VMs.]
Agenda
Why are we here?
Background Information
DCB Standard
FCoE Protocol Information
FCoE Building Blocks
Design Requirements
Classical Ethernet + Classical Fibre Channel = ??
Single Hop Designs
Multi-Hop Designs
FCoE Deployment Considerations
Questions


Complete Your Online Session Evaluation


Give us your feedback and you could win fabulous prizes. Winners announced daily.
Receive 20 Cisco Preferred Access points for each session evaluation you complete.
Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center.

Don't forget to activate your Cisco Live and Networkers Virtual account for access to all session materials, communities, and on-demand and live activities throughout the year. Activate your account at any Internet station or visit www.ciscolivevirtual.com.


Enter to Win a 12-Book Library of Your Choice from Cisco Press

Visit the Cisco Store in the World of Solutions, where you will be asked to enter this Session ID code

Check the Recommended Reading brochure for suggested products available at the Cisco Store

