BRKSAN-2047
Presentation_ID
Cisco Public
Agenda

- Why are we here? Background Information
- DCB Standard
- FCoE Protocol Information
- FCoE Building Blocks and Terminology
- Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
- Single Hop Designs
- Multi-Hop Designs
- FCoE Deployment Considerations
- Questions
The access layer is becoming more than just a port aggregator: it is the edge of a growing Layer 2 topology. Drivers include:

- Scaling of STP edge ports
- Virtual embedded switches
- vPC and loop-free designs
- Layer 2 multi-pathing (future)

[Diagram: blade server chassis aggregating into the access layer, uplinked to the core/aggregation layer]
Feature / Standard — Standards Status

- Priority Flow Control, IEEE 802.1Qbb (PFC) — PAR approved; editor Claudio DeSanti (Cisco); draft 1.0 published
- Bandwidth Management, IEEE 802.1Qaz (ETS) — PAR approved; editor Craig Carlson (QLogic); draft 0.2 published
- Data Center Bridging Exchange Protocol (DCBX) — part of Bandwidth Management, IEEE 802.1Qaz

CEE (Converged Enhanced Ethernet) is an informal group of companies that submitted initial inputs to the DCB WGs.

** Nexus 5000 supports CEE-DCBX as well as the previous generation (CIN-DCBX)
© 2010 Cisco and/or its affiliates. All rights reserved.
[Diagram: lossless behavior on an Ethernet link — the state of the receive buffers triggers per-priority PAUSE frames back to the sender, the Ethernet analogue of Fibre Channel B2B credits]
Best Practice: use the default CoS value of 3 for FCoE/no-drop traffic. This can be changed through QoS class-map configuration.
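As a sketch, classifying no-drop traffic on a different CoS could look like the following NX-OS-style configuration (class, policy, and qos-group values here are illustrative assumptions, not a verified product default):

```
! Sketch (names assumed): classify no-drop traffic on CoS 3 and steer it
! into the FCoE qos-group via a type qos policy applied system-wide.
class-map type qos match-all my-nodrop-class
  match cos 3
policy-map type qos my-nodrop-policy
  class my-nodrop-class
    set qos-group 1
system qos
  service-policy type qos input my-nodrop-policy
```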
The output shows the ports where PFC is configured, the CoS value associated with PFC, and the PAUSE frames received and sent on each port.
[Diagram: ETS bandwidth management — traffic classes offering 2–3 Gb/s each at times t1, t2, t3; unused bandwidth from a lightly loaded class is reallocated, letting the other classes realize 4–6 Gb/s]
Traditional Server

A typical server has equal bandwidth per traffic type. Best practice: FCoE and Ethernet each receive 50%. This can be changed through QoS settings when certain traffic has higher demands (e.g. HPC traffic, additional Ethernet NICs).
FCoE is the mapping of FC frames over Ethernet. It enables FC to run on a lossless Data Center Ethernet network.

Benefits: wire the server once; fewer cables, adapters, and switches; software provisioning of I/O; interoperates with existing SANs; no gateway (stateless). Standardized June 3, 2009.
FC stack: FC-4 ULP Mapping | FC-3 Generic Services | FC-2 Framing & Flow Control | FC-1 Encoding | FC-0 Physical Interface

FCoE stack: FC-4 ULP Mapping | FC-3 Generic Services | FC-2 Framing & Flow Control | FCoE Logical End Point | Ethernet Media Access Control | Ethernet Physical Layer
Unified Fabric

Fibre Channel over Ethernet (FCoE)

FCoE is Fibre Channel at the host and switch level. It is easy to understand and aligned with the operational model of FC (the FC-BB model, standardized in FC-BB-5), with the same techniques of traffic management and the same management and security models.

It is completely based on the FC model: the same host-to-switch and switch-to-switch behavior as FC (e.g. in-order delivery, FSPF load balancing) and the same constructs — WWNs, FC_IDs, hard/soft zoning, the name server (DNS), RSCN.
FCoE itself

FCoE is the data-plane protocol. It carries most of the FC frames and all of the SCSI traffic, and uses a fabric-assigned MAC address (dynamic).

FIP frames use a different Ethertype from FCoE frames, which makes FIP snooping possible for DCB-capable Ethernet bridges — the building foundation for future multi-hop FCoE topologies.

Multi-hop refers to FCoE extending beyond a single hop or access switch. Today, multi-hop is achievable with a Nexus 4000 (FIP Snooping Bridge) connected to a Nexus 5000 (FCF).
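Since FIP and FCoE are distinguished purely by Ethertype (0x8914 for FIP, 0x8906 for FCoE, per FC-BB-5), a snooping bridge can separate control-plane from data-plane frames with a trivial check — sketched here in Python purely for illustration:

```python
# Sketch: classifying Ethernet frames by EtherType, as a FIP-snooping
# bridge must do to treat FIP (control) and FCoE (data) differently.
FCOE_ETHERTYPE = 0x8906  # FCoE data plane (FC-BB-5)
FIP_ETHERTYPE = 0x8914   # FCoE Initialization Protocol (control plane)

def classify(frame: bytes) -> str:
    """Return which FCoE-related protocol an Ethernet frame carries."""
    # The EtherType sits after the 6-byte destination and source MACs.
    ethertype = int.from_bytes(frame[12:14], "big")
    if ethertype == FCOE_ETHERTYPE:
        return "FCoE"
    if ethertype == FIP_ETHERTYPE:
        return "FIP"
    return "other"
```

A FIP-snooping bridge inspects only the FIP frames; FCoE frames are forwarded (and filtered) based on state learned from FIP.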
FIP and FCoE login flow:

1. VLAN discovery
2. FCF discovery
3. FLOGI/FDISC
4. FLOGI/FDISC accept
5. FC commands and responses (FCoE protocol)
The FCF-MAC address is configured on the Nexus 5000 by default once "feature fcoe" has been configured. This is the MAC address returned in step 2 of the FIP exchange, and it is the MAC the host uses to log in to the FCoE fabric.
An Enode MAC is assigned for each FC_ID. The Enode MAC is composed of an FC-MAP and an FC_ID: the FC-MAP (0E-FC-xx) forms the upper 24 bits of the Enode's MAC, and the FC_ID forms the lower 24 bits. FCoE forwarding decisions are still made based on FSPF and the FC_ID within the Enode MAC (Fibre Channel FC_ID addressing). For example, FC-MAP 0E-FC-00 plus FC_ID 10.00.01 yields the FC-MAC address 0E:FC:00:10:00:01.
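The FPMA construction above — FC-MAP in the upper 24 bits, FC_ID in the lower 24 — can be illustrated in a few lines of Python (a sketch for clarity, not any product's implementation):

```python
def fpma(fc_map: int, fc_id: int) -> str:
    """Build a Fabric Provided MAC Address from a 24-bit FC-MAP and FC_ID."""
    assert 0 <= fc_map < 2**24 and 0 <= fc_id < 2**24
    mac = (fc_map << 24) | fc_id  # 48-bit MAC = FC-MAP | FC_ID
    return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -1, -8))

# FC-MAP 0E-FC-00 plus FC_ID 10.00.01:
print(fpma(0x0EFC00, 0x100001))  # → 0e:fc:00:10:00:01
```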
[Diagram: Converged Network Adapter — a PCIe adapter presenting standard Ethernet drivers to the operating system over a 10GbE link, exposing multiple Ethernet ports]
FCoE port types (** EagleHawk / EagleHawk+ timeframes):

- FCF switch: VE_Port (to other FCFs) and VF_Port (fabric-facing)
- E_NPV switch: VNP_Port up to the FCF, VF_Port down to the node
- Node: VN_Port
[Diagram: data center topology — core (L3), aggregation, and access layers; LAN fabric plus SAN Fabric A and Fabric B; shared access over unified wire]
Unified Technology

LAN and SAN networks share the same Unified I/O building blocks — switches and cabling — while maintaining their existing operations, management, and troubleshooting models. The CNA at the server edge provides multi-pathing up through the access, aggregation, and core layers.
Fibre Channel is deterministic: flow control is source-based (B2B credits), and services are fabric-integrated (no loop concept).
In a classical Ethernet network, fabric topology and traffic flows are highly flexible, and client/server relationships are not pre-defined.

[Diagram: Ethernet switches with arbitrary attached devices and flows]
Aggregation and access layers run STP, with services deployed at the L2/L3 boundary of the network (load balancing, firewall, NAM, etc.).
[Diagram: Fibre Channel fabric — every switch runs the fabric services (FSPF, zoning, RSCN, name server/DNS); targets (T1, T2) and initiators (I0–I5) attach at the edge, with dual cores for transport]
Edge-Core-Edge Topology

- For environments where future growth will have the number of storage devices exceeding the number of ports available at the core switch
- One set of edge switches dedicated to server connectivity and another set dedicated to storage devices
- The extra edge can also be a services edge for advanced network services
- The core is for transport only and rarely accommodates end nodes
- HA achieved with dual fabrics
[Diagram: converged design — L3/L2 boundary at the aggregation layer, dual cores, access layer running STP with Virtual PortChannel (vPC)]
SAN scalability:

- Build up the edge, from 20% attach rate up to 100%
- Allow LAN and SAN to scale independently
- Introduce support for native FCoE storage arrays
- Preserve SAN design best practices: oversubscription, fan-in ratios, and hop-count practices honored
FC Target

- DCBX is used to negotiate the enhanced Ethernet capabilities
- FIP is used to negotiate the FCoE capabilities as well as the host login process
- FCoE runs from the host (ENode with CNA) to the access switch (FCF); native Ethernet and native FC break off at the access layer
[Diagram: Generation 2 CNAs negotiate CEE-DCBX; Generation 1 CNAs use CIN-DCBX (VF/VN port instantiation against the LAN fabric and SAN Fabric A/B)]
[Diagram: Nexus 5000 FCF-A — VLAN 10 carries LAN traffic while VSAN 2 and VSAN 3 are each mapped to a dedicated FCoE VLAN]
A dedicated VLAN is carried for each FCoE VSAN (here VLAN 10 for LAN traffic, VLAN 20 for FCoE):

! VLAN 20 is dedicated for VSAN 2 FCoE traffic
(config)# vlan 20
(config-vlan)# fcoe vsan 2
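Beyond the VLAN-to-VSAN mapping, a virtual Fibre Channel interface is bound to the server-facing port and placed in the VSAN — sketched below with assumed interface numbers:

```
! Sketch (interface numbers assumed): map the FCoE VLAN, then bind a
! virtual Fibre Channel interface to the CNA-facing Ethernet port.
(config)# feature fcoe
(config)# vlan 20
(config-vlan)# fcoe vsan 2
(config)# interface vfc1
(config-if)# bind interface ethernet 1/1
(config)# vsan database
(config-vsan-db)# vsan 2 interface vfc1
```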
The FCoE VLANs are separated into their own MST instances:

spanning-tree mst configuration
  name FCoE-Fabric
  revision 5
  instance 5 vlan 1-19,40-3967,4048-4093
  instance 10 vlan 20-29
  instance 15 vlan 30-39
vPC Peers and MCEC

With the NX-OS 4.1(3) releases, a vfc interface can only be associated with a vPC EtherChannel that has one (1) CNA port attached to each edge Nexus 5000 switch.

- While the port-channel is the same on N5K-1 and N5K-2, the FCoE VLANs are different (e.g. VLAN 10,20 on one peer and VLAN 10,30 on the other)
- vPC configuration works with Gen-2 FIP-enabled CNAs ONLY
- FCoE VLANs are not carried on the vPC peer-link; the FCoE and FIP Ethertypes are not forwarded over the vPC peer link
Keeping Fabric A and Fabric B isolated:

- If the SAN design evolves to a shared physical fabric with only VSAN isolation for SAN A and B, this could change (currently this appears to be a big if)
- ISLs between the Nexus 5000 access switches break SAN HA requirements
Nexus 2232 (FEX-2232)

- Management and configuration handled by the Nexus 5000
- Support for Converged Enhanced Ethernet, including PFC
- Part of the Cisco Nexus 2000 Fabric Extender family
The FEX-2232 extends the reach of 10 Gigabit Ethernet/FCoE as a distributed line card (ToR). Up to 384 10Gig/FCoE-attached hosts can be managed by a single Nexus 5000 (the number at FCS may vary). The Nexus 5000 is the FCF, or can be in an FCoE pass-through mode (when supported). In the presence of FCoE, the Nexus 2232 needs to be single-homed to the upstream Nexus 5000 (straight-through N2K) to ensure SAN A and SAN B isolation.
- Server Ethernet driver connects to the FEX with NIC teaming (AFT, TLB) or with vPC (802.3ad)
- FCoE runs over a vPC member port with a single link from server to FEX
- FEX is single-homed to the upstream Nexus 5000
- FEX fabric links can be connected to the Nexus 5000 as individual links (static pinning) or as a port channel, oversubscribed 4:1
- Consistent with separate LAN access and SAN edge topologies

Fabric link options: (1) single-homed port channel from the Nexus 5000 FCF to the Nexus 2232 10GE FEX, or (2) static pinned.
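Associating a Nexus 2232 with its parent Nexus 5000 over a fabric port channel could look like the following sketch (FEX number and interface ranges are assumptions):

```
! Sketch: fabric links to the FEX as a single-homed port channel.
(config)# feature fex
(config)# interface ethernet 1/1-4
(config-if-range)# channel-group 100
(config)# interface port-channel 100
(config-if)# switchport mode fex-fabric
(config-if)# fex associate 100
```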
The Nexus 2232 can currently only be connected to the Nexus 5000 when configured to support FCoE-attached servers. The Nexus 7000 will support the Nexus 2000 in Ethernet-only mode in CY2010 (support for FCoE on FEX is targeted for CY2011 on next-generation N7K line cards).
What is NPIV?

N-Port ID Virtualization (NPIV) provides a means to assign multiple FC_IDs to a single N_Port:

- Allows multiple applications to share the same Fibre Channel adapter port
- Each application has a different pWWN, so access control, zoning, and port security can be implemented at the application level
- Applies to applications such as VMware, MS Virtual Server, and Citrix
[Diagram: an application server's single N_Port logs in to an FC NPIV core switch F_Port and receives N_Port_ID 1 for Email I/O, N_Port_ID 2 for Web I/O, and N_Port_ID 3 for File Services I/O]
What is NPV?

N-Port Virtualizer (NPV) uses NPIV functionality to let a switch act like a server, performing multiple logins through a single physical link:

- Physical servers connected to the NPV switch log in to the upstream NPIV core switch
- The physical uplink from the NPV switch to the FC NPIV core switch does the actual FLOGI; subsequent logins are converted (proxied) to FDISC to log in to the upstream FC switch
- No local switching is done on an FC switch in NPV mode
- An FC edge switch in NPV mode does not take up a domain ID
- Scalability depends on the FC login limitation (~10K per fabric on MDS)
- Supported on the Nexus 5000, MDS 91xx, MDS blade switches, and the UCS Fabric Interconnect
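In configuration terms the split is simple — the core runs NPIV, the edge runs NPV. A sketch (interface number assumed; note that enabling NPV mode on a Nexus 5000 erases the configuration and reboots the switch):

```
! On the NPIV core switch:
(config)# feature npiv

! On the edge switch in NPV mode:
(config)# feature npv
! The uplink toward the core becomes an NP port:
(config)# interface fc2/1
(config-if)# switchport mode NP
```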
[Diagram: servers attach to the NPV switch's F-Ports (Eth1/1–1/3); the NPV switch's NP-Port uplink appears as an N-Port to the core switch's F_Port]
Where does Unified Wire make sense over Unified Dedicated Wire? Unified Wire provides for sharing of a single link for both FC and Ethernet traffic.
FIP snooping is a minimum requirement suggested in FC-BB-5. Ethernet NPV (E-NPV) is a new capability intended to solve a number of design and management challenges.
Why FIP Snooping?

Security: protection from MAC address spoofing of FCoE end devices (ENodes). Fibre Channel links are point-to-point; Ethernet bridges can use ACLs to provide the equivalent path control (the equivalent of point-to-point).
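The ACL behavior can be sketched in a few lines of Python: the bridge learns which FPMA was granted on which port while snooping the FIP login, then permits FCoE frames only from those addresses (an illustration of the idea, not any product's actual implementation):

```python
# Per-port set of FPMAs learned by snooping FIP FLOGI/FDISC accepts.
granted: dict[str, set[str]] = {}

def learn(port: str, fpma: str) -> None:
    """Record an FPMA granted to an ENode on this port via FIP."""
    granted.setdefault(port, set()).add(fpma)

def permit_fcoe(port: str, src_mac: str) -> bool:
    """Permit an FCoE frame only if its source MAC was assigned via FIP
    on this same port; everything else is treated as spoofed and dropped."""
    return src_mac in granted.get(port, set())
```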
On the data plane (FCoE Ethertype), an Ethernet NPV bridge offers more ways to engineer traffic between CNA-facing ports and FCF-facing ports. An Ethernet NPV bridge knows nothing about Fibre Channel and can't parse packets with the FCoE Ethertype.
- E-NPV load-balances logins from the CNAs evenly across the available FCF uplink ports
- E-NPV takes the VSAN into account when mapping or pinning logins from a CNA to an FCF uplink
- Operations and management processes are in line with today's SAN-admin practices — similar to NPV in a native Fibre Channel network
[Diagram: Fabric A — the VN_Port's FLOGI passes through the E_NPV bridge (VF_Port down, VNP_Port up) to the FCF's VF_Port and on to the FC target. The E_NPV bridge does not consume a domain ID.]
FIP and FCoE frames are load-shared over the MCEC on a per-flow basis — NO SAN A and SAN B isolation.
LAN and SAN HA models are very different (and not fully compatible). FC and FCoE are prone to head-of-line blocking (HOLB) in the network, which limits the physical topologies we can build.

For example, 10 x 10G uplinks to the LAN aggregation would require 10 x 10G links to the next-hop SAN core (with targets attached) — no savings; you actually spend more than with direct uplinks to the SAN core. Targets attach to the SAN core (the LAN aggregation and SAN core serve different topology functions), so it is often more beneficial to deploy two cores — SAN and LAN — than a unified core topology.
VE_Ports

- FSPF forwarding for FCoE traffic is end-to-end
- Hosts log into the FCF they are attached to (the access FCF)
- Storage devices log into the FCF at the core/storage edge
- Maintains HA requirements from both the LAN and SAN perspective
Potentially viable for smaller environments. Larger environments will need dedicated FCoE SAN devices providing target ports — either connections to an existing SAN, or a storage edge of other FCoE/DCB-capable devices.
In vPC-enabled topologies, specific design and forwarding rules must be followed to ensure correct forwarding behavior for SAN traffic:

- While the port-channel is the same on N5K-1 and N5K-2, the FCoE VLANs are different (VLAN 10,20 on N5K-1 and VLAN 10,30 on N5K-2; the vPC peers' STP edge trunk MCEC carries IP only, on VLAN 10)
- vPC configuration works with Gen-2 FIP-enabled CNAs ONLY
- FCoE VLANs are not carried on the vPC peer-link; the FCoE and FIP Ethertypes are not forwarded over the vPC peer link
- FEX-2232 extends the reach of 10 Gigabit Ethernet/FCoE as a distributed line card (ToR); up to 384 10Gig/FCoE-attached hosts can be managed by a single Nexus 5000
- The Nexus 5000 is the FCF, can be in FIP Snooping + mode (when supported), or can act as an E-NPV device
- Currently the Nexus 2232 needs to be single-homed to the upstream Nexus 5000 (straight-through N2K) to ensure SAN A and SAN B isolation
- Server Ethernet driver connects to the FEX with NIC teaming (AFT, TLB) or with vPC (802.3ad)
- Fabric link options to the Nexus 2232 10GE FEX: (1) single-homed port channel, or (2) static pinned
Please see session BRKSAN-2047 — Storage and the Unified Fabric for more information on FCoE.
[Diagram: SAN A and SAN B FCFs — Option 1: dedicated links and topologies; Option 2: single-homed unified wire]
[Diagram: UCS 6100 Fabric Interconnect in End-Host Mode (EHM) uplinked to the aggregation/core, with UCS blade chassis below]
FCoE Multi-Tier

Potential Migration for UCS

The FCoE design will follow the same evolution. Should the 6100 become a fabric switch? It is technically viable, but does it add too much operational complexity (e.g. SW certification cycles, operational change management)? Migration would be from NPV to E-NPV mode. Again, the key question is where the demarcation sits between Unified Wire and Unified Fabric. Note: 6100 fabric HA is required due to the lack of vPC uplinks when running Unified Wire from the 6100 to the next hop.
FCoE Multi-Tier

Potential Migration for UCS

At smaller scale: what do we collapse to, and what is the migration to FCoE targets? Again, the key question is where the demarcation sits between Unified Wire and Unified Fabric. If we direct-attach targets, do we need to support the 6100 as a switch in a multi-switch fabric? Is Option 1 viable only for standalone implementations?
[Diagram: Multi-Hop FCoE vision — an Ethernet cloud connecting Nexus 1000v virtual access switches (hosting VMs) and blade chassis up through the core/aggregation layer]
Don't forget to activate your Cisco Live and Networkers Virtual account for access to all session materials, communities, and on-demand and live activities throughout the year. Activate your account at any internet station or visit www.ciscolivevirtual.com.
Visit the Cisco Store in the World of Solutions, where you will be asked to enter this Session ID code
Check the Recommended Reading brochure for suggested products available at the Cisco Store