
Network Integration of Server Virtualization with LAN and Storage
Network Implications & Best Practices

BRKDCT-2868
Bjørn R. Martinussen

© 2009 Cisco Systems, Inc. All rights reserved. Cisco Public

Housekeeping
• We value your feedback: don't forget to complete your online session evaluations after each session, and complete the Overall Conference Evaluation, which will be available online from Thursday
• Visit the World of Solutions
• Please remember this is a 'non-smoking' venue!
• Please switch off your mobile phones
• Please make use of the recycling bins provided
• Please remember to wear your badge at all times, including the Party


Session Objectives
At the end of the session, participants should be able to:
• Objective 1: Understand key concepts of server virtualization architectures as they relate to the network
• Objective 2: Explain the impact of server virtualization on DC network design (Ethernet and Fibre Channel)
• Objective 3: Design Cisco DC networks to support server virtualization environments


Server Virtualization
Network Implications of Server Virtualization and Best Practices


Virtualization

[Diagram: comparison of virtualization architectures. VMware: apps and guest OSes run on a hypervisor built into a modified, stripped-down OS. Microsoft: guest OSes run on a hypervisor alongside a host OS. Xen, a.k.a. paravirtualization: modified guest OSes run on the hypervisor. In each case the hypervisor mediates access to the physical CPUs.]


VMware Architecture in a Nutshell

[Diagram: an ESX Server host. The VM virtualization layer runs the virtual machines (each with its own OS and applications) and the Console OS on top of the physical hardware (CPU, memory). Three networks attach to the host: the management network, the VM kernel network and the production network.]

VMware HA Clustering

[Diagram: three ESX hosts, each with its own hypervisor, CPU and memory, running App1-App5 in guest OSes; VMware HA restarts the VMs of a failed host on the surviving ESX hosts.]

Application-level HA Clustering (provided by MSCS, Veritas, etc.)

[Diagram: the same three ESX hosts; App1 and App2 are additionally clustered at the application level, with a second instance of each running in a guest OS on another ESX host.]

HA + DRS
• HA takes care of powering on VMs on available ESX hosts in the least possible time (regular migration, not VMotion based)
• DRS takes care of migrating the VMs over time to the most appropriate ESX host based on resource allocation (VMotion migration)


Agenda
• VMware LAN Networking
  - vSwitch Basics
  - NIC Teaming
  - vSwitch vs. LAN Switch
  - Cisco/VMware DC Designs
• VMware SAN Designs
• VMware Virtual Networking


VMware Networking Components

[Diagram: per-ESX-server configuration. Two VMs (VM_LUN_0005, VM_LUN_0007) attach their vNICs to virtual ports on vSwitch0; the vSwitch uses the physical NICs vmnic0 and vmnic1 as its uplinks (VMNICs = uplinks).]
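From the ESX 3.x service console the same objects can be listed directly; a minimal sketch (output abbreviated):

  esxcfg-nics -l      # physical NICs (vmnic0, vmnic1, ...) available as vSwitch uplinks
  esxcfg-vswitch -l   # vSwitches, their uplinks, port groups and ports in use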


vNIC MAC Address
• A VM's MAC address is automatically generated
• Mechanisms exist to avoid MAC collisions
• A VM's MAC address doesn't change with migration
• VM MAC addresses can be made static by modifying the configuration file: ethernetN.address = 00:50:56:XX:YY:ZZ

Example (/vmfs/volumes/46b9d79a2de6e23e-929d001b78bb5a2c/VM_LUN_0005/VM_LUN_0005.vmx):
  ethernet0.addressType = "vpx"
  ethernet0.generatedAddress = "00:50:56:b0:5f:24"
With a static address:
  ethernet0.addressType = "static"
  ethernet0.address = "00:50:56:00:00:06"


vSwitch Forwarding Characteristics
• Forwarding is based on MAC address (no learning): if traffic doesn't match a VM's MAC address, it is sent out to a vmnic (uplink)
• VM-to-VM traffic stays local
• vSwitches tag traffic with an 802.1Q VLAN ID
• vSwitches are 802.1Q capable
• vSwitches can create EtherChannels



VM → Port-Group → vSwitch

[Screenshot: a VM's vNIC attaches to a port group, which is defined on a vSwitch.]


VLANs - External Switch Tagging (EST)
• VLAN tagging and stripping is done by the physical switch
• No ESX configuration is required, as the server is not tagging
• The number of VLANs supported is limited to the number of physical NICs in the server

[Diagram: VM1, VM2, the Service Console and the VMkernel NIC connect to vSwitch A and vSwitch B in the ESX server; each vSwitch uplinks through a physical NIC to an access port on the physical switches (VLAN 100, VLAN 200).]


VLANs - Virtual Switch Tagging (VST)
• The vSwitch tags outgoing frames with the VLAN ID
• The vSwitch strips any dot1Q tag before delivering the frame to the VM
• Physical NICs and the switch port operate as a trunk
• The number of VLANs per VM is limited to the number of vNICs
• No VTP or DTP; everything is static configuration. Prune VLANs on the trunk so ESX doesn't have to process unnecessary broadcasts (a configuration sketch follows)

[Diagram: VM1, VM2, the Service Console and the VMkernel NIC on vSwitch A in the ESX server; the physical NICs carry a dot1Q trunk (VLAN 100, VLAN 200) to the physical switches.]
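On the ESX side, VST just means assigning a VLAN to the port group; a minimal sketch using the ESX 3.x service console (the port-group name "Production" and VLAN 100 are illustrative; the same settings exist in the VI Client):

  esxcfg-vswitch -A "Production" vSwitch0         # create the port group
  esxcfg-vswitch -p "Production" -v 100 vSwitch0  # VST: the vSwitch tags/strips VLAN 100
  esxcfg-vswitch -L vmnic1 vSwitch0               # add an uplink that carries the 802.1Q trunk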

VLANs - Virtual Guest Tagging (VGT)
• Port-group VLAN ID is set to 4095
• The guest can send/receive any tagged VLAN frame
• The number of VLANs per guest is not limited to the number of vNICs
• Tagging and stripping of VLAN IDs happens in the guest; the VM requires an 802.1Q driver
• VMware does not ship the driver: Windows - E1000, Linux - dot1q module

[Diagram: VM1 and VM2 apply the dot1Q tags themselves; vSwitch A passes the tagged frames through the physical NICs, which trunk (dot1Q, VLAN 100, VLAN 200) to the physical switches.]
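For VGT the only vSwitch-side change is the special VLAN ID 4095 on the port group, which passes tags through to the guest; a minimal sketch (the port-group name is illustrative):

  esxcfg-vswitch -A "VGT-AllVLANs" vSwitch0
  esxcfg-vswitch -p "VGT-AllVLANs" -v 4095 vSwitch0   # 4095 = leave the tags for the guest's 802.1Q driver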

Agenda
• VMware LAN Networking
  - vSwitch Basics
  - NIC Teaming
  - vSwitch vs. LAN Switch
  - Cisco/VMware DC Designs


Meaning of NIC Teaming in VMware (1)

[Diagram: on the ESX server host, NIC teaming means grouping the physical NICs (vmnic0-vmnic3) used as vSwitch uplinks; pairing vNICs inside the VMs is NOT NIC teaming.]


Meaning of NIC Teaming in VMware (2)

[Screenshot: teaming is configured at the vmnic (uplink) level, not on the VM's vNICs.]


Design Example: 2 NICs, VLAN 1 and 2, Active/Standby

[Diagram: vSwitch0 on the ESX server has two uplinks (vmnic0, vmnic1), each an 802.1Q trunk carrying VLANs 1 and 2. Port-Group 1 (VLAN 2) carries VM1 and VM2; Port-Group 2 (VLAN 1) carries the Service Console.]


Active/Standby per-Port-Group

[Diagram: vSwitch0 uplinks VMNIC0 and VMNIC1 connect to CBS-left and CBS-right. Port-Group1 (VM5 .5, VM7 .7) and Port-Group2 (VM4 .4, VM6 .6) each define their own active/standby uplink order, so both uplinks are used while each port group fails over to the other NIC.]

Port-Group Overrides vSwitch Global Configuration

[Screenshot: per-port-group settings override the vSwitch global teaming configuration.]


Active/Active

[Diagram: a vSwitch with one port group and both uplinks (vmnic0, vmnic1) active; traffic from VM1-VM5 is distributed across the uplinks.]

Active/Active: IP-Based Load Balancing
• Works with channel-group mode ON (static EtherChannel)
• LACP is not supported: ports configured for LACP on the upstream switch are suspended (see the log below and the sketch that follows)

[Diagram: the vSwitch port group with uplinks vmnic0 and vmnic1 bundled toward the upstream switch; VM1-VM4 traffic is distributed per source/destination IP hash.]

9w0d: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/14, changed state to up
9w0d: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/13, changed state to up
9w0d: %EC-5-L3DONTBNDL2: Gi1/0/14 suspended: LACP currently not enabled on the remote port.
9w0d: %EC-5-L3DONTBNDL2: Gi1/0/13 suspended: LACP currently not enabled on the remote port.
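On the upstream Cisco switch the matching configuration is a static EtherChannel; a minimal sketch, assuming the two ports and channel number shown here (adjust to your topology):

  interface range GigabitEthernet1/0/13 - 14
   switchport
   switchport trunk encapsulation dot1q
   switchport mode trunk
   channel-group 10 mode on   ! static bundling to match the vSwitch IP-hash policy; no LACP/PAgP

Note that IP-hash teaming requires all bundled NICs to terminate on a single switch (or on a clustered/stacked pair that presents one EtherChannel).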

Agenda
• VMware LAN Networking
  - vSwitch Basics
  - NIC Teaming
  - vSwitch vs. LAN Switch
  - Cisco/VMware DC Designs


All Links Active, No Spanning-Tree: Is There a Loop?

[Diagram: vSwitch1 with four uplinks (NIC1-NIC4) split between CBS-left and CBS-right, and two port groups (Port-Group1: VM5 .5, VM7 .7; Port-Group2: VM4 .4, VM6 .6). All uplinks are active, yet no loop forms, because the vSwitch never forwards traffic from one uplink to another.]

Broadcast/Multicast/Unknown Unicast Forwarding in Active/Active (1)

[Diagram: vSwitch0 with Port-Group 1 (VLAN 2) and two 802.1Q uplinks (vmnic0, vmnic1) carrying VLANs 1-2; the figure illustrates how broadcast, multicast and unknown unicast frames from VM1/VM2 are forwarded when both uplinks are active.]

Broadcast/Multicast/Unknown Unicast Forwarding in Active/Active (2)

[Diagram: the ESX host's vSwitch with uplinks NIC1 and NIC2 (802.1Q, VLANs 1-2) and VM1-VM3; the figure illustrates how such frames received from the physical switches are handled.]

Can the vSwitch Pass Traffic Through? (e.g., HSRP)

[Diagram: a vSwitch with uplinks NIC1 and NIC2 and VMs VM1, VM2; the question is whether traffic entering one uplink, such as HSRP hellos between upstream routers, can be forwarded out the other uplink. It cannot: a vSwitch does not forward frames between its uplinks.]


Is This Design Possible?

[Diagram: ESX server1 with VMNIC1 and VMNIC2 uplinked over 802.1Q trunks to two separate switches (Catalyst1, Catalyst2), with VM5 (.5) and VM7 (.7) on the vSwitch.]

vSwitch Security
• Promiscuous Mode = Reject prevents a port from capturing traffic whose destination is not the VM's MAC address
• MAC Address Changes = Reject prevents the VM from modifying its vNIC's MAC address
• Forged Transmits = Reject prevents the VM from sending traffic with a different source MAC (e.g., NLB)


vSwitch vs. LAN Switch
• Similarly to a LAN switch:
  - Forwarding based on MAC address
  - VM-to-VM traffic stays local
  - vSwitches tag traffic with an 802.1Q VLAN ID
  - vSwitches are 802.1Q capable
  - vSwitches can create EtherChannels
  - Preemption configuration (similar to Flexlinks, but with no preemption delay)
• Differently from a LAN switch:
  - No learning
  - No Spanning Tree Protocol
  - No dynamic trunk negotiation (DTP)
  - No 802.3ad LACP
  - Two EtherChannels backing up each other is not possible
  - No SPAN/mirroring capabilities: traffic capturing is not the equivalent of SPAN
  - Limited Port Security

Agenda
• VMware LAN Networking
  - vSwitch Basics
  - NIC Teaming
  - vSwitch vs. LAN Switch
  - Cisco/VMware DC Designs


vSwitch and NIC Teaming Best Practices
• Q: Should I use multiple vSwitches or multiple Port-Groups to isolate traffic?
  A: We didn't see any advantage in using multiple vSwitches; multiple Port-Groups with different VLANs give you enough flexibility to isolate servers.
• Q: Which NIC Teaming configuration should I use?
  A: Active/Active, Virtual Port-ID based.
• Q: Do I have to attach all NICs in the team to the same switch or to different switches?
  A: With Active/Active Virtual Port-ID based, it doesn't matter.
• Q: Should I use EST or VST?
  A: Always use VST, i.e. assign the VLAN from the vSwitch.
• Q: Can I use the native VLAN for VMs?
  A: Yes you can, but to keep it simple, don't. If you do, do not tag the VMs with the native VLAN.
• Q: Should I use Beaconing?
  A: No.
• Q: Should I use Rolling Failover (i.e., no preemption)?
  A: No, the default is good; just enable trunkfast on the Cisco switch.

Cisco Switchport Configuration
• Make it a trunk
• Enable trunkfast (spanning-tree portfast trunk)
• Can the native VLAN be used for VMs? Yes, but if you do, you have two options:
  - Configure VLAN ID = 0 for the VMs that are going to use the native VLAN (preferred)
  - Configure 'vlan dot1q tag native' on the Catalyst 6500 (not recommended)
• Do not enable Port Security (see next slide)
• Make sure that teamed NICs are in the same Layer 2 domain
• Provide a redundant Layer 2 path
• The trunk typically carries: SC, VMKernel and VM production VLANs

Reference interface configuration:
  interface GigabitEthernetX/X
   description <<** VM Port **>>
   no ip address
   switchport
   switchport trunk encapsulation dot1q
   switchport trunk native vlan <id>
   switchport trunk allowed vlan xx,yy-zz
   switchport mode trunk
   switchport nonegotiate
   spanning-tree portfast trunk
   no cdp enable
  !
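A quick way to verify the result from the switch side (a sketch; the interface name is illustrative):

  show interfaces GigabitEthernet1/0/13 trunk    ! confirms trunking mode, native VLAN and allowed VLANs
  show interfaces GigabitEthernet1/0/13 status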

Configuration with 2 NICs: SC, VMKernel and Production Share the NICs

[Diagram: vSwitch0 with two 802.1Q uplinks (VMNIC1, VMNIC2) teamed Active/Active, trunking the production VLANs, Service Console and VMkernel. Port-Group 1 (VMs) inherits the global Active/Active policy with VST; Port-Group 2 (Service Console) is Active/Standby vmnic1/vmnic2; Port-Group 3 (VMkernel) is Active/Standby vmnic2/vmnic1. HBA1/HBA2 provide storage connectivity.]

Configuration with 2 NICs: Dedicated NIC for SC/VMKernel, Separate NIC for Production

[Diagram: the same vSwitch0 and 802.1Q uplinks, but the VM port group is set Active/Standby vmnic1/vmnic2 while the Service Console and VMkernel port groups are set Active/Standby vmnic2/vmnic1, so production traffic normally uses one NIC and SC/VMkernel traffic the other, each failing over to the remaining NIC.]

Network Attachment (1)

[Diagram: Rapid PVST+ with Catalyst1 as root and Catalyst2 as secondary root; no blocked port, no loop. ESX server1 and ESX server2 uplink VMNIC1 and VMNIC2 over 802.1Q trunks (Production, SC, VMKernel) to the two switches; the switch ports use trunkfast and BPDU guard. All NICs are used and traffic is distributed on all links.]

Network Attachment (2)

[Diagram: typical Spanning-Tree V-shape topology; Rapid PVST+ with root and secondary root, trunkfast and BPDU guard. ESX server1 and ESX server2 uplink VMNIC1 and VMNIC2 over 802.1Q trunks (Production, SC, VMKernel). All NICs are used and traffic is distributed on all links.]


Network Attachment (1)

[Diagram: Rapid PVST+ with Catalyst1 as root and Catalyst2 as secondary root; no blocked port, no loop; trunkfast and BPDU guard on the ESX-facing ports. Each ESX server splits its uplinks: one 802.1Q trunk carries SC and VMKernel, the other carries Production.]

Network Attachment (2)

[Diagram: typical Spanning-Tree V-shape topology (Rapid PVST+, root and secondary root, trunkfast and BPDU guard); each ESX server dedicates one 802.1Q uplink to Production and the other to SC and VMKernel.]

How About?

[Diagram: a variation of the V-shape topology in which the Production and SC/VMKernel uplinks are attached differently across Catalyst1 and Catalyst2; the slide poses this attachment as a question.]

4 NICs with EtherChannel

[Diagram: clustered upstream switches; ESX server1 and ESX server2 each use four NICs in EtherChannels to the switch cluster, one 802.1Q trunk for Production and one for SC and VMKernel.]

VMotion Migration Requirements


VMKernel Network Can Be Routed

[Diagram: an ESX Server host running virtual machines, with a management network, a VM kernel network and a production network; the VMkernel network used for VMotion and IP storage can be a routed (Layer 3) network.]
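A minimal sketch of configuring and testing a routed VMkernel interface from the ESX 3.x service console (addresses and the port-group name are illustrative):

  esxcfg-vmknic -a "VMkernel" -i 10.0.200.11 -n 255.255.255.0   # add the VMkernel NIC to the port group
  esxcfg-route 10.0.200.1                                       # VMkernel default gateway
  vmkping 10.0.210.11                                           # test reachability of a remote VMkernel IP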



VMotion L2 Design

[Diagram: two ESX hosts in different racks (Rack1, Rack10), each with vSwitches for the service console, vmkernel and VM traffic and uplinks vmnic0-vmnic3; the vmkernel interfaces of both hosts share the same Layer 2 domain so VMs (VM4-VM6) can be VMotioned between racks.]

HA Clustering (1)
• EMC/Legato AAM based
• The HA agent runs in every host
• Heartbeats: unicast UDP, ports around 8042 (4 UDP ports opened)
• Heartbeats run on the Service Console ONLY
• When a failure occurs, the ESX host pings the gateway (on the SERVICE CONSOLE only) to verify network connectivity
• If the ESX host is isolated, it shuts down its VMs, thus releasing the locks on the SAN
• Recommendations:
  - Have two Service Consoles on redundant paths
  - Avoid losing SAN access (e.g., via iSCSI)
  - Make sure you know beforehand whether DRS is activated too
• Caveat: losing Production VLAN connectivity only ISOLATES the VMs (there's no equivalent of uplink tracking on the vSwitch)
• Solution: NIC teaming
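If the Service Console default gateway is not a suitable isolation-check target, VMware HA in VI3 also accepts an isolation address as an advanced cluster option; a sketch (the address is illustrative):

  das.isolationaddress = 10.0.2.254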

HA Clustering (2)

[Diagram: ESX1 and ESX2 server hosts, each running VM1 and VM2, connected to three networks: iSCSI access/VMkernel (10.0.200.0), COS (10.0.2.0) and Production (10.0.100.0).]

Agenda
• VMware LAN Networking
• VMware SAN Designs
  - Storage Fundamentals
  - Storage Protocols
• VMware Virtual Networking


Multiple ESX Servers: Shared Storage


VMFS
VMFS is a high-performance cluster file system for virtual machines.
• Stores the entire virtual machine state in a central location
• Supports heterogeneous storage arrays
• Adds more storage to a VMFS volume dynamically
• Allows multiple ESX Servers to access the same virtual machine storage concurrently
• Enables virtualization-based distributed infrastructure services such as VMotion, DRS and HA

[Diagram: several ESX Servers, each with a VMFS datastore on shared storage (e.g., A.vmdk), accessed concurrently by the virtual machines.]

Three Layers of the Storage Stack

[Diagram: virtual disks (VMDK) seen by the virtual machine; datastores/VMFS volumes (LUNs) seen by the ESX Server; physical disks in the storage array.]

ESX Server View of SAN
• Fibre Channel disk arrays appear as SCSI targets (devices), which may have one or more LUNs
• On boot, ESX Server scans for all LUNs by sending an inquiry command to each possible target/LUN number
• A rescan causes ESX Server to scan again, looking for added or removed targets/LUNs
• ESX Server can send normal SCSI commands to any LUN, just like a local disk
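A rescan can be issued from the VI Client (Configuration > Storage Adapters > Rescan) or from the service console; a minimal sketch (the adapter name is illustrative):

  esxcfg-rescan vmhba1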


ESX Server View of SAN (Cont.)
• A built-in locking mechanism ensures that multiple hosts can safely access the same disk on the SAN
  - VMFS-2 and VMFS-3 are distributed file systems that perform the appropriate on-disk locking to allow many ESX Servers to access the same VMFS
• Storage is a resource that must be monitored and managed to ensure VM performance
  - Leverage 3rd-party systems and storage management tools
  - Use VirtualCenter to monitor storage performance from the virtual infrastructure point of view


Choices in Protocol
• FC, iSCSI or NAS?
  - Best practice is to leverage the existing infrastructure and not introduce too many changes all at once
  - Virtual environments can leverage all types; you can choose what fits best and even mix them
  - Common industry perceptions and trade-offs still apply in the virtual world
  - What works well for one does not work for all


Which Protocol to Choose?
• Leverage the existing infrastructure when possible
• Consider customer expertise and ability to learn
• Consider the costs (dollars and performance)
• What does the environment need in terms of throughput? Size for aggregate throughput before capacity
• What functionality is really needed for the virtual machines?
  - VMotion, HA, DRS (work on both NAS and SAN)
  - VMware Consolidated Backup (VCB)
  - ESX boot from disk
  - Future scalability
  - DR requirements


FC SAN: Considerations
• Leverage multiple paths for high availability
• Manually distribute I/O-intensive VMs on separate paths
• Block access provides optimal performance for large, high-transactional-throughput workloads
• Considered the industrial-strength backbone for most large enterprise environments
• Requires expertise in the storage management team
• Expensive price per port of connectivity
• Increasing to 10 Gb throughput (soon)

iSCSI: Considerations
• Uses standard LAN (Ethernet/IP) infrastructure. Best practices:
  - Have a dedicated LAN/VLAN to isolate iSCSI from other network traffic
  - Use GbE or faster network
  - Use multiple NICs or iSCSI HBAs
  - Use an iSCSI HBA for performance environments
  - Use the SW initiator for cost-sensitive environments
• Supports all VI3 features
  - VMotion, DRS, HA
  - ESX boot from HW initiator only
  - VCB is in experimental support today; full support shortly
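A minimal sketch of enabling the ESX 3.x software initiator from the service console (the software iSCSI adapter name varies by release, so treat vmhba40 as illustrative; target discovery is then added via the VI Client):

  esxcfg-firewall -e swISCSIClient   # open the service console firewall for iSCSI
  esxcfg-swiscsi -e                  # enable the software iSCSI initiator
  esxcfg-rescan vmhba40              # rescan the software iSCSI adapter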

NFS: Considerations
• Has more protocol overhead but less file-system overhead than VMFS, as the NAS file system lives on the NAS head
• Simple to define in ESX by providing:
  - NFS server hostname or IP
  - NFS share
  - ESX local datastore name
• No tuning required for ESX, as most settings are already defined
  - No options for rsize or wsize
  - Version is v3, protocol is TCP
• Max mount points = 8 by default (can be increased to a hard limit of 32)
• Supports almost all VI3 features except VCB
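A minimal sketch of mounting an NFS datastore from the ESX 3.x service console (server, export and datastore names are illustrative):

  esxcfg-nas -a -o nas01.example.com -s /vol/vmware nfs_datastore1
  esxcfg-nas -l    # list the configured NAS datastores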



Summary of Features Supported

Protocol            VMotion, DRS & HA   VCB    ESX boot from disk
FC SAN              Yes                 Yes    Yes
iSCSI SAN HW init   Yes                 Soon   Yes
iSCSI SAN SW init   Yes                 Soon   No
NFS                 Yes                 No     No

Choosing Disk Technologies
• Traditional performance factors
  - Capacity / price
  - Disk types (SCSI, FC, SATA/SAS)
  - Access time; IOPS; sustained transfer rate
  - Drive RPM to reduce rotational latency
  - Seek time
  - Reliability (MTBF)
• VM performance is ultimately gated by IOPS density and storage space
• IOPS density = number of read IOPS per GB (higher is better)

The Choices One Needs to Consider
• File system vs. raw: VMFS vs. RDM (when to use each)
• NFS vs. block: NAS vs. SAN (why use each)
• iSCSI vs. FC: what is the trade-off?
• Boot from SAN: sometimes needed for diskless servers
• Recommended LUN size: it depends on application needs
• File system vs. LUN snapshots (host or array vs. VMware VMFS snapshots): which to pick?
• Scalability (factors to consider): number of hosts, dynamic addition of capacity, practical vs. physical limits

Trade-Offs to Consider
• Ease of provisioning
• Ease of ongoing management
• Performance optimization
• Scalability: headroom to grow
• Function of 3rd-party services
  - Remote mirroring
  - Backups
  - Enterprise systems management
• Skill level of the administration team
• How many shared vs. isolated storage resources

Isolate vs. Consolidate Storage Resources
• RDMs map a single LUN to one VM
• One can also dedicate a single VMFS volume to one VM
• When comparing VMFS to RDMs, these two configurations are what should be compared
• The bigger question is how many VMs can share a single VMFS volume without contention causing pain
• The answer depends on many variables:
  - Number of VMs and their workload type
  - Number of ESX servers those VMs are spread across
  - Number of concurrent requests to the same disk sector/platter

Isolate vs. Consolidate
• Isolate: poor utilization, islands of allocations, more management
• Consolidate: increased utilization, easier provisioning, less management

Where Have You Heard This Before?
• Remember the DAS-to-SAN migration
• Convergence of LAN and NAS
• All the same concerns have been raised before:
  - What if the workload of some causes problems for all?
  - How will we know who is taking the lion's share of a resource?
  - What if it does not work out?
Our biggest obstacle is conventional wisdom: "The Earth is flat!", "If man were meant to fly he would have wings."


VMFS vs. RDM: RDM Advantages
• Virtual machine partitions are stored in the native guest OS file system format, facilitating layered applications that need this level of access
• As there is only one virtual machine on a LUN, you have much finer-grained characterization of the LUN and no I/O or SCSI reservation lock contention; the LUN can be designed for optimal performance
• With Virtual Compatibility mode, virtual machines have many of the features of being on a VMFS, such as file locking to allow multiple access, and snapshots


VMFS vs. RDM: RDM Advantages (Cont.)
• With Physical Compatibility mode, a virtual machine can send almost all low-level SCSI commands to the target device, including command and control to a storage controller, for example through SAN management agents in the virtual machine
• Dynamic name resolution: stores unique information about the LUN regardless of changes to its physical address due to hardware or path changes


VMFS vs. RDM: RDM Disadvantages
• Not available for block or RAID devices that do not report a SCSI serial number
• No snapshots in Physical Compatibility mode; only available in Virtual Compatibility mode
• Can be very inefficient in that, unlike VMFS, only one VM can access an RDM


RDMs and Replication
• RDM-mapped raw LUNs can be replicated to the remote site
• RDMs reference the raw LUNs via the LUN number and the LUN ID
• VMFS-3 volumes on the remote site will have an unusable RDM configuration if either property changes
• Remove the old RDMs and recreate them
  - Must correlate RDM entries to the correct raw LUNs
  - Use the same RDM file name as the old one to avoid editing the .vmx file

Storage: Type of Access
• RAW
  - RAW may give better performance
  - RAW means more LUNs, hence more provisioning time
  - Advanced features still work
• VMFS
  - Leverage templates and quick provisioning
  - Fewer LUNs means you don't have to watch the heap
  - Scales better with Consolidated Backup
  - Preferred method

Storage: How Big Can I Go?
• One big volume or individual volumes?
  - Will you be doing replication? More granular slices will help
  - High-performance applications? Individual volumes could help
• With Virtual Infrastructure 3, the VMDK, swap, config files, log files and snapshots all live on VMFS


What Is iSCSI?
• A SCSI transport protocol, enabling access to storage devices over standard TCP/IP networks
  - Maps SCSI block-oriented storage over TCP/IP
  - Similar to mapping SCSI over Fibre Channel
• Initiators, such as an iSCSI HBA in an ESX Server, send SCSI commands to targets located in iSCSI storage systems
• Block storage over IP


VMware iSCSI Overview
• VMware added iSCSI as a supported option in VI3
  - Block-level I/O over TCP/IP using the SCSI-3 protocol
  - Supports both hardware and software initiators
  - GigE NICs MUST be used for SW initiators (no 100 Mb NICs)
  - Supports iSCSI HBAs (HW init) and NICs for SW init only today
  - Check the HCL for supported HW initiators and SW NICs
• What is not supported in ESX 3.0.1:
  - 10 GigE
  - Jumbo frames
  - Multiple Connections per Session (MCS)
  - TCP Offload Engine (TOE) cards

VMware ESX Storage Options

[Diagram: VMs accessing storage over FC, iSCSI/NFS, and DAS (SCSI).]

• 80%+ of the install base uses FC storage
• iSCSI is popular in the SMB market
• DAS is not popular because it prohibits VMotion


Virtual Servers Share a Physical HBA
• A zone includes the physical HBA and the storage array
• Access control is delegated to storage array LUN masking and mapping; it is based on the physical HBA pWWN and is the same for all VMs
• The hypervisor is in charge of the mapping; errors may be disastrous

[Diagram: virtual servers on a hypervisor share one physical HBA (pWWN-P): a single login on a single point-to-point connection into the MDS 9000 fabric, one zone, and LUN mapping/masking on the storage array; the FC name server sees only pWWN-P.]

NPIV Usage Examples

[Diagram: two examples. Intelligent pass-thru: the switch uses an NP_Port uplink and becomes an HBA concentrator. Virtual machine aggregation: an NPIV-enabled HBA logs in multiple virtual identities through the fabric F_Port, one per VM.]

Raw Device Mapping
• RDM allows direct read/write access to a disk
• The block mapping is still maintained within a VMFS file
• Rarely used, but important for clustering (MSCS is supported)
• Used with NPIV environments

[Diagram: VM1 uses an RDM whose mapping file lives on the VMFS; VM2 uses a virtual disk on the VMFS; both reach the FC storage through the host's FC HBAs.]


Storage Multi-Pathing
• No storage load balancing, strictly failover
• Two modes of operation dictate behavior: Fixed and Most Recently Used (MRU)
• Fixed mode
  - Allows definition of preferred paths
  - If the preferred path fails, a secondary path is used
  - If the preferred path reappears, it fails back
• Most Recently Used
  - If the current path fails, a secondary path is used
  - If the previous path reappears, the current path is still used
• Supports both Active/Active and Active/Passive arrays
• Auto-detects multiple paths
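The discovered paths and the policy in use can be checked from the ESX 3.x service console; a minimal sketch:

  esxcfg-mpath -l    # lists each LUN, its paths, and the current policy (fixed or mru)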


Q and A


Recommended Reading


Agenda
• VMware LAN Networking
• VMware SAN Designs
• VMware Virtual Networking


Server Virtualization
Cisco Nexus 1000V Virtual Switch


VI 3.5 Network Configuration



Nexus 1000V Virtual Chassis Model
• One Virtual Supervisor Module (VSM) managing multiple Virtual Ethernet Modules (VEMs)
  - Dual supervisors to support HA environments
• A single Nexus 1000V can span multiple ESX clusters

SVS-CP# show module
Mod  Ports  Module-Type              Model              Status
1    1      Supervisor Module        Cisco Nexus 1000V  active *
2    1      Supervisor Module        Cisco Nexus 1000V  standby
3    48     Virtual Ethernet Module                     ok
4    48     Virtual Ethernet Module                     ok

Single Chassis Management
• A single switch from the control plane and management plane perspective
  - Protocols such as CDP operate as a single switch
  - XML API and SNMP management appears as a single virtual chassis

Upstream-4948-1# show cdp neighbor
Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone
Device ID     Local Intrfce   Holdtme   Capability   Platform      Port ID
N1KV-Rack10   Gig 1/5         136                    Nexus 1000V   Eth2/2
N1KV-Rack10   Gig 1/10        136                    Nexus 1000V   Eth3/5
N1KV-Rack10   Gig 1/12        136                    Nexus 1000V   Eth21/2


Deploying the Cisco Nexus 1000V: Collaborative Deployment Model
1. The VMware Virtual Center and Cisco Nexus 1000V relationship is established
2. The network admin configures the Nexus 1000V to support new ESX hosts
3. The server admin plugs a new ESX host into the network and adds the host to the Cisco DVS with Virtual Center
4. Repeat step 3 to add another host and extend the DVS configuration

[Diagram: the Nexus 1000V VSM and VMware Virtual Center coordinate; each ESX host added to the Nexus 1000V DVS receives a VEM.]

Introduction to Port Profiles
• Port profiles are a collection of interface commands, for example:
    switchport mode access
    switchport access vlan 57
    no shutdown
• Applied at the interface level, to either physical or virtual interfaces
• Dynamic configuration: port-profile changes are propagated immediately to all ports using that profile
• Interfaces can also be configured manually in conjunction with a profile
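Putting the pieces together, a minimal port-profile sketch on the VSM (the profile name and VLAN are illustrative; the propagation settings are covered on the next slide):

  n1000v(config)# port-profile WebServers
  n1000v(config-port-prof)# switchport mode access
  n1000v(config-port-prof)# switchport access vlan 57
  n1000v(config-port-prof)# vmware port-group
  n1000v(config-port-prof)# no shutdown
  n1000v(config-port-prof)# state enabled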


Port Profiles Propagation
• Port profiles are pushed via the Virtual Center API
• Upon connection/reconnection with Virtual Center, the VSM re-verifies that the correct port-profile configuration exists within Virtual Center
• Port-profile state and type must be set for propagation to occur:
    N1K-CP(config-port-prof)# state enabled
    N1K-CP(config-port-prof)# vmware port-group (optional name)


Policy-Based VM Connectivity
What can a policy do? Policy definition supports:
• VLAN and PVLAN settings
• ACL, Port Security, ACL redirect
• NetFlow collection
• Rate limiting
• QoS marking (CoS/DSCP)
• Remote port mirroring (ERSPAN)

[Diagram: VM #1-#4 on an ESX server attached to the Nexus 1000V DVS, with policies applied per VM; Virtual Center and the Nexus 1000V coordinate.]


VMware Administrator View
• Consistent workflow: administrators continue to select port groups when configuring a VM in the VMware Virtual Infrastructure Client



Mobility of Security and Network Properties
1. Virtual Center kicks off a VMotion (manual/DRS) and notifies the Nexus 1000V
2. During VM replication, the Nexus 1000V copies the VM port state to the new host
3. Once VMotion completes, the port on the new ESX host is brought up and the VM's MAC address is announced to the network (an ARP for VM1 is sent to the network, and flows to VM1's MAC are redirected to Server 2)

[Diagram: VM #1 moves from Server 1 to Server 2; both servers run the Nexus 1000V VEM under one Nexus 1000V DVS; the VSM sends the network update via Virtual Center.]


Nexus 1000V Deployment Scenarios
1. Works with all types of servers (rack-optimized, blade servers, etc.)
2. Works with any type of upstream switch (blade, top-of-rack, modular)
3. Works at any speed (1G or 10G)
4. The Nexus 1000V VSM can be deployed as a VM or as a physical appliance

[Diagram: rack-optimized servers and blade servers all running VEMs, managed by one Nexus 1000V VSM together with Virtual Center.]


Virtual Supervisor Options

[Diagram: three servers, each running a VEM with VMs #1-#12; the VSM runs either as a virtual appliance on ESX or on a physical appliance.]

• VSM virtual appliance
  - ESX virtual appliance
  - Special dependence on the CPVA server
  - Supports up to 64 VEMs
• VSM physical appliance
  - Cisco-branded x86 server
  - Runs multiple instances of the VSM virtual appliance
  - Each VSM is managed independently

Distributed Switching
• Each Virtual Ethernet Module behaves like an independent switch
  - No address learning/synchronization across VEMs
  - No concept of a crossbar/fabric between the VEMs
  - The Virtual Supervisor is NOT in the data path
  - No concept of forwarding from an ingress linecard to an egress linecard (another server)
  - No EtherChannel across VEMs

[Diagram: one Nexus 1000V VSM managing VEMs on three ESX hosts.]


Switching Interface Types
• Physical Ethernet ports
  - NIC cards on each server
  - Appear as an Eth interface on a specific module in NX-OS, e.g. Eth10/7
  - Static assignment as long as the module ID does not change
  - Up to 32 per host
• Virtual Ethernet ports
  - Virtual machine facing ports
  - Appear as Veth interfaces within NX-OS
  - Not assigned to a specific module, to simplify VMotion


Tracing Virtual Ethernet Ports

show interface vethernet
Vethernet2 is up
  Hardware is Virtual, address is 0050.5675.26c5
  Owner is VMware VMkernel, adapter is vmk0
  Active on module 8, host tc-esx05.cisco.com
  VMware DVS port 16777215
  Port-Profile is Vmotion
  Port mode is access
  Rx
    444385 Input Packets  444384 Unicast Packets
    0 Multicast Packets  1 Broadcast Packets
    572675241 Bytes
  Tx
    687655 Output Packets  687654 Unicast Packets
    0 Multicast Packets  1 Broadcast Packets  1 Flood Packets
    592295257 Bytes
  0 Input Packet Drops  0 Output Packet Drops


Manageability and Scalability Details
• RBAC
• Wireshark
• ERSPAN
• LLDP, CDP
• EEM
• Rollback
• Cisco Nexus 1000V Virtual Supervisor Module: virtual appliance in VMDK or ISO image; supports up to 64 VMware ESX or ESXi hosts
• Cisco Nexus 1000V Virtual Ethernet Module: maximum 256 ports


Server Virtualization Key Takeaways
What you should take away from this session:
• The ability to explain the key concepts of server virtualization and know the key players in the market
• The ability to explain to customers the key network design criteria that must be considered when deploying server virtualization
• The ability to recommend network and storage best practices associated with deploying server virtualization technologies


Meet the Expert
To make the most of your time at Cisco Networkers 2009, schedule a face-to-face meeting with a top Cisco expert. Designed to provide a "big picture" perspective as well as "in-depth" technology discussions, these face-to-face meetings will provide fascinating dialogue and a wealth of valuable insights and ideas.
Visit the Meeting Centre reception desk located in the Meeting Centre in the World of Solutions.


Whitepapers and Blogs
• http://www.cisco.com/go/nexus1000v/
• http://www.cisco.com/en/US/products/ps9670/prod_white_papers_list.html ("The Role of 10 Gigabit Ethernet in Virtualized Server Environments")
• http://blogs.vmware.com/networking/
• http://www.cisco.com/go/datacenter/ (look for "VMware Infrastructure 3 in a Cisco Network Environment")
• http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/vmware/vmware.html

