Server Virtualization
with LAN and Storage
Network Implications
& Best Practices
BRKDCT-2868
Bjørn R. Martinussen
Cisco Public
Housekeeping
We value your feedback: don't forget to complete your online session evaluations after each session, and complete the Overall Conference Evaluation, which will be available online from Thursday
Visit the World of Solutions
Please remember this is a 'non-smoking' venue!
Please switch off your mobile phones
Please make use of the recycling bins provided
Please remember to wear your badge at all times, including the Party
Session Objectives
At the end of the session, participants should be able to:
Objective 1: Understand key concepts of server virtualization architectures as they relate to the network
Objective 2: Explain the impact of server virtualization on DC network design (Ethernet & Fibre Channel)
Objective 3: Design Cisco DC networks to support server virtualization environments
Server Virtualization
Virtualization
[Figure: hypervisor architectures compared. VMware: apps and guest OSes run on a bare-metal hypervisor. Microsoft: a hosted model, with the hypervisor layered on a host OS. Xen (aka paravirtualization): a modified, stripped-down OS with hypervisor, running modified guest OSes. In every model the hypervisor shares the physical CPUs among the VMs.]
[Figure: ESX Server architecture: virtual machines (app + OS) run on the VM virtualization layer above the physical hardware (CPU, memory); the host attaches to separate Production, VM Kernel, and Console networks.]
VMware HA Clustering
[Figure: VMware HA clustering: VMs App1-App5 with their guest OSes run across ESX Hosts 1-3, each with its own hypervisor, CPU, and memory; when a host fails, HA restarts its VMs on the surviving hosts.]
Application-level HA clustering
(Provided by MSCS, Veritas, etc.)
[Figure: application-level clustering runs inside the VMs: application instances on guest OSes across ESX Hosts 1-3 are clustered by MSCS/Veritas, independently of VMware HA.]
HA + DRS
HA takes care of powering on VMs on the available ESX hosts in the least possible time (a regular restart, not VMotion based)
DRS takes care of migrating the VMs over time to the most appropriate ESX host, based on resource allocation (VMotion migration)
Agenda
VMware LAN Networking
vSwitch Basics
NIC Teaming
vSwitch vs LAN Switch
Cisco/VMware DC DESIGNS
[Figure: VMs VM_LUN_0005 and VM_LUN_0007 attach their vNICs to virtual ports on vSwitch0, which uplinks through physical adapters vmnic0 and vmnic1.]
/vmfs/volumes/46b9d79a2de6e23e-929d001b78bb5a2c/VM_LUN_0005/VM_LUN_0005.vmx

VirtualCenter-generated MAC address:
ethernet0.addressType = "vpx"
ethernet0.generatedAddress = "00:50:56:b0:5f:24"

Statically assigned MAC address:
ethernet0.addressType = "static"
ethernet0.address = "00:50:56:00:00:06"
VM Port-Group vSwitch
External Switch Tagging (EST)
[Figure: Service Console, VM2, and VMkernel connect through virtual NICs to vSwitch A and vSwitch B; each physical NIC carries a single untagged VLAN (VLAN 100, VLAN 200) from the physical switches.]
No ESX configuration required, as the server is not tagging
The number of VLANs supported is limited to the number of physical NICs in the server
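In EST mode the tagging burden sits entirely on the physical switch. A minimal Catalyst access-port sketch (port and VLAN numbers are illustrative assumptions):

```
! One access VLAN per ESX physical NIC - the external switch does the tagging
interface GigabitEthernet1/0/1
 switchport
 switchport mode access
 switchport access vlan 100
 spanning-tree portfast
```

The matching NIC on the ESX host then carries only untagged VLAN 100 traffic.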
Virtual Switch Tagging (VST)
[Figure: Service Console, VM2, and VMkernel share vSwitch A; the physical NICs carry an 802.1Q trunk (VLANs 100 and 200) from the physical switches, and the vSwitch tags/untags per port group.]
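With VST, the VLAN is assigned per port group on the vSwitch. A minimal sketch using the ESX 3.x service-console CLI (port-group name and VLAN ID are illustrative assumptions):

```
# Create a port group on vSwitch0 and tag it with VLAN 100
esxcfg-vswitch -A "VLAN100-PG" vSwitch0
esxcfg-vswitch -v 100 -p "VLAN100-PG" vSwitch0
```

The physical switch ports must be configured as 802.1Q trunks carrying the port-group VLANs.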
Virtual Guest Tagging (VGT)
[Figure: the 802.1Q trunk (VLANs 100 and 200) extends through vSwitch A into the VM; the guest OS applies and strips the dot1q tags itself (Windows: E1000 driver; Linux: dot1q module).]
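VGT is enabled by giving the port group VLAN ID 4095, which passes tags through to the guest untouched. A sketch using the ESX 3.x service-console CLI (the port-group name is an illustrative assumption):

```
# VLAN 4095 = pass all tags through to the guest (VGT mode)
esxcfg-vswitch -A "VGT-AllVLANs" vSwitch0
esxcfg-vswitch -v 4095 -p "VGT-AllVLANs" vSwitch0
```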
Agenda
VMware LAN Networking
vSwitch Basics
NIC Teaming
vSwitch vs LAN Switch
Cisco/VMware DC DESIGNS
NIC Teaming
[Figure: vNICs from several VMs share teams of physical adapters (vmnic0, vmnic2, vmnic3); teaming applies to the vSwitch uplinks, not to the vNICs.]
Teaming is configured at the vmnic level
Design Example
2 NICs, VLAN 1 and 2, Active/Standby
[Figure: vSwitch0 on the ESX Server carries Port-Group 1 (VLAN 2) and Port-Group 2 (VLAN 1); VM1, VM2, and the Service Console attach to these port groups, and both uplinks vmnic0 and vmnic1 are 802.1Q trunks carrying VLANs 1 and 2.]
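The two port groups of this example can be sketched with the ESX 3.x service-console CLI (assuming vSwitch0 already has vmnic0/vmnic1 as uplinks):

```
# Port-Group 1 carries VLAN 2, Port-Group 2 carries VLAN 1
esxcfg-vswitch -A "Port-Group 1" vSwitch0
esxcfg-vswitch -v 2 -p "Port-Group 1" vSwitch0
esxcfg-vswitch -A "Port-Group 2" vSwitch0
esxcfg-vswitch -v 1 -p "Port-Group 2" vSwitch0
```

The active/standby vmnic order is then set per port group in the VI Client NIC-teaming policy.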
Active/Standby per-Port-Group
[Figure: vSwitch0 uplinks VMNIC0 and VMNIC1 connect to CBS-left and CBS-right; Port-Group1 and Port-Group2 use opposite active/standby uplink orders, so VM4-VM7 (.4-.7) on the ESX Server are split across both physical switches in normal conditions.]
Active/Active
[Figure: vSwitch with a single Port-Group and VM1-VM5; both uplinks vmnic0 and vmnic1 are active, and each VM's traffic is pinned to one uplink by the default virtual-port-ID teaming policy.]
Active/Active
IP-Based Load Balancing
Works with channel-group mode ON (static EtherChannel)
LACP is not supported; with an LACP channel the switch suspends the ports:
9w0d: %EC-5-L3DONTBNDL2: Gi1/0/14 suspended: LACP currently not enabled on the remote port.
9w0d: %EC-5-L3DONTBNDL2: Gi1/0/13 suspended: LACP currently not enabled on the remote port.
[Figure: ESX server with vmnic0/vmnic1 port-channeled to the upstream switch; VM1-VM4 share one Port-Group on the vSwitch.]
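The switch side of IP-hash teaming is a static EtherChannel. A sketch for the ports in the log above (IOS; the channel number and load-balance method are illustrative assumptions):

```
! Static channel: 'mode on', never LACP ('active'/'passive'), for ESX IP-hash teaming
port-channel load-balance src-dst-ip
!
interface range GigabitEthernet1/0/13 - 14
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode on
```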
Agenda
VMware LAN Networking
vSwitch Basics
NIC Teaming
vSwitch vs LAN Switch
Cisco/VMware DC DESIGNS
[Figure: vSwitch1 on the ESX Server with four uplinks (NIC1-NIC4) toward CBS-left; Port-Group1 and Port-Group2 carry VM4-VM7 (.4-.7).]
Broadcast/Multicast/Unknown Unicast
Forwarding in Active/Active (1)
[Figure: both uplinks vmnic0 and vmnic1 trunk VLANs 1 and 2; a flooded frame (broadcast/multicast/unknown unicast) arriving on one uplink is delivered by vSwitch0 to the local VMs (VM1, VM2 in Port-Group 1, VLAN 2) but is never forwarded back out the other uplink.]
Broadcast/Multicast/Unknown Unicast
Forwarding in Active/Active (2)
[Figure: the same flooding scenario seen from the physical network: both uplinks NIC1 and NIC2 of the ESX Host receive flooded frames on the trunked VLANs 1 and 2; the vSwitch keeps VM1-VM3 from receiving duplicate copies.]
E.g. HSRP?
[Figure: vSwitch with uplinks NIC1/NIC2 and VM1/VM2; e.g., HSRP hellos from the upstream switches reach the uplinks but are not bridged between them by the vSwitch.]
[Figure: ESX server1 dual-homed with 802.1Q trunks on VMNIC1 and VMNIC2 toward the physical switches (Catalyst2 shown); VM5 (.5) and VM7 (.7) attach to the vSwitch.]
vSwitch Security
Promiscuous Mode set to Reject prevents a port from capturing traffic whose destination address is not the VM's address
MAC Address Changes set to Reject prevents the VM from modifying its vNIC address
Forged Transmits set to Reject prevents the VM from sending out traffic with a different source MAC (e.g. NLB)
Agenda
VMware LAN Networking
vSwitch Basics
NIC Teaming
vSwitch vs LAN Switch
Cisco/VMware DC DESIGNS
Q: Should I use Rolling Failover (i.e. no preemption)?
A: No, the default is good; just enable trunkfast on the Cisco switch:

interface GigabitEthernetX/X
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
 spanning-tree portfast trunk
 no cdp enable
!
Trunks
[Figure: ESX Server vSwitch 0 with Port-Groups 1-3 serving VM1/VM2, the Service Console, and the VM Kernel (VST mode); uplinks VMNIC1/VMNIC2 are 802.1Q trunks teamed globally Active/Active, with per-port-group Active/Standby overrides (vmnic1/vmnic2 and vmnic2/vmnic1); HBA1/HBA2 attach to storage.]
Trunks
[Figure: same topology, but with a global Active/Standby vmnic1/vmnic2 teaming policy and per-port-group Active/Standby vmnic2/vmnic1 overrides.]
[Figure: Catalyst1 (root) and Catalyst2 carry 802.1Q trunks (Production, SC, VMKernel VLANs) on links 1 and 2 down to VMNIC1/VMNIC2 of the vSwitches on ESX server1 and ESX server 2; trunkfast and BPDU guard on the server ports; this topology has no blocked port and no loop.]
[Figure: typical spanning-tree V-shape topology running Rapid PVST+ (root and secondary root on the Catalyst pair); 802.1Q trunks (Production, SC, VMKernel) on links 1 and 2 down to VMNIC1/VMNIC2 of ESX server1; trunkfast and BPDU guard on the server ports.]
2009 Cisco Systems, Inc. All rights reserved.
[Figure: Catalyst1 (root) and Catalyst2 running Rapid PVST+; ESX server1 and ESX server 2 split their uplinks, carrying SC and VMKernel on one 802.1Q trunk and Production on another (links 4, 5, 6); trunkfast and BPDU guard on the server ports; no blocked port, no loop.]
[Figure: typical spanning-tree V-shape topology with Rapid PVST+ (root and secondary root on Catalyst1/Catalyst2); ESX server1 and ESX server 2 split Production from SC and VMKernel across separate 802.1Q trunks (links 4 and 6); trunkfast and BPDU guard on the server ports.]
How About?
[Figure: same V-shape Rapid PVST+ topology, but one ESX server carries Production, SC, and VMKernel on a single 802.1Q trunk while the other splits Production from SC and VMKernel (links 1, 4, 6, 8); trunkfast and BPDU guard on the server ports.]
[Figure: ESX server1 and ESX server 2 with separate 802.1Q uplinks for Production (links 4 and 7) and for SC plus VMKernel.]
VMotion L2 Design
[Figure: ESX Host 1 (Rack1) and ESX Host 2 (Rack10), each with vSwitch0 (vmkernel), vSwitch1, and vSwitch2 (Service Console, VM4-VM6) over uplinks vmnic0-vmnic3; the VMkernel VLAN must extend at Layer 2 between Rack1 and Rack10 for VMotion to work.]
HA clustering (1)
EMC/Legato AAM based
Caveat: losing Production VLAN connectivity only ISOLATES the VMs (there's no equivalent of uplink tracking on the vSwitch)
Solution: NIC teaming
HA clustering (2)
[Figure: each ESX host attaches to the iSCSI access/VMkernel network (10.0.200.0), the Service Console network (COS, 10.0.2.0), and the Production network (10.0.100.0); VM1 and VM2 run on each host.]
Agenda
VMware LAN Networking
VMware SAN Designs
Storage Fundamentals
Storage Protocols
VMFS
VMFS is a high-performance cluster file system for virtual machines
Stores the entire virtual machine state in a central location
Enables virtualization-based distributed infrastructure services such as VMotion, DRS, and HA
[Figure: four ESX Servers share VMFS volumes holding the virtual machines' files (e.g. A.vmdk).]
[Figure: ESX Server accessing physical disks on a storage array.]
Choices in Protocol
FC, iSCSI or NAS?
Best practice is to leverage the existing infrastructure and not introduce too many changes all at once
Virtual environments can leverage all types; you can choose what fits best and even mix them
Common industry perceptions and trade-offs still apply in the virtual world
What works well for one does not work for all
FC SAN Considerations
Leverage multiple paths for high availability
Manually distribute I/O-intensive VMs on separate paths
Block access provides optimal performance for large, high-transactional-throughput workloads
Considered the industrial-strength backbone for most large enterprise environments
Requires expertise in the storage management team
Expensive price-per-port connectivity
Increasing to 10 Gb throughput (soon)
iSCSI Considerations
Uses standard IP network infrastructure
Best practices:
Have a dedicated LAN/VLAN to isolate iSCSI from other network traffic
Use GbE or faster network
Use multiple NICs or iSCSI HBAs
Use an iSCSI HBA for performance environments
Use the SW initiator for cost-sensitive environments
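A software-initiator setup can be sketched from the ESX 3.x service console (all addresses and the vmhba name are illustrative assumptions; the "iSCSI" VMkernel port group must already exist on a dedicated VLAN):

```
# Enable the SW iSCSI initiator and point it at the target portal
esxcfg-swiscsi -e
esxcfg-vmknic -a -i 10.0.200.11 -n 255.255.255.0 "iSCSI"
vmkiscsi-tool -D -a 10.0.200.100 vmhba40   # add SendTargets discovery address
```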
NFS Considerations
Has more protocol overhead but less file-system overhead than VMFS, as the NAS file system lives on the NAS head
Simple to define in ESX by providing:
The NFS server hostname or IP
The NFS share
The ESX local datastore name
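Those three inputs map directly onto the ESX 3.x esxcfg-nas command. A sketch (hostname, export path, and datastore label are illustrative assumptions):

```
# Mount an NFS export as an ESX datastore: -o host, -s share, then the local label
esxcfg-nas -a -o nas01.example.com -s /vol/vmware_ds1 NFS_DS1
esxcfg-nas -l    # list NAS mounts to verify
```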
Protocol support comparison:

                     FC SAN   iSCSI SAN   iSCSI SAN   NFS
                              (HW init)   (SW init)
VMotion, VCB         Yes      Yes         Yes         Yes
DRS & HA             Yes      Soon        Soon        No
ESX boot from disk   Yes      Yes         No          No
iSCSI vs. FC
What is the trade-off?
File-system vs. LUN snapshots (host- or array-based vs. VMware VMFS snapshots): which to pick?
Scalability (factors to consider): number of hosts, dynamic adding of capacity, practical vs. physical limits
Dedicated per-server storage: poor utilization, islands of allocation, more management
Consolidated storage: increased utilization, easier provisioning, less management
Storage: Type of Access
RAW vs. VMFS
What Is iSCSI?
A SCSI transport protocol, enabling access to storage
devices over standard TCP/IP networks
Maps SCSI block-oriented storage over TCP/IP
Similar to mapping SCSI over Fibre Channel
[Figure: each VM sees a plain SCSI disk regardless of the physical transport behind it: FC, iSCSI/NFS, or DAS.]
[Figure: virtual servers on a hypervisor share the physical HBA's pWWN-P; zoning in the FC name server and LUN mapping/masking on the storage array can only operate on that one physical pWWN.]
[Figure: multiple FC logins multiplexed through an NP_Port uplink to an F_Port on the upstream switch.]
[Figure: VM1 and VM2 access FC storage either through an RDM mapping file (raw device mapping to a LUN) or through virtual disks on a VMFS volume.]
Storage Multi-Pathing
No storage load balancing, strictly failover
Two modes of operation dictate behavior: Fixed and Most Recently Used (MRU)
[Figure: a VM's storage path through two FC HBAs; in Fixed mode the preferred path is restored after recovery.]
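Path state and the active policy can be inspected from the service console. A sketch, assuming ESX 3.x:

```
# List LUNs with their paths and the multipathing policy (Fixed or MRU)
esxcfg-mpath -l
```

The policy per LUN is normally chosen in the VI Client to match the array type.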
Q and A
Recommended Reading
Agenda
VMware LAN Networking
VMware SAN Designs
VMware Virtual Networking
Server Virtualization
Partial show module output on the Nexus 1000V, showing the redundant supervisors:

Model                Status
------------------   ----------
Cisco Nexus 1000V    active *
Cisco Nexus 1000V    standby
                     ok
                     ok
show cdp neighbors output on the upstream switch, seeing the Nexus 1000V uplinks:

Device ID     Local Intrfce   Holdtme   Capability   Platform      Port ID
N1KV-Rack10   Gig 1/5         136                    Nexus 1000V   Eth2/2
N1KV-Rack10   Gig 1/10        136                    Nexus 1000V   Eth3/5
N1KV-Rack10   Gig 1/12        136                    Nexus 1000V   Eth21/2
[Figure: Nexus 1000V distributed virtual switch: a redundant VSM pair registers with Virtual Center, and a Nexus 1000V VEM is installed in each VMware ESX host (steps 1-4 show the VSM / Virtual Center / VEM relationship).]
[Figure: a Nexus 1000V VEM on a VMware ESX server, managed via Virtual Center, applies per-VM network policy to VM #1-#4:]
NetFlow collection
Rate limiting
QoS marking (CoS/DSCP)
Remote port mirroring (ERSPAN)
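The ERSPAN capability, for example, is configured like a SPAN session on a physical Nexus. A sketch assuming the Nexus 1000V CLI (session number, vethernet interface, and destination IP are illustrative):

```
! Mirror a VM's vethernet port to a remote analyzer over IP
monitor session 1 type erspan-source
  source interface vethernet 3 both
  destination ip 10.54.54.54
  erspan-id 999
  no shut
```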
[Figure: VMotion of VM #3 between two Nexus 1000V VEMs: VM #1-#8 are spread across VEMs on two VMware ESX hosts (Server 1 and Server 2), coordinated by the VSM and Virtual Center.]
Once VMotion completes, the port on the new ESX host is brought up and the VM's MAC address is announced to the network (network update via the Nexus 1000V VSM).
[Figure: Nexus 1000V with blade servers: the VSM and Virtual Center manage VEMs running on the blades.]
[Figure: one VSM (with standby VSMs) controlling VEMs on three VMware ESX servers hosting VM #2-#12.]
Supports up to 64 VEMs
Distributed Switching
Each Virtual Ethernet Module (VEM) behaves like an independent switch:
No address learning/synchronization across VEMs
No concept of a crossbar/fabric between the VEMs
The Virtual Supervisor is NOT in the data path
No concept of forwarding from an ingress linecard to an egress linecard (another server)
No EtherChannel across VEMs
[Figure: a Nexus 1000V VSM managing three VEMs, one per VMware ESX host.]
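Policy on the Nexus 1000V is expressed as port profiles, which appear in Virtual Center as port groups. A sketch (profile name and VLAN are illustrative assumptions):

```
! A vethernet port profile, exported to VMware as a port group
port-profile type vethernet WebVMs
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```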
Wireshark
ERSPAN
LLDP, CDP
EEM
Rollback