Network Design Guide
TECHNICAL WHITE PAPER
APRIL 2016
VMware Virtual SAN Network Design Guide
Contents
Intended Audience
Overview
Virtual SAN Network
Physical network infrastructure
Data center network
Oversubscription considerations
Host network adapter
Virtual network infrastructure
VMkernel network
Virtual Switch
NIC teaming
Multicast
Network I/O Control
Jumbo Frames
Switch Discovery Protocol
Network availability
Conclusion
About the Author
Appendix
Multicast configuration examples
References
Intended Audience
This document is targeted toward virtualization, network, and storage
architects interested in deploying VMware Virtual SAN solutions.
Overview
Virtual SAN is a hypervisor-converged, software-defined storage solution for
the software-defined data center. It is the first policy-driven storage product
designed for VMware vSphere environments that simplifies and streamlines
storage provisioning and management.
Virtual SAN is a distributed, shared storage solution that enables the rapid
provisioning of storage within VMware vCenter Server as part of virtual
machine creation and deployment operations. Virtual SAN uses the concept
of disk groups to pool together locally attached flash devices and magnetic
disks as management constructs. A disk group is composed of at least one
cache device and one or more magnetic or flash capacity devices. In hybrid
architectures, flash devices serve as a read cache and write buffer in front of
the magnetic disks to optimize virtual machine and application performance.
In all-flash architectures, the endurance of the cache device is leveraged so
that lower-cost capacity devices can be used.
The Virtual SAN datastore aggregates the disk groups across all hosts in the
Virtual SAN cluster to form a single shared datastore for all hosts in the
cluster.
Virtual SAN requires a correctly configured network for virtual machine I/O
as well as communication among cluster nodes. Because the majority of
virtual machine I/O traverses the network due to the distributed storage
architecture, a high-performance, highly available network configuration is
critical to a successful Virtual SAN deployment.
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents.
VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
Virtual SAN requires a dedicated VMkernel port type and uses a proprietary
transport protocol for Virtual SAN traffic between the hosts.
Oversubscription considerations
VMware Virtual SAN requires low latency and ample throughput between the
hosts, as reads may come from any host in the cluster and writes must be
acknowledged by two hosts. For simple configurations using modern,
wire-speed, top-of-rack switches, this is a relatively simple consideration, as
every port can communicate with every other port at wire speed. As clusters
are stretched across
data centers (perhaps using the Virtual SAN fault domains feature), the
potential for oversubscription becomes a concern. Typically, the largest
demand for throughput occurs during a host rebuild or host evacuation, as
potentially all hosts may send and receive traffic at wire speed to reduce the
duration of the operation. The larger the capacity consumed on each host,
the more important the oversubscription ratio becomes. A host with only
1Gbps of bandwidth and 12TB of capacity would take over 24 hours to refill
with data.
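As a sanity check, the refill time above follows from simple arithmetic. A minimal sketch, assuming decimal units (1 TB = 10^12 bytes, 1 Gbps = 10^9 bits/s) and that the network link is the only bottleneck:

```python
# Estimate how long refilling a host takes when the network link is the
# only bottleneck. Decimal units: 1 TB = 1e12 bytes, 1 Gbps = 1e9 bits/s.

def refill_hours(capacity_tb: float, link_gbps: float) -> float:
    bits_to_move = capacity_tb * 1e12 * 8
    seconds = bits_to_move / (link_gbps * 1e9)
    return seconds / 3600

# 12 TB over a single 1 Gbps link:
print(round(refill_hours(12, 1), 1))  # 26.7 hours, i.e. over a day
```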
Leaf-spine
In a traditional leaf-spine architecture, due to the full-mesh topology and
port density constraints, leaf switches are normally oversubscribed for
bandwidth. For example, a fully utilized 10GbE uplink used by the Virtual SAN
network may in reality achieve only 2.5Gbps of throughput per node when
the leaf switches are oversubscribed at a 4:1 ratio and Virtual SAN traffic
must cross the spine, as illustrated in Figure 2.
The impact of network topology on available bandwidth should be considered
when designing your Virtual SAN cluster.
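To make the 4:1 example concrete, the effective per-node bandwidth is simply the uplink speed divided by the oversubscription ratio, assuming all nodes contend for the spine simultaneously (a worst-case sketch, not a guaranteed figure):

```python
# Worst-case effective bandwidth per node when every node competes for
# an oversubscribed leaf uplink at the same time.

def effective_gbps(nic_gbps: float, oversubscription_ratio: float) -> float:
    return nic_gbps / oversubscription_ratio

print(effective_gbps(10, 4))  # 2.5 (Gbps per node at a 4:1 ratio)
```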
The leaf switches are fully meshed to the spine switches with links that can
be either switched or routed; these are referred to as Layer 2 and Layer 3
leaf-spine architectures, respectively. Virtual SAN over Layer 3 networks is
currently supported.
Here is an example of how overcommitment can impact rebuild times. Let us
assume the above design is used with 3 fault domains, and data is being
mirrored between cabinets. In this example each host has 10TB of raw
capacity, with 6TB of it used for virtual machines protected by FTT=1. We will
also assume 3/4ths (or 30Gbps) of the available bandwidth is available for
rebuild. Assuming no disk contention bottlenecks, it would take
approximately 26 minutes to rebuild over the oversubscribed link.
If the capacity needing to be rebuilt increased to 12TB of data, and the
bandwidth were reduced to only 10Gbps, then the rebuild would take at
minimum 156 minutes. Any time capacity increases, or bandwidth between
hosts decreases, rebuild times become longer.
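The rebuild estimates above can be reproduced with the same arithmetic. Using decimal units (1 TB = 10^12 bytes), the results land within a few minutes of the figures quoted; the small differences are attributable to rounding and unit conventions:

```python
# Minimum rebuild time: retransmit the affected capacity over the
# available inter-cabinet bandwidth, assuming no disk bottlenecks.
# Decimal units: 1 TB = 1e12 bytes, 1 Gbps = 1e9 bits/s.

def rebuild_minutes(data_tb: float, bandwidth_gbps: float) -> float:
    return data_tb * 1e12 * 8 / (bandwidth_gbps * 1e9) / 60

print(round(rebuild_minutes(6, 30)))   # ~27 minutes at 30 Gbps
print(round(rebuild_minutes(12, 10)))  # ~160 minutes at 10 Gbps
```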
Fabric extenders
It should be noted that fabric-extending devices such as the Cisco Nexus
2000 product line have unique considerations. These devices cannot switch
traffic directly between ports on the same device; all traffic must travel
through the uplink to the parent Nexus 5000 or 7000 series switch and back
down. While this increases port-to-port latency, the larger concern is that
high-throughput operations (such as a host rebuild) can put pressure on the
oversubscribed uplinks back to the parent switch.
Flow Control
Pause Frames are related to Ethernet flow control and are used to manage
the pacing of data transmission on a network segment. Sometimes, a sending
node (ESXi/ESX host, switch, etc.) may transmit data faster than another
node can accept it. In this case, the overwhelmed network node can send
pause frames back to the sender, pausing the transmission of traffic for a brief
period of time.
VMware Virtual SAN traffic, like other IP storage traffic, is not encrypted and
should be deployed on isolated networks. VLANs can be leveraged to
securely separate Virtual SAN traffic from virtual machine and other
networks. Security can also be added at a higher layer by encrypting data in
the guest in order to meet security and compliance requirements.
At least one physical NIC must be used for the Virtual SAN network. One or
more additional physical NICs are recommended to provide failover
capability. The physical NIC(s) can be shared with other vSphere networks
such as the virtual machine and vMotion networks. Logical Layer 2
separation of Virtual SAN VMkernel traffic (VLANs) is recommended when
physical NIC(s) are shared among traffic types. QoS can be provided for
traffic types via Network IO Control (NIOC).
Unlike multiple-NIC vMotion, Virtual SAN does not support multiple VMkernel
adapters on the same subnet.
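As an illustration, tagging a VMkernel interface for Virtual SAN traffic can also be done from the ESXi command line. This is a sketch for ESXi 6.x esxcli; the interface name vmk2, port group name VSAN, and addresses are placeholder assumptions for your environment:

```shell
# Create a VMkernel interface on an existing port group (placeholder names),
# give it a static address, and tag it for Virtual SAN traffic.
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=VSAN
esxcli network ip interface ipv4 set -i vmk2 -I 192.168.100.11 -N 255.255.255.0 -t static
esxcli vsan network ip add -i vmk2
# Verify which VMkernel interfaces carry Virtual SAN traffic:
esxcli vsan network list
```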
Virtual Switch
VMware Virtual SAN supports both VSS and VDS virtual switches. It should be
noted that VDS licensing is included with VMware Virtual SAN, so licensing
should not be a consideration when choosing a virtual switch type. Because
VDS is required for dynamic LACP (Link Aggregation Control Protocol), LBT
(Load Based Teaming), LLDP (Link Layer Discovery Protocol), bi-directional
CDP (Cisco Discovery Protocol), and Network IO Control (NIOC), VDS is
preferred for its superior performance, operational visibility, and
management capabilities.
VMware recommends: Deploying VDS for use with VMware Virtual SAN.
NIC teaming
The Virtual SAN network can use a teaming and failover policy to determine
how traffic is distributed between physical adapters and how to reroute
traffic in the event of adapter failure. When the team is dedicated to Virtual
SAN, NIC teaming is used mainly for high availability rather than load
balancing. However, additional vSphere traffic types sharing the same team
can still leverage the aggregated bandwidth by distributing different types of
traffic to different adapters within the team. Virtual SAN supports all NIC
teaming options supported by VSS and VDS.
Route based on physical NIC load, also known as Load Based Teaming (LBT),
allows vSphere to balance the load on multiple NICs without a custom switch
configuration. It begins balancing similarly to Route based on originating
virtual port ID, but dynamically reassesses physical-to-virtual NIC bindings
every 30 seconds based on congestion thresholds. To avoid impact during
port state changes, settings such as Cisco's PortFast or HP's admin-edge-port
should be configured on the physical switch ports facing ESXi hosts. With
this setting, network convergence on these switch ports happens quickly
after a failure because the port enters the Spanning Tree forwarding state
immediately, bypassing the listening and learning states. Additional
information on the different teaming policies can be found in the vSphere
networking documentation.
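For reference, the edge-port setting described above looks like the following on a Cisco IOS-style switch. The interface name is a placeholder, and ESXi uplinks carrying VLAN trunks need the trunk variant; consult your switch documentation:

```
interface TenGigabitEthernet1/0/10
 description ESXi host uplink
 spanning-tree portfast trunk
```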
IP Hash Policy
Another teaming option is the IP hash-based policy. Under this policy, Virtual
SAN, either alone or together with other vSphere workloads, is capable of
balancing load between adapters within a team, although there is no
guarantee of performance improvement for all configurations. While Virtual
SAN does initiate multiple connections, there is no deterministic balancing of
traffic. This policy requires the physical switch ports to be configured for a
link aggregation technology or port-channel architecture such as Link
Aggregation Control Protocol (LACP) or EtherChannel. Only static mode
EtherChannel is supported with the vSphere Standard Switch; LACP is
supported only with the vSphere Distributed Switch.
VMware recommends: Use Load Based Teaming for load balancing, and
ensure that appropriate spanning tree port configurations are taken into
account.
Multicast
IP multicast sends source packets to multiple receivers as a group
transmission. Packets are replicated in the network only at the points of path
divergence, normally switches or routers, resulting in the most efficient
delivery of data to a number of destinations with minimum network
bandwidth consumption. For examples of multicast configuration, please see
the Layer 2/Layer 3 network topologies white paper.
Virtual SAN uses multicast to deliver metadata traffic among cluster nodes for
efficiency and bandwidth conservation. Multicast is required for VMkernel
ports utilized by Virtual SAN. While Layer 3 is supported, Layer 2 is
recommended to reduce complexity. All VMkernel ports on the Virtual SAN
network subscribe to a multicast group using Internet Group Management
Protocol (IGMP). IGMP snooping configured with an IGMP snooping querier
can be used to limit the physical switch ports participating in the multicast
group to only Virtual SAN VMkernel port uplinks. The need to configure an
IGMP snooping querier to support IGMP snooping varies by switch vendor.
Consult your specific switch vendor/model best practices for IGMP snooping
configuration. If deploying a Virtual SAN cluster across multiple subnets, be
sure to review best practices and limitations in scaling Protocol Independent
Multicast (PIM) dense or sparse mode.
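As an illustration, enabling IGMP snooping together with a snooping querier looks like the following on a Cisco IOS-style switch. Syntax varies by vendor and model, so treat this as a sketch and consult your switch documentation:

```
! Enable IGMP snooping globally and act as a snooping querier so that
! group membership is tracked even without a multicast router present.
ip igmp snooping
ip igmp snooping querier
```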
VMware recommends: isolating each Virtual SAN cluster's traffic to its own
VLAN when using multiple clusters.
Network I/O Control
- Always assign a reasonably high relative share to the Fault Tolerance
resource pool, because FT is a very latency-sensitive traffic type.
- Leverage the VDS Port Group and Traffic Shaping Policy features for
additional bandwidth control on different resource pools.
- Set a relative share for the Virtual SAN resource pool based on
application storage performance requirements, also holistically taking
into account other workloads such as bursty vMotion traffic that is
required for business mobility and availability.
- Avoid reservations, as reserved but unused bandwidth is shared only
with other system traffic types (vMotion, storage, etc.) and not with
virtual machine networking.
Jumbo Frames
Virtual SAN supports jumbo frames but does not require them. VMware
testing finds that using jumbo frames can reduce CPU utilization and
improve throughput; however, both gains are minimal because vSphere
already uses TCP Segmentation Offload (TSO) and Large Receive Offload
(LRO) to deliver similar benefits.
In data centers where jumbo frames are already enabled in the network
infrastructure, jumbo frames are recommended for Virtual SAN deployment. If
jumbo frames are not currently in use, Virtual SAN alone should not be the
justification for deploying Jumbo Frames.
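Where jumbo frames are adopted, the MTU must be raised end to end: on the physical switches, the virtual switch, and the Virtual SAN VMkernel interface. A sketch using esxcli, with placeholder vSwitch and interface names:

```shell
# Raise the MTU on the standard virtual switch and on the Virtual SAN
# VMkernel interface; physical switches in the path must match.
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk2 -m 9000
```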
Switch Discovery Protocol
vSphere supports Cisco Discovery Protocol (CDP) and Link Layer Discovery
Protocol (LLDP). CDP is available for vSphere Standard Switches and vSphere
Distributed Switches connected to Cisco physical switches; LLDP is available
only for vSphere Distributed Switches.
VMware recommends: enable LLDP or CDP in both send and receive mode.
Network availability
For high availability, the Virtual SAN network should have redundancy in
both physical and virtual network paths and components to avoid single
points of failure. The architecture should configure all port groups or
distributed port groups with at least two uplink paths using different NICs
configured with NIC teaming, set a failover policy specifying the appropriate
active-active or active-standby mode, and connect each NIC to a different
physical switch for an additional level of redundancy.
VMware recommends: redundant uplinks for Virtual SAN and all other traffic.
Conclusion
Virtual SAN network design should be approached in a holistic fashion,
taking into account the other traffic types in the vSphere cluster in addition
to the Virtual SAN network. Other factors to consider include the physical
network topology and the oversubscription posture of your physical switch
infrastructure.
Virtual SAN requires at minimum a 1GbE network for hybrid clusters and a
10GbE network for all-flash clusters. As a best practice, VMware strongly
recommends a 10GbE network for Virtual SAN to avoid the possibility of
network congestion leading to degraded performance. A 1GbE network can
easily be saturated by Virtual SAN traffic, and teaming multiple NICs
provides availability benefits in only limited cases. If a 1GbE network is used,
VMware recommends that it be reserved for smaller clusters and be
dedicated to Virtual SAN traffic.
Appendix
Multicast configuration examples
Example: disabling IGMP snooping on the Virtual SAN VLAN so that multicast
traffic is flooded within the VLAN (syntax varies by switch vendor and model):
Switch# configure
Switch(config)# VLAN 500
Switch(config vlan 500)# multicast disable igmp snoop
Switch(config vlan 500)# do write memory
Brocade VDX Guide (See guide for Virtual SAN VDX configuration)
References
1. Virtual SAN Product Page
http://www.vmware.com/products/virtual-san/
2. VMware Virtual SAN Hardware Guidance,
http://www.vmware.com/files/pdf/products/vsan/VMware-TMD-Virtual-SAN-
Hardware-Guidance.pdf
3. VMware NSX Network Virtualization Design Guide,
http://www.vmware.com/files/pdf/products/nsx/vmw-nsx-network-
virtualization-design-guide.pdf
4. VMware Network Virtualization Design Guide,
http://www.vmware.com/files/pdf/techpaper/Virtual-Network-Design-Guide.pdf
5. Understanding IP Hash Load Balancing,
VMware Knowledge Base Article 2006129
6. Sample configuration of EtherChannel / Link Aggregation Control Protocol
(LACP) with ESXi/ESX and Cisco/HP switches,
VMware Knowledge Base Article 1004048
7. Changing the multicast address used for a VMware Virtual SAN Cluster, VMware
Knowledge Base Article 2075451
8. Understanding TCP Segmentation Offload (TSO) and Large Receive Offload
(LRO) in a VMware environment,
VMware Knowledge Base Article 2055140
9. IP Multicast Technology Overview,
http://www.cisco.com/c/en/us/td/docs/ios/solutions_docs/ip_multicast/White_
papers/mcst_ovr.pdf
10. Essential Virtual SAN: Administrator's Guide to VMware Virtual SAN, by
Cormac Hogan and Duncan Epping
11. VMware Network I/O Control: Architecture, Performance and Best Practices,
http://www.vmware.com/files/pdf/techpaper/VMW_Netioc_BestPractices.pdf