
Storage Scalability

Module 7

© 2015 VMware Inc. All rights reserved.


You Are Here

1. Course Introduction
2. vSphere Security
3. VMware Management Resources
4. Performance in a Virtualized Environment
5. Network Scalability
6. Network Optimization
7. Storage Scalability
8. Storage Optimization
9. CPU Optimization
10. Memory Optimization
11. Virtual Machine and Cluster Optimization
12. Host and Management Scalability

Importance
As the enterprise grows, new scalability features in VMware vSphere
enable the infrastructure to handle the growth efficiently.
Datastore growth and balancing issues can be addressed automatically
with VMware vSphere Storage DRS.

Module Lessons
Lesson 1: Storage APIs and Virtual Machine Storage Policies
Lesson 2: vSphere Storage I/O Control
Lesson 3: Datastore Clusters and vSphere Storage DRS
Lesson 4: Enhanced Storage Configurations

Lesson 1:
Storage APIs and Virtual Machine
Storage Policies

Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
Describe VMware vSphere Storage APIs - Array Integration
Describe VMware vSphere API for Storage Awareness
Configure and use virtual machine storage policies

About VMFS5

VMware vSphere VMFS5 provides several improvements in scalability and performance over VMFS3:
The datastore and a single extent can be greater than 2 TB:
The maximum datastore size is 64 TB.
The maximum virtual disk size is 62 TB.
1 MB file system block size, which supports files up to 62 TB in size:
The file system subblock size is 8 KB.
Efficient storage of small files:
Data of small files (less than or equal to 1 KB) is stored directly in the file descriptor.
Support for the GUID Partition Table (GPT) format
Raw device mappings have the following maximum sizes:
Physical compatibility mode: 64 TB
Virtual compatibility mode: 62 TB
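For a quick check of these properties on an existing datastore, you can query the volume from the ESXi Shell. A minimal sketch, assuming a datastore mounted at /vmfs/volumes/datastore1 (an example path):

vmkfstools -Ph /vmfs/volumes/datastore1

The output reports the VMFS version, the file system block size, and the capacity and free space of the volume.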

SCSI Reservations
VMFS is a clustered file system and uses SCSI reservations as part of its
distributed locking algorithms. A SCSI reservation causes a LUN to be
used exclusively by a single host for a brief period. SCSI reservations
are used by a VMFS instance to lock the file system while the VMFS
metadata is updated.
Operations that result in metadata updates require SCSI reservations:
Creating or deleting a virtual disk
Increasing the size of a VMFS volume
Creating or deleting snapshots
Increasing the size of a VMDK file

To minimize the effects on virtual machine performance, postpone major maintenance and configuration until off-peak hours.
If the array supports vSphere Storage APIs - Array Integration and hardware-assisted locking, SCSI reservations are not necessary.
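One way to see whether a VMFS5 volume is relying on hardware-assisted locking instead of SCSI reservations is to query its locking mode from the ESXi Shell. This is a sketch only; the -v verbosity flag and the exact wording of the output can vary by ESXi release (the path is an example):

vmkfstools -Ph -v1 /vmfs/volumes/datastore1

A volume using hardware-assisted locking typically reports a mode such as public ATS-only, while a volume that still falls back to SCSI reservations reports public.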

Virtual Disk Types

The type of virtual disk used by a virtual machine affects I/O performance.
Eager-zeroed thick:
Description: Space is allocated and zeroed out at the time of creation. vSphere Storage APIs - Array Integration can offload the zeroing to the array.
How to create: vSphere Web Client or vmkfstools
Performance impact: Extended creation time, but best performance from the first write operation
Use case: Quorum drive in an MSCS cluster; Fault Tolerance

Lazy-zeroed thick:
Description: Space is allocated at the time of creation, but zeroed on first write (default).
How to create: vSphere Web Client or vmkfstools
Performance impact: Shorter creation time, but reduced performance on the first write to a block
Use case: Good for most cases

Thin:
Description: Space is allocated and zeroed on demand.
How to create: vSphere Web Client or vmkfstools
Performance impact: Shorter creation time, but reduced performance on the first write to a block
Use case: Disk space utilization is the main concern
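Each of these disk types can also be created from the command line with vmkfstools. A brief sketch from the ESXi Shell; the size and paths are examples:

vmkfstools -c 20G -d thin /vmfs/volumes/datastore1/vm1/vm1_thin.vmdk
vmkfstools -c 20G -d zeroedthick /vmfs/volumes/datastore1/vm1/vm1_lazy.vmdk
vmkfstools -c 20G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1_eager.vmdk

The -d option selects the provisioning format: thin, zeroedthick (lazy-zeroed thick, the default), or eagerzeroedthick. On arrays that support vSphere Storage APIs - Array Integration, eager-zeroed creation is faster because the zeroing is offloaded to the array.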

About vSphere Storage APIs - Array Integration
vSphere Storage APIs - Array Integration helps storage vendors provide
hardware assistance to accelerate vSphere I/O operations that are more
efficiently accomplished in the storage hardware.
vSphere Storage APIs - Array Integration is a set of protocol interfaces
and VMkernel APIs between VMware ESXi and storage arrays:
Hardware Acceleration APIs allow arrays to integrate with vSphere to
transparently offload certain storage operations to the array.
Array Thin Provisioning APIs help prevent out-of-space conditions and perform
space reclamation.

Hardware Acceleration APIs
The Hardware Acceleration APIs enable the ESXi host to offload specific
virtual machine and storage management operations to storage
hardware:
Use of these APIs significantly reduces the CPU overhead on the host.

Hardware acceleration is supported by block storage devices (Fibre Channel and iSCSI) and by NAS devices.
Hardware acceleration for block storage devices supports the following array operations:
Full copy: Also called clone blocks or copy offload
Block zeroing: Also called write same
Hardware-assisted locking: Also called atomic test and set (ATS)
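You can check which of these primitives a given device supports from the ESXi Shell. A minimal check, assuming a device identifier of naa.xxxxxxxx (a placeholder; list real identifiers with esxcli storage core device list):

esxcli storage core device vaai status get -d naa.xxxxxxxx

The output shows the ATS, Clone, Zero, and Delete status for the device; a value of supported means that the corresponding primitive (hardware-assisted locking, full copy, block zeroing, or UNMAP) is available.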

Hardware Acceleration for NAS
Hardware acceleration for NAS enables NAS arrays to integrate with
vSphere to offload certain storage operations to the array, such as offline
cloning:
This integration reduces CPU overhead on the host.

Hardware acceleration for NAS supports the following NAS operations:
Full file clone: The entire file is cloned instead of file segments.
Reserve space: Space for a virtual disk is allocated in thick format.
Lazy file clone: VMware Horizon View can offload the creation of linked clones to a NAS array.
Extended file statistics
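Unlike block-device acceleration, hardware acceleration for NAS relies on a vendor-supplied plug-in that is installed on each host as a VIB. A sketch of checking for and installing such a plug-in from the ESXi Shell, assuming a hypothetical depot file named vendor-nas-plugin.zip:

esxcli software vib list | grep -i nas
esxcli software vib install -d /tmp/vendor-nas-plugin.zip

The first command lists installed VIBs so that you can look for an existing NAS plug-in; the second installs the vendor depot. Consult the array vendor's documentation for the actual package name and any reboot requirement.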

Array Thin Provisioning APIs
Array Thin Provisioning APIs enable the host to integrate with physical
storage and become aware of space usage in thin-provisioned LUNs:
A VMware vSphere VMFS datastore that you deploy on the thin-provisioned
LUN can detect only the logical size of the LUN:
For example, if an array reports 2 TB of storage but actually provides only 1 TB, the datastore considers 2 TB to be the LUN's size.
Using thin-provisioning integration, the host can perform these tasks:
Monitor the use of space on thin-provisioned LUNs to avoid running out of physical
space.
Inform the array about datastore space that is freed when files are deleted or
removed from the datastore by VMware vSphere Storage vMotion:
The array reclaims the freed blocks of space.

The esxcli namespace has a command that allows deleted blocks to be reclaimed on thin-provisioned LUNs that support the vSphere Storage APIs - Array Integration primitive UNMAP:
esxcli storage vmfs unmap
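A minimal usage sketch, assuming a VMFS5 datastore labeled Datastore01 on an array that advertises the UNMAP primitive (the label and reclaim unit are examples):

esxcli storage vmfs unmap -l Datastore01 -n 200

The -l option identifies the datastore by volume label (-u accepts the volume UUID instead), and -n sets the number of VMFS blocks reclaimed per iteration; if -n is omitted, a default reclaim unit is used. Because the operation generates I/O on the array, it is usually run during off-peak hours.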

About vSphere API for Storage Awareness
vSphere API for Storage Awareness enables a storage vendor to
develop a software component, called a storage provider, for its storage
arrays:
A storage provider gets information from the storage array about available
storage topology, capabilities, and state.
(Diagram: a storage provider connects the storage device to vCenter Server, and the provider's information is displayed in vSphere Web Client.)

VMware vCenter Server connects to a storage provider:
Information from the storage provider appears in VMware vSphere Web Client.

Benefits of Storage Providers
A storage provider supplies capability information in the form of
descriptions of specific storage attributes.
Storage providers benefit vSphere administrators by granting them
additional functionality:
Enable administrators to be aware of the topology, capabilities, and state of the
physical storage devices on which their virtual machines are located.
Enable administrators to monitor the health and usage of their physical storage
devices.
Assist administrators in choosing the correct storage in terms of space,
performance, and service-level agreement requirements:
Performed by using virtual machine storage policies

Configuring a Storage Provider
If your storage supports a storage provider, use the vSphere Web Client
Manage > Storage Providers tab to register and manage the provider.

After you add a storage provider, it appears on the Storage Providers tab.

About Virtual Machine Storage Policies
Virtual machine storage policies help you ensure that virtual machines
use storage that guarantees a specified level of capacity, performance,
availability, redundancy, and so on.
Virtual machine storage policies help you meet the following goals:
Categorize datastores based on certain levels of service.
Provision a virtual machine's disks on the correct storage.
Virtual machine storage policies capture storage characteristics that virtual machine home files and virtual disks require to run applications in the virtual machine:
These storage characteristics are defined in a set of rules.

Storage Policy Rules
Rules that you include in a storage policy can be one of the following types:
Rules based on storage-specific data services: Storage providers present storage characteristics as data services offered by the specific datastore type.
Rules based on datastore tags: You create datastore tags and associate them with specific datastores.
Common rules: Storage-specific data services and datastore tags can be used to create the rules that you include in virtual machine storage policies.
(Diagram: rules based on storage-specific data services come from the storage provider, rules based on datastore tags come from vCenter Server, and both feed into virtual machine storage policies.)

Storage Policies
Virtual machine storage policies:
Contain one or more rules
Are associated with one or more virtual machines
Can be used to test that virtual machines reside on compliant storage
(Diagram: one virtual machine storage policy associated with several virtual machines; most are reported as Compliant and one as Not Compliant.)

Using the Virtual Machine Storage Policy
When you create, clone, or migrate a virtual machine, you can apply the
storage policy to the virtual machine.

Checking Virtual Machine Storage Compliance
You can check whether virtual machines use datastores that are
compliant with the storage policy.

Lab 6: Policy-Based Storage
Use policy-based storage to create tiered storage
1. Add Datastores for Use by Policy-Based Storage
2. Use vSphere Storage vMotion to Migrate a Virtual Machine to the Gold
Datastore
3. Configure Storage Tags
4. Create Virtual Machine Storage Policies
5. Assign Storage Policies to Virtual Machines

Review of Learner Objectives
You should be able to meet the following objectives:
Describe VMware vSphere Storage APIs - Array Integration
Describe VMware vSphere API for Storage Awareness
Configure and use virtual machine storage policies
Describe Virtual Volumes requirements and architecture

Lesson 2:
vSphere Storage I/O Control

Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
Describe VMware vSphere Storage I/O Control
Configure vSphere Storage I/O Control

About vSphere Storage I/O Control
VMware vSphere Storage I/O Control allows cluster-wide storage I/O prioritization, which allows better workload consolidation and helps reduce extra costs associated with overprovisioning.
With vSphere Storage I/O Control, you can balance I/O load in a datastore cluster that is enabled for vSphere Storage DRS.
(Diagram: data mining, print server, online store, and mail server workloads sharing storage, shown without and with vSphere Storage I/O Control during high I/O from a noncritical application.)

Configuring vSphere Storage I/O Control
With vSphere Storage I/O Control, you can ensure that the most
important virtual machines get adequate I/O resources even during times
of congestion.
To configure vSphere Storage I/O Control, you enable vSphere Storage
I/O Control for the datastore:
vSphere Storage I/O Control is disabled by default.

Then you set the number of storage I/O shares and the upper limit of I/O
operations per second (IOPS) for each virtual machine.

Example: Two virtual machines running Iometer (VM1: 1,000 shares; VM2: 2,000 shares)
Without shares or limits: VM1 achieves 1,500 IOPS at 20 ms Iometer latency, and VM2 achieves 1,500 IOPS at 21 ms.
With shares or limits: VM1 achieves 1,080 IOPS at 31 ms, and VM2 achieves 1,900 IOPS at 16 ms.

Monitoring Latency
When you enable vSphere Storage I/O Control, ESXi monitors datastore
latency and throttles the I/O load if the datastore average latency
exceeds the threshold.
By default, vSphere Storage I/O Control uses an injector-based model to automatically detect the latency threshold.
The benefit of using the injector-based model is that vSphere Storage
I/O Control determines the best threshold for a datastore.
vSphere Storage I/O Control is disabled by default when new datastores
are configured:
However, performance statistics are collected to be used to improve vSphere
Storage DRS behavior when vSphere Storage I/O Control is enabled.

Automatic Threshold Detection
With the injector-based model, vSphere Storage I/O Control determines the peak throughput of the device and sets the threshold to 90 percent of that value:
Automatic threshold detection works well when a range of disk arrays and datastores are configured.
The threshold varies according to the performance of each datastore.
Or you can manually set the threshold value for a datastore:
If this option is selected, the latency setting is 30 ms by default.
(Charts: latency versus load and throughput versus load, showing the peak latency Lpeak and peak throughput Tpeak from which the threshold is derived.)

vSphere Storage I/O Control Requirements
vSphere Storage I/O Control has several requirements:
vSphere Storage I/O Control is supported only on datastores that are managed
by a single vCenter Server system.
vSphere Storage I/O Control is supported for Fibre Channel, iSCSI, and NFS
storage.
Raw device mapping (RDM) is not supported.

vSphere Storage I/O Control does not support datastores with multiple extents.
If your datastores are backed by arrays with automated storage tiering
capabilities, your array must be certified as compatible with vSphere Storage
I/O Control.

Review of Learner Objectives
You should be able to meet the following objectives:
Describe VMware vSphere Storage I/O Control
Configure vSphere Storage I/O Control

Lesson 3:
Datastore Clusters and vSphere
Storage DRS

Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
Create a datastore cluster
Configure vSphere Storage DRS
Explain how vSphere Storage I/O Control and vSphere Storage DRS
complement each other

About Datastore Clusters
A datastore cluster is a collection of datastores that are grouped together without functioning together.
A datastore cluster enabled for vSphere Storage DRS is a collection of datastores working together to balance capacity and I/O latency.
(Diagram: four 500 GB datastores grouped into a 2 TB datastore cluster.)

Datastore Cluster Requirements
Datastores and hosts that are associated with a datastore cluster must
meet certain requirements.
Follow these guidelines when you create a datastore cluster:
Datastore clusters must contain similar or interchangeable datastores, with the
following exceptions:
NFS and VMFS datastores cannot be combined in the same datastore cluster.
Replicated datastores cannot be combined with non-replicated datastores in the
same datastore cluster enabled for vSphere Storage DRS.
All hosts attached to the datastores in a datastore cluster must be running ESXi 5.0 or later.
Datastores shared across multiple data centers cannot be included in a
datastore cluster.
As a best practice, do not include datastores that have hardware acceleration
enabled in the same datastore cluster as datastores that do not have hardware
acceleration enabled.

Relationship of Host Cluster to Datastore Cluster
The relationship between a VMware vSphere High Availability or a
vSphere DRS cluster and a datastore cluster can be one to one, one to
many, or many to many.

(Diagram: one-to-one, one-to-many, and many-to-many relationships between host clusters and datastore clusters.)

vSphere Storage DRS Overview
vSphere Storage DRS enables you to manage the aggregated resources
of a datastore cluster.
vSphere Storage DRS provides the following functions:
Initial placement of virtual machines based on storage capacity, and optionally
on I/O latency
Use of vSphere Storage vMotion to migrate virtual machines based on storage
capacity and, optionally, I/O latency.
Configuration in either manual or fully automated modes
Use of affinity and anti-affinity rules to govern virtual disk location
Use of fully automated, storage maintenance mode to clear a LUN of virtual
machine files

Initial Disk Placement
Initial placement occurs when vSphere Storage DRS selects a datastore
in a datastore cluster on which to place a virtual machine disk.
When virtual machines are created, cloned, or migrated, you select a
datastore cluster, rather than a single datastore:
vSphere Storage DRS selects a member datastore based on capacity and
optionally on IOPS load.
By default, a virtual machine's files are placed on the same datastore in the datastore cluster:
This behavior can be changed by using vSphere Storage DRS anti-affinity
rules.

vSphere Storage DRS Affinity Rules
You can create vSphere Storage DRS anti-affinity rules to control which virtual
disks should not be placed on the same datastore in a datastore cluster.

Intra-VM VMDK affinity (default):
Keep a virtual machine's VMDKs together on the same datastore.
Maximize virtual machine availability when all disks are needed in order to run.

Intra-VM VMDK anti-affinity:
Keep a virtual machine's VMDKs on different datastores.
The rule can be applied to all or a subset of a virtual machine's disks.

VM anti-affinity:
Keep virtual machines on different datastores.
The rule is similar to the vSphere DRS anti-affinity rule.
Maximize availability of a set of redundant virtual machines.

Storage Migration Recommendations
vCenter Server displays migration recommendations on the vSphere
Storage DRS Recommendations page for datastore clusters that have
manual automation mode.
Migration recommendations are made under the following circumstances:
When the I/O latency threshold is exceeded
When the space utilization threshold is exceeded

Space utilization is checked every five minutes by default.


IOPS load history is checked every eight hours by default, if enabled.
vSphere Storage DRS selects a datastore based on utilization and,
optionally, I/O latency.
Load balancing is based on IOPS workload, which ensures that no
datastore exceeds a particular VMkernel I/O latency level.

Datastore Correlation Detector
Datastore correlation refers to datastores that are created on the same
physical set of spindles.
vSphere Storage DRS detects datastore correlation by performing the
following operations:
Measuring individual datastore performance
Measuring combined datastore performance

If latency increases on multiple datastores when load is placed on one datastore, then the datastores are considered to be correlated.
Correlation is determined by a long-running background process.
Anti-affinity rules can use correlation detection to ensure that the virtual
machines or virtual disks are on different spindles.
Datastore correlation is enabled by default.

Configuring vSphere Storage DRS Runtime Settings
vSphere Storage DRS thresholds can be configured to determine when
vSphere Storage DRS performs or recommends migrations.

(Screenshot callouts: configuration settings for the utilized space threshold, an option for the utilization difference threshold, an option for setting the I/O latency threshold, and options to check for imbalances.)
vSphere Storage DRS Maintenance Mode
vSphere Storage DRS maintenance mode enables you to take a
datastore out of use in order to service it.
vSphere Storage DRS maintenance mode evacuates virtual machines
from a datastore placed in maintenance mode:
Registered virtual machines (on or off) are moved.
Templates, unregistered virtual machines, ISO images, and nonvirtual machine
files are not moved.

Backups and vSphere Storage DRS
Backing up virtual machines can add latency to a datastore.
You can schedule a task to disable vSphere Storage DRS behavior for
the duration of the backup.

vSphere Storage DRS and vSphere Technology Compatibility
Several vSphere technologies are supported with vSphere Storage DRS.
Each technology has a recommended migration method.

Each feature or product below is supported; the recommended migration mode is shown:
VMware snapshots: Fully Automated
Raw device mapping pointer files: Fully Automated
VMware thin-provisioned disks: Fully Automated
VMware vSphere linked clones: Fully Automated
VMware vSphere Storage Metro Cluster: Manual
VMware vCenter Site Recovery Manager: Fully Automated (from the protected site)
VMware vCloud Director: Fully Automated
VMware vSphere Replication: Fully Automated (from the protected site)

vSphere Storage DRS and Array Feature Compatibility
Several storage arrays are supported with vSphere Storage DRS. Each
storage array has a recommended migration method.

For each array feature below, initial placement is supported; the recommended migration mode is shown:
Array-based snapshots: Manual
Array-based deduplication: Manual
Array-based thin provisioning: Manual
Array-based autotiering: Manual (only capacity load balancing)
Array-based replication: Manual

vSphere Storage DRS and vSphere Storage I/O Control
vSphere Storage DRS and vSphere Storage I/O Control are
complementary solutions:
vSphere Storage I/O Control is set to stats-only mode by default:
vSphere Storage DRS works to avoid I/O bottlenecks.
vSphere Storage I/O Control manages unavoidable I/O bottlenecks.

vSphere Storage I/O Control works in real time.


vSphere Storage DRS does not use real-time latency to calculate load
balancing.
vSphere Storage DRS and vSphere Storage I/O Control provide you with the
performance that you need in a shared environment, without having to
significantly overprovision storage.

Lab 7: Managing Datastore Clusters
Create a datastore cluster and work with vSphere Storage DRS
1. Create a Datastore Cluster That Is Enabled for vSphere Storage DRS
2. Evacuate a Datastore Using Datastore Maintenance Mode
3. Run vSphere Storage DRS and Apply Migration Recommendations
4. Clean Up for the Next Lab

Review of Learner Objectives
You should be able to meet the following objectives:
Create a datastore cluster
Configure vSphere Storage DRS
Explain how vSphere Storage I/O Control and vSphere Storage DRS
complement each other

Lesson 4:
Enhanced Storage
Configurations

Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
Describe the benefits of VMware vSphere Flash Read Cache
Explain the benefit of using Virtual Flash Host Swap Cache when virtual machine swap files are stored on non-SSD datastores
Describe the interaction between Flash Read Cache, VMware vSphere
Distributed Resource Scheduler, and VMware vSphere High Availability
Describe the limitations of Flash Read Cache
Explain the purpose of a VMware Virtual SAN datastore
Describe the architecture and requirements of Virtual SAN
Describe VMware vSphere Virtual Volumes requirements and architecture

About Flash Read Cache
Flash Read Cache accelerates virtual machine performance by using host-resident flash devices as a cache.
Flash Read Cache integrates with VMware vCenter Server, VMware vSphere Distributed Resource Scheduler, VMware vSphere High Availability, and VMware vSphere vMotion.
(Diagram: per-virtual-machine Flash Read Cache instances running on the Flash Read Cache infrastructure in vSphere, backed by SSDs; flash memory acts as a new storage tier in ESXi.)

Flash Read Cache: Host-Based Flash Tier
The Flash Read Cache host-based tier has two components:
Flash Read Cache infrastructure: Pools local flash devices and provides flash-based resource management.
Cache software: Provides ESXi host-based caching on a per-VMDK basis.
Flash Read Cache has several benefits:
Easy to manage as a pooled resource
Targeted use is per VMDK
Transparent to the application and guest operating system
(Diagram: the flash resource alongside CPU and memory in the ESXi host, in front of SAN/NAS storage.)

Reasons to Use Flash Read Cache
Using Flash Read Cache is beneficial for many reasons:
Flash Read Cache enables you to accelerate virtual machine performance through the use of host-resident flash devices as a cache.
Flash Read Cache supports write-through and read caching:
Write-back and write caching are not supported.
Data reads are satisfied from the cache, if present.
The use of Flash Read Cache significantly improves virtual machine read performance.
(Diagram: write-through caching; a write is placed in the cache and committed to the backing storage before it is acknowledged.)

Primary Use Cases
The main use case for Flash Read Cache is to accelerate performance for read-intensive workloads such as collaboration applications, databases, and middleware applications.
(Diagram: per-virtual-machine Flash Read Cache instances on the Flash Read Cache infrastructure, backed by SSDs as a new flash tier in ESXi.)

Flash Read Cache Requirements
Flash Read Cache has the following requirements:
vCenter Server 5.5 or higher
ESXi host version 5.5 or later
A maximum of 32 hosts in a cluster
Virtual machine version 10 or higher
VMware vSphere Enterprise Plus Edition

You must use VMware vSphere Web Client to configure Flash Read
Cache.
To configure Flash Read Cache, the user's vCenter Server role must include the following privileges:
Host.Config.Storage
Host.Config.AdvancedConfig (for virtual flash resource configuration)

Flash Read Cache Hardware Requirements
Flash Read Cache has the following hardware requirements:
The host must be on the vSphere hardware compatibility list.
Only solid-state drives (SSDs) are used for the read cache.
SAN and local hard-disk drives (HDDs) are used as the persistent store.
VMware recommends that all hosts in a cluster be identically configured.
(Diagram: SAS/SATA SSDs and PCIe flash cards serve as cache devices; storage arrays and SAS/SATA HDDs provide the persistent store.)
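To see which local flash devices a host considers eligible for a virtual flash resource, you can query the host from the ESXi Shell. This is a sketch based on the esxcli virtual flash namespace introduced in ESXi 5.5; the exact subcommands can vary between releases:

esxcli storage vflash device list

The output lists each detected SSD and indicates whether it is eligible for, or already claimed by, the virtual flash resource.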

Flash Read Cache Configuration
Flash Read Cache configuration takes several iterative steps:
1. Add SSD capacity.
2. Configure a virtual flash resource.
3. Configure virtual flash host swap cache.
4. Configure Flash Read Cache for each virtual machine.
Repeat the steps as necessary.

Flash Read Cache Management
You perform all the management tasks relating to the installation,
configuration, and monitoring of Flash Read Cache in the vSphere Web
Client.

About Virtual Flash Resources
Each ESXi host creates a
virtual flash resource, which
contains one or more SSDs:
Individual SSDs must be
exclusively allocated to a
virtual flash resource.
Only one virtual flash resource
is allowed per ESXi host.
A virtual flash resource
contains one virtual flash
volume.

About Virtual Flash Volumes
A virtual flash volume can be created from mixed resources.
The file system on a virtual flash volume is a derivative of VMFS that is
optimized for grouping flash devices.

(Diagram: two virtual flash volumes in vSphere, each built from multiple SSDs.)

Virtual Flash Volume Limits
Virtual flash volumes have several parameter limits.

Parameter and value per host:
Number of virtual flash volumes per host: 1 (local only)
Number of SSDs per virtual flash volume: 8 or fewer
SSD or flash device size: 4 TB or less
Virtual flash volume size: 32 TB or less

Flash Read Cache Use
When a virtual flash resource is created, several vSphere features can
use the resource:
Virtual Flash Host Swap Cache:
Provides the ability to use the virtual flash resource for memory swapping.
Provides legacy support for the swap-to-SSD option.

Flash Read Cache for virtual machines:


Provides virtual machine transparent flash access.
Per-VMDK cache configuration enables finer control.
Cache block size is 4 KB to 1024 KB.
The cache size should be big enough to hold the active working set of the workload.

Virtual Flash Host Swap Cache
Virtual Flash Host Swap Cache is configured in vSphere Web Client.
vSphere flash swap caching can use up to 4 TB of vSphere flash
resource.

Flash Read Cache in Use
Flash Read Cache usage is assigned per VMDK.
(Diagram: VMDK1 without Flash Read Cache and VMDK2 with Flash Read Cache, both served by the Flash Read Cache software and the vSphere flash infrastructure backed by an SSD.)

Flash Read Cache Interoperability
Flash Read Cache is
fully integrated with
vSphere vMotion,
vSphere DRS, and
vSphere HA:
vSphere vMotion
migration workflows
provide the option of
migrating the cache
contents.
Advanced settings
enable individual
VMDK migration
settings.

Flash Read Cache Interoperability: vSphere vMotion
To operate with Flash Read Cache, vSphere vMotion migration is converted to cross-host vSphere Storage vMotion migration:
The vCenter Server system checks for sufficient virtual flash resources on the destination host.
Cross-host vSphere Storage vMotion is the technology that enables the migration of nonshared storage.
(Diagram: a virtual machine with Flash Read Cache moving between two hosts by vSphere vMotion combined with cross-host vSphere Storage vMotion, with a shared storage array below.)

Flash Read Cache Interoperability: vSphere DRS
vSphere DRS manages virtual machines with Flash Read Cache reservations.
Virtual flash resources are managed at the host level:
No clusterwide knowledge about Flash Read Cache availability or use exists.
vSphere DRS selects a host that has available virtual flash capacity to start a virtual machine.
No automatic virtual machine migration for Flash Read Cache optimization occurs:
vSphere DRS migrates a virtual machine only for mandatory reasons or when necessary to correct host overutilization.
(Diagram: two hosts with Flash Read Cache sharing a storage array; virtual machines move between them by cross-host vSphere Storage vMotion.)

Flash Read Cache Interoperability: vSphere HA
If an ESXi host fails, virtual machines are restarted on hosts that have sufficient Flash Read Cache resources:
A virtual machine cannot power on if virtual flash resources are insufficient.
If a virtual machine fails, it is restarted on the same host.
In both scenarios, the Flash Read Cache contents are lost and repopulated.
(Diagram: virtual machines from a failed host restarting on surviving hosts that have Flash Read Cache, all connected to a shared storage array.)

Virtual Flash Resource Management
A virtual machine cache has several characteristics:
Virtual machine cache is created only when virtual machines are powered on.
Space is reclaimed when virtual machines are powered off.
Virtual machine cache expands and shrinks dynamically, taking reservation into
account.
Virtual machine cache is migrated (optionally) when virtual machines are moved
to a different host.
Virtual flash resources have several characteristics:
Virtual flash resource is allocated dynamically across all powered-on, cache-
enabled virtual machines.
Virtual machines fail to power on if virtual flash resource capacity is insufficient to
satisfy the cache reservation.
Reservation defines the size of the VMDK's cache.
Shares are not supported.
The host flash cache resource cannot be overcommitted.

Flash Read Cache: Performance Statistics Counters
In vSphere 6, a new set of
performance statistics
counters is available in
vCenter Server advanced
performance charts:
vFlashCacheIOPS
vFlashCacheLatency
vFlashCacheThroughput

Flash Read Cache Limitations
Flash Read Cache has the following limitations:
Support for only locally attached SSDs
Not compatible with VMware vSphere Fault Tolerance
Cannot share an SSD with Virtual SAN or a VMFS datastore

About Virtual SAN
vSphere offers support for Virtual SAN, a software-defined storage
solution.
Virtual SAN aggregates the direct-attached storage of ESXi hosts to
create a storage pool that can be used by virtual machines.
Virtual SAN has the following benefits:
vSphere and vCenter Server integration
Storage scalability
Built-in resiliency
SSD caching
Converged compute and storage resources

Virtual SAN Architecture
Disks from multiple ESXi hosts are grouped to form a Virtual SAN
datastore.

(Diagram: disk groups from each host in the Virtual SAN cluster combine into a single Virtual SAN datastore presented to vSphere.)

Object-Based Storage
Virtual SAN stores and manages data in the form of flexible data
containers called objects.

(Diagram: VMDK files, virtual machine metadata files, and other files are stored as objects and object containers on the Virtual SAN datastore, which is built from the disk groups of the Virtual SAN cluster.)

Virtual Machine Storage Policies
Virtual machine storage policies are created before virtual machine deployment to reflect the requirements of the applications running in the virtual machine.
The storage policy is based on the Virtual SAN capabilities.
Based on virtual machine requirements, the appropriate policy is selected at deployment time.
(Diagram: a virtual machine storage policy specifying capacity, availability, and performance requirements, applied to virtual machines on the Virtual SAN datastore.)
Virtual SAN Requirements
A Virtual SAN cluster has the following requirements:
Minimum of three hosts contributing local disks, running ESXi 5.5 or later, and
managed by vCenter Server 5.5 or later:
Each host must have at least one SSD and one HDD.

Minimum of 4 GB RAM on each host
Dedicated network connecting hosts in the cluster:
A 10 Gb NIC for each host is recommended.
Two 10 Gb NICs are preferred for fault tolerance purposes.
SSD capacity should make up at least 10 percent of the total storage capacity.
Virtual SAN does not support datastores with greater than 2 TB capacity.
Virtual SAN and Flash Read Cache are not compatible.
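After the cluster is configured, you can confirm Virtual SAN membership and disk claiming from any member host. A minimal check from the ESXi Shell, using the esxcli vsan namespace available in ESXi 5.5 and later:

esxcli vsan cluster get
esxcli vsan storage list

The first command reports whether the host has joined a Virtual SAN cluster, along with the cluster UUID and member count; the second lists the local SSDs and HDDs that the host has claimed for its disk groups.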

About vSphere Virtual Volumes Functionality
vSphere Virtual Volumes functionality changes the storage management
paradigm from managing space inside datastores to managing abstract
storage objects handled by storage arrays:
Abstract storage containers replace traditional storage volumes based on
LUNs or NFS shares.
Rather than arranging storage around features of a storage system,
vSphere Virtual Volumes arranges storage around the needs of
individual virtual machines.
vSphere Virtual Volumes supports Fibre Channel, FCoE, iSCSI, and
NFS.

vSphere Virtual Volumes Architecture
Virtual and physical components interact with one another to provide the
vSphere Virtual Volumes functionality.
The following components constitute the vSphere Virtual Volumes
architecture:
Virtual volumes
Storage containers
vSphere Virtual Volumes datastores
vSphere Virtual Volumes storage providers
Protocol endpoints

Virtual Volumes
Virtual volumes are objects that are stored natively inside a storage
system that is connected through Ethernet or SAN:
They are managed entirely by hardware on the storage side.

vSphere Virtual Volumes maps virtual machine files directly to virtual volumes, enabling vSphere to offload intensive storage operations, such as snapshots, cloning, and replication, to the storage system.
(Diagram: multiple virtual volumes stored natively in the storage array.)

Storage Containers and vSphere Virtual Volumes Datastores
Virtual volumes are grouped into one or more storage containers.
A storage container is part of the underlying hardware and logically groups virtual volumes based on management and administrative needs:
For example, a storage container can contain all virtual volumes for a particular organization.
A vSphere Virtual Volumes datastore maps to a specific storage container.
(Diagram: vCenter Server presents two vSphere Virtual Volumes datastores, each mapped over Fibre Channel, FCoE, iSCSI, or NFS to a storage container of virtual volumes in the storage array.)

vSphere Virtual Volumes Storage Providers
A vSphere Virtual Volumes storage provider is a software component that mediates communication between vSphere on one side and a storage system on the other.
The vSphere Virtual Volumes storage provider delivers storage information from the underlying storage container, so that storage container capabilities appear in vCenter Server and the vSphere Web Client.
(Diagram: the vSphere Virtual Volumes storage provider sits between vCenter Server, with its Virtual Volumes datastores, and the storage containers of virtual volumes in the storage array.)

Protocol Endpoints
ESXi uses protocol endpoints to establish a data path on demand from virtual machines to their respective virtual volumes.
Protocol endpoints are configured on the storage side, one or several per storage container.
Each virtual volume is bound to a specific protocol endpoint:
A single protocol endpoint can connect to many virtual volumes.
(Diagram: virtual machines reach their virtual volumes through a protocol endpoint over Fibre Channel, FCoE, iSCSI, or NFS; the storage provider and storage containers reside on the storage array.)

Mapping Virtual Machine Files to Virtual Volumes
Virtual volumes are not preprovisioned, but are created automatically when you perform virtual machine management operations.
The storage system creates virtual volumes for the core elements that make up the virtual machine:
Data virtual volumes (the VMDK files)
Configuration virtual volumes (the configuration files)
Additional virtual volumes can be created for other components, such as snapshots and clones.
(Diagram: for each virtual machine, the VMDK, configuration, and snapshot files map to individual virtual volumes inside storage containers, accessed through protocol endpoints.)

vSphere Virtual Volumes Characteristics
vSphere Virtual Volumes offers several benefits and advantages:
You can configure multipathing on a SCSI-based protocol endpoint, but not on
an NFS-based protocol endpoint.
Virtual machine provisioning uses policy-based management to match storage
data services to virtual machine requirements.
Operations such as snapshots, cloning, and load balancing are offloaded to
storage hardware.
vSphere features such as vSphere vMotion, vSphere Storage vMotion,
snapshots, linked clones, and VMware vSphere Distributed Resource
Scheduler are supported.
Snapshot facilities that are native to the storage system are used to improve
the performance of vSphere snapshots.

VMware vSphere: Optimize and Scale 7-85


2015 VMware Inc. All rights reserved.
vSphere Virtual Volumes Requirements
To work with vSphere Virtual Volumes, you must ensure that your
storage and vSphere environment are set up correctly.
Follow these guidelines to prepare your storage system environment for
vSphere Virtual Volumes:
The storage system must support vSphere Virtual Volumes and integrate with
vSphere through vSphere API for Storage Awareness.
A vSphere Virtual Volumes storage provider must be deployed.
Protocol endpoints, storage containers, and storage profiles must be
configured on the storage side.
Follow these guidelines to prepare your vSphere environment:
Follow appropriate setup guidelines for the type of storage that you use.
Use Network Time Protocol (NTP) to ensure that all systems on your vSphere
network have their clocks synchronized.

Deploying vSphere Virtual Volumes
You must complete a series of steps to configure your vSphere Virtual
Volumes environment in vSphere:
Register the vSphere Virtual Volumes storage provider in vCenter Server.
Create a vSphere Virtual Volumes datastore.
Review and manage protocol endpoints.
(Optional) Change the path selection policy for a protocol endpoint.

After deploying virtual volumes, you can provision virtual machines on the vSphere Virtual Volumes datastore.
If your virtual machine has specific storage requirements, you can create a vSphere Virtual Volumes storage policy to define these requirements and assign the new policy to the virtual machine.
If no storage policies are created for the vSphere Virtual Volumes datastores, then the system uses the default No Requirements policy.
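After the storage provider is registered and a vSphere Virtual Volumes datastore is created, you can verify the components from an ESXi host. This is a sketch based on the esxcli storage vvol namespace introduced with vSphere 6.0; the subcommand names are given from memory and may differ slightly between builds:

esxcli storage vvol vasaprovider list
esxcli storage vvol storagecontainer list
esxcli storage vvol protocolendpoint list

These commands list the registered storage (VASA) providers, the storage containers visible to the host, and the protocol endpoints that the host has discovered.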

Lab 8: Working with Virtual Volumes
Configure NFS-backed and iSCSI-backed virtual volumes
1. Validate the Storage Endpoint
2. Create a vSphere Virtual Volumes Datastore Using NAS
3. Create a vSphere Virtual Volumes Datastore Using iSCSI
4. Migrate a Virtual Machine to a Virtual Volume

Review of Learner Objectives
You should be able to meet the following objectives:
Describe the benefits of VMware vSphere Flash Read Cache
Explain the benefit of using Virtual Flash Host Swap Cache when virtual machine swap files are stored on non-SSD datastores
Describe the interaction between Flash Read Cache, VMware vSphere
Distributed Resource Scheduler, and VMware vSphere High Availability
Describe the limitations of Flash Read Cache
Explain the purpose of a VMware Virtual SAN datastore
Describe the architecture and requirements of Virtual SAN
Describe VMware vSphere Virtual Volumes requirements and architecture

Key Points
vSphere Storage APIs - Array Integration consists of APIs for hardware
acceleration and Array Thin Provisioning.
VMware vSphere API for Storage Awareness enables storage vendors to
provide information about the capabilities of their storage arrays to vCenter
Server.
Policy-based storage is a feature that introduces storage compliance to
vCenter Server.
Virtual Volumes maps virtual machine files to virtual volumes, enabling
vSphere to offload intensive storage operations to the storage system.
vSphere Storage I/O Control enables clusterwide storage I/O prioritization.
A datastore cluster enabled for vSphere Storage DRS is a collection of
datastores working together to balance storage capacity and I/O latency.
Questions?
