
VCP 5 Objective 5.1 – Create and Configure VMware Clusters


Describe DRS virtual machine entitlement
When the available resources do not meet the demands of the environment, contention
occurs. When contention occurs, you need to know how many resources each VM will
consume, or is entitled to. To determine this, you use the resource allocation
settings of the VMs. These settings are broken down into three categories.
vSphere DRS automation levels explained
There are three levels from which to choose.
Manual
The DRS cluster will make recommendations to an administrator, but no automated actions will be taken. The
administrator must manually carry out any recommendations. This is a good setting if you just want to see what
impact DRS might have on your environment.
Partially automated
Partially automated DRS clusters are pretty common. Clusters configured for partial automation will automatically
place new virtual machines on existing hosts based on performance and resource criteria. However, after the initial
placement event, which may result in recommendations to move other workloads to accommodate the new virtual
machine, DRS operates the same way that it does when Manual DRS is used.
Fully automated
Many administrators are loath to allow DRS to simply work at its will through the fully automated option. When this
option is selected, DRS will provide initial placement services as described earlier, but it will also move workloads
around the cluster without administrator intervention if there is a chance to better balance the workloads running inside
the cluster. The administrator is able to specify the level of sensitivity using what is called the Migration Threshold.
You can configure DRS to initiate a migration when there is any associated performance improvement, or you can
choose to be a bit more conservative and wait until DRS finds that an operation will have a significant positive impact.
DRS requirements
As you may imagine, a service like DRS carries with it some requirements:
First and foremost, there must be some kind of shared storage – such as a SAN or NAS – in use by the
hosts participating in the DRS cluster.
Make sure that all VMFS volumes are accessible by all hosts in the cluster and that there is sufficient space
in the VMFS volumes to store the necessary virtual machines.
However, in what is generally the most difficult prerequisite to attain, administrators must take steps to
ensure processor compatibility between all of the hosts in the cluster. Here's the challenge: when a
workload is migrated to another host, the running state of that virtual machine goes along with it. In order for
the process to be successful, the destination host's processors must be able to resume execution as if the
workload were still running on the original host. This means that processor features must be compatible. The
processors don't need to run at the same speed or have the same amount of cache, but they must be
compatible.
To this end, it's not possible to migrate workloads between processors of different vendors. So, you can't
use DRS with a cluster that's made of mixed AMD and Intel servers. However, once a cluster has servers
with processors all from the same vendor, there are ways to make DRS work across processor
families/generations.
The easiest way to ensure ongoing processor compatibility in a cluster is to enable Enhanced vMotion
Compatibility (EVC) for that cluster. EVC takes a "lowest common denominator" approach to compatibility.
EVC identifies the processor family that is supported by all processors in the cluster and, for processors that
are newer or have additional features, EVC masks those features from use so that workloads can be
migrated between all processors. Using this method, all of the processors in the cluster are compatible for
DRS' purposes.
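The "lowest common denominator" behavior can be sketched as a simple set intersection: the baseline is whatever feature set every host in the cluster can offer. The host names and feature strings below are made up for illustration — this is not VMware's actual masking logic.

```python
def evc_baseline(host_features):
    """Return the CPU features common to every host in the cluster.

    EVC masks anything outside this baseline, so vMotion sees an
    identical feature set on every host.
    """
    feature_sets = [set(features) for features in host_features.values()]
    return set.intersection(*feature_sets)

# Hypothetical hosts: the newer host's sse4.1/avx get masked from guests.
hosts = {
    "esx01": {"sse2", "sse3", "ssse3"},
    "esx02": {"sse2", "sse3", "ssse3", "sse4.1", "avx"},
}
baseline = evc_baseline(hosts)
```

Workloads using only baseline features can then move freely between the two hosts.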
When virtual machines are powered on in a DRS cluster, vCenter determines where the virtual
machines should be placed in order to balance resource usage across the cluster. The DRS
scheduler runs periodically to migrate virtual machines using vMotion, to maintain a balance of
resource usage across the cluster. Affinity or anti-affinity rules can be used to control where VMs
are placed within a cluster. Affinity rules keep VMs on the same physical host, reducing the load
on the physical network by keeping traffic between them from leaving the host. Anti-affinity rules
keep VMs separated on different physical hosts, ensuring higher availability.
Affinity rules VM/VM
At times, you need to ensure that multiple virtual machines are always running on the same host. As such, if one of
the virtual machines is vMotioned to a different host, the associated virtual machines must be moved as well. The
scenario is common between, for example, application and database servers, where keeping communications
between the VMs on the same host is preferable to having that communication traverse a network link.
These kinds of needs are addressed through the creation of affinity rules.
Anti-affinity rules VM/VM
Finally, there are times during which certain virtual machines should not run on the same host. For example, most
organizations want to make sure that at least one domain controller remains available at all times, so those
organizations will create VM-to-VM anti-affinity rules which state that these virtual machines are to run on different
hosts, even if performance would be better by combining them.
Affinity rules Host/VM
In other cases, it's not important to maintain VM-to-VM communication, but you need to make sure that certain
workloads always run on the same host. Many companies, for example, want to know on which host vCenter is
running, or they may have an application running inside a virtual machine that is tied via licensing
rules to the current vSphere host. Administrators can create virtual-machine-to-host affinity rules to make sure that
these virtual machines are never migrated to other hosts. Of course, the downside here is that the failure of the host
will result in the workload going down as well.
For example, if you have four ESX hosts that have more RAM than the other hosts, you may
want to make sure your Exchange servers are always running on the hosts with the most
RAM available.
Shares
Shares specify the relative importance of a VM (or resource pool). I.e., if one
VM has twice as many shares as another, then it is entitled to twice as much
of the resource when contention occurs.
Shares can be set to High, Medium, Low, or Custom, which map relatively
to 4:2:1.
o High – 2000 shares/vCPU, 20 shares/MB of configured VM memory
o Medium – 1000 shares/vCPU, 10 shares/MB of configured VM memory
o Low – 500 shares/vCPU, 5 shares/MB of configured VM memory
o Custom – specified by the user – beware, as VMs are powered on
and off this value stays the same.
Shares only make sense when applied at a sibling level. So a parent
container can be assigned a share, and all the child objects are assigned
shares within it that correspond to their relative importance within the parent
container.
Shares apply only to powered-on VMs.
When a new VM is powered on, the relative priority of all other VMs that are
siblings will change.
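As a rough sketch of how those presets translate into entitlement under contention — the function and VM names below are illustrative, not VMware's actual scheduler:

```python
# Preset share values per vCPU; note the 4:2:1 ratio (High:Medium:Low).
SHARES_PER_VCPU = {"high": 2000, "medium": 1000, "low": 500}

def cpu_entitlement(sibling_vms, contended_mhz):
    """Divide contended CPU among powered-on sibling VMs in proportion
    to their share counts."""
    total_shares = sum(sibling_vms.values())
    return {vm: contended_mhz * shares / total_shares
            for vm, shares in sibling_vms.items()}

# A 2-vCPU High VM (4000 shares) vs. a 2-vCPU Low VM (1000 shares):
# a 4:1 split of the 10,000 MHz under contention.
split = cpu_entitlement({"prod": 2 * SHARES_PER_VCPU["high"],
                         "test": 2 * SHARES_PER_VCPU["low"]}, 10000)
```

This also shows why powering on a new sibling changes everyone's relative priority: the denominator grows.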
Reservations
Reservations specify the guaranteed minimum allocation of resources for a
VM.
You may only power on a VM if there are enough unreserved resources to meet
the VM's reservation.
The host will guarantee the reservation, even when contention occurs.
Reservations are specified in concrete units and by default are set to 0.
Limits
Limits specify the upper bound for CPU, memory, or storage I/O that can be
allocated.
A host can always allocate more resources than a VM's reservation, but never
more than a VM's limit, whether contention is occurring or not.
Expressed in concrete units.
Default is unlimited, and in most cases there is no need to use this.
Benefits – allows you to simulate having fewer resources, or contention.
Drawbacks – could waste idle resources. Resources cannot be assigned
above a VM's limit even if they are available.
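Taken together: shares decide how contention is split, while reservations and limits bound the outcome. A minimal sketch of that clamping rule (illustrative only):

```python
def granted(demand, reservation=0, limit=float("inf")):
    """Resources actually granted to a VM: never below its reservation,
    never above its limit, and otherwise whatever it demands."""
    return max(reservation, min(demand, limit))

# A VM demanding 3000 MHz with a 1000 MHz reservation and a 2000 MHz
# limit is capped at 2000; an idle VM still holds its 1000 MHz reservation.
```

The default case (reservation 0, no limit) simply grants the demand.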
Create/Delete a DRS/HA Cluster
DRS clusters are a collection of ESXi hosts with shared resources. DRS gives you
the following cluster-level resource management capabilities.
Load Balancing – the usage and distribution of CPU and memory amongst all
hosts and VMs is continuously monitored. DRS then compares this to an ideal
resource utilization given the attributes of the cluster's resource pools and
VMs. It will compare the current demand against the imbalance target.
Depending on the settings, it will then perform or recommend migrations to
balance the load.
Power Management – when DPM is enabled, DRS will compare the total
resources of the cluster to the demands of the cluster's VMs, including recent
history. If possible, it will migrate VMs off of hosts in order to place those
hosts into a standby power mode.
Affinity Rules – allow you to control the placement of VMs on hosts by
assigning rules.
There are a few requirements before you can create a DRS cluster.
All hosts within the cluster need to be attached to shared storage.
All volumes on the hosts must use the same volume names.
All processors must be from the same vendor class and the same processor
family. EVC will help to resolve the feature differences within the family, but
processors must be of the same family.
All vMotion requirements must be met (explained later).
Creating an HA/DRS Cluster
1. Right-click a Datacenter object and select "New Cluster".
2. Give the cluster a name.
3. Check whether to enable or disable HA and/or DRS in the cluster.
4. Select an automation level for DRS.
o Manual – initial placement and migration recommendations are both displayed
and will need to be approved.
o Partially Automated – initial placement will be performed automatically,
migrations will be displayed.
o Fully Automated – initial placement and migration are fully automated.
5. Set the migration threshold (Priority 1 – Priority 5).
6. Select whether to enable DPM and configure its settings.
o Off – no DPM.
o Manual – recommendations for powering hosts off and on are only
displayed.
o Automatic – vCenter will bring hosts in and out of standby according to
the threshold settings.
7. Select whether or not to enable host monitoring for HA (this allows the hosts
to exchange their heartbeats).
8. Select whether to enable or disable Admission Control and set the desired
Admission Control Policy.
o Host failures the cluster tolerates – specified in the number of hosts.
o Percentage of cluster resources reserved as failover spare capacity – %
for CPU and memory.
o Specify failover hosts – specify the hosts to use for HA failover.
9. Specify the Virtual Machine Cluster Defaults.
o VM restart priority – Disabled, Low, Medium, High.
o Host Isolation response – Leave Powered On, Power Off, Shut Down.
10. Select the VM Monitoring settings (Disabled, VM Monitoring, VM and
Application Monitoring) and the monitoring sensitivity (Low, Medium, High).
11. Select whether to enable or disable EVC and select its corresponding mode.
12. Select your swap file policy for VMs in the cluster.
o Store with Virtual Machine
o Store on a datastore specified by the host
Deleting an HA/DRS Cluster
1. Pretty easy: right-click on the cluster and select "Remove".

Add/Remove ESXi Hosts from a DRS/HA Cluster

The procedure for adding a host to an HA/DRS cluster is different for hosts
managed by vCenter and those that are not. After hosts have been added, the
VMs residing on those hosts are part of the cluster and will be protected by HA
and migrated with DRS.

Adding a managed host
1. Select the host and drag it into the target cluster object.
2. Select what to do with the VMs and resource pools that reside on the host.
o Put this host's VMs in the cluster's root resource pool – vCenter will strip
the host of all of its resource pools and hierarchy and place all the
VMs into the cluster's root resource pool. Share allocations might
need to be manually changed after this, since they are relative to
resource pools.
o Create a resource pool for this host's VMs and resource pools – vCenter
will create a top-level resource pool that becomes a direct child of the
cluster. All of the host's resource pools and VMs are then inserted into
this resource pool. You can supply a name for the new resource pool.
Adding an unmanaged host
1. Right-click the cluster and select Add Host.
2. Supply the host name/IP and authentication credentials.
3. You are then presented with the same options as above regarding existing
VMs and resource pools.
Removing a host from a cluster
There are certain precautions to take when removing a host from a cluster, and you must take the
following into account.
Resource Pool Hierarchies
When a host is removed, the host retains only its root resource pool. All
resource pools created in the cluster are removed, even if you decided to
create one when joining the cluster.
VMs – a host needs to be in maintenance mode to leave a cluster, thus all
VMs must be migrated off of the host.
Invalid Clusters – by removing a host, you are decreasing the overall
resources the cluster has. If there are reservations set on the VMs, you could
cause your cluster to be marked as yellow and an alarm to be triggered; you
could also affect HA and failover capacity.
The process to remove a host is as follows:
1. Place the host in maintenance mode.
2. Now you may either drag it to a different location in the inventory, or right-
click and select "Remove".
Add/Remove virtual machines from a DRS/HA Cluster
Adding VMs to a cluster can be done in a few ways:
When you add a host to a cluster, all VMs on the host are added as well.
When a VM is created, the wizard will prompt you for the location to place it.
You can select a host, cluster, or resource pool within the cluster.
You can use the Migrate VM wizard to migrate a VM into a cluster, or simply
drag the VM into the cluster's hierarchy.
Removing a VM from a Cluster
When you remove a host from a cluster that contains powered-off VMs, those
VMs are also removed from the cluster.
Use the Migrate VM wizard to move the VM outside of the cluster. If the VM is
a member of a DRS cluster rules group, a warning will be displayed, but it will
not stop the migration.
Configure Storage DRS
Storage DRS is new to vSphere 5 and provides the following resource management
capabilities.
Space Utilization Load Balancing – a threshold can be set for space use.
When usage exceeds this, SDRS will generate recommendations or perform
migrations to balance the space.
I/O Latency Load Balancing – a threshold for latency can also be set to avoid a
bottleneck. SDRS will again migrate VMs in order to alleviate the high I/O.
Anti-Affinity Rules – rules can be created to separate the disks of a VM onto
different datastores.
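Those two load-balancing triggers can be pictured as a simple threshold check. The default values used here (80% space utilized, 15 ms latency) are the ones from the SDRS runtime settings; the function itself is just an illustration, not the actual SDRS algorithm:

```python
def sdrs_should_rebalance(space_used_pct, io_latency_ms,
                          space_threshold=80.0, latency_threshold=15.0):
    """True when a datastore exceeds either its space-use or I/O-latency
    threshold, i.e. when SDRS would generate migration recommendations."""
    return (space_used_pct > space_threshold
            or io_latency_ms > latency_threshold)
```

Either condition alone is enough to trigger recommendations — a nearly empty but slow datastore still gets rebalanced.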
Storage DRS is applied to a datastore cluster, and then can be overridden per VM, just as DRS is.
Again, just as DRS does, SDRS provides initial placement and ongoing balancing. SDRS is
invoked at a configured frequency (by default this is every 8 hours) or whenever one or more of
the datastores within the cluster exceeds its space threshold.
Storage DRS makes recommendations to enforce SDRS rules and balance space and I/O. The
reason for the recommendations could be either to balance datastore space used or to balance
datastore I/O load. In some cases, SDRS will make mandatory recommendations, such as when
the datastore is out of space, anti-affinity or affinity rules are being violated, or the datastore is
entering maintenance mode and must be evacuated.
Configuring SDRS
1. In the Datastores inventory, right-click and select "New Datastore Cluster".
Give the cluster a name and check the Enable SDRS box.
2. Select your automation level (No Automation, Fully Automated).
3. Select your runtime rules. If you choose to enable I/O metrics for
recommendations, Storage I/O Control will be enabled on all datastores in the
cluster. Set your utilized space and I/O latency thresholds (80% utilized and
15 ms latency by default).
4. You can also click Advanced Options and set a utilization difference threshold
between source and destination (5% default), check frequency (8 hrs default),
and I/O imbalance threshold (aggressive – conservative).
5. Select the hosts or clusters you wish to add the datastore cluster to.
6. Select the datastores you wish to include in the datastore cluster.
Once SDRS is initially set up, if you right-click on the datastore cluster and select "Edit Settings"
you will be presented with some additional options.
SDRS Scheduling – used to change the thresholds and settings in order to
balance your datastores at a scheduled time.
Rules – affinity and anti-affinity rules to keep VM disks together or apart.
Done on a per-VM basis.
Virtual Machine Settings – can change the automation level on a per-VM
basis, as well as select whether to keep vmdk's together or not.
Configure Enhanced vMotion Compatibility
Enhanced vMotion Compatibility (EVC) is a feature that will hide or mask certain
CPU instructions from the CPUs of all hosts in a cluster in order to improve CPU
compatibility between hosts, allowing vMotion to occur. EVC leverages AMD-V
Extended Migration technology (AMD) and Intel FlexMigration (Intel) in order to
come up with a common baseline processor, which in EVC terms is the EVC Mode.

In order to use EVC, hosts and VMs must meet the following requirements:
All VMs in the cluster that are using a feature set greater than the target EVC
mode must be powered off or migrated out of the cluster before enabling EVC.
All hosts must have CPUs from a single vendor.
All hosts must be running ESX/ESXi 3.5 Update 2 or higher.
All hosts must be connected to vCenter.
All hosts must have their advanced features enabled (AMD-V or Intel VT, as
well as No eXecute (NX) or Intel eXecute Disable (XD)).
All hosts should be configured for vMotion.
All hosts must have supported CPUs for the mode you enable.
Create an EVC Cluster
1. Create an empty cluster, enable EVC and select the desired EVC mode.
2. Select a host to move into the cluster.
3. If the host's feature set is greater than the EVC Mode, then do one of the following:
o Power off the VMs on the host
o Migrate the VMs to another host
4. Drag the host into the cluster.
Enable EVC on an existing cluster
1. Select the cluster.
2. If VMs are running on hosts that have feature sets greater than the desired
EVC Mode, you must power them off or migrate them to another host/cluster
and then migrate them back after enabling.
3. Ensure the cluster's hosts all use CPUs from a single vendor.
4. Edit the cluster settings.
5. Power the VMs back on and migrate them back.
Changing EVC Mode
If you raise the mode, be sure all hosts support the new mode. VMs can continue running, but
they will not have access to the new features available in the EVC mode until they are powered
off and back on. Just restarting the VM will not work; a full power cycle is required.
To lower the mode, you must power off the VMs that are utilizing a higher EVC mode, change the
mode, and power them back on.
Monitor a DRS/HA Cluster

There are a few different tabs in which you can monitor an HA/DRS cluster when
selecting a cluster.

Summary Tab
General box shows:
o Running status of HA/DRS
o EVC Mode
VMware HA box shows:
o Admission Control
o Current Failover Capacity – number of hosts available for failover
o Configured Failover Capacity – depends on the admission control policy
selected
o Status of Host/VM/Application Monitoring
o Advanced runtime info will show you the current slot size, the total
slots, used slots, available slots, failover slots, total powered-on VMs,
total hosts, and total good hosts.
o Cluster Status shows which host is the master and which are the
slaves, the number of protected and unprotected VMs, and which
datastores are being used for datastore heartbeating.
o Configuration Issues will display any configuration issues with the
hosts.
vSphere DRS box shows:
o Migration Automation Level
o DPM Automation Level
o Current number of DRS recommendations and faults
o Migration Threshold
o Target host load deviation and standard host load deviation
o The resource distribution chart will show you the sum of VM CPU
and memory utilization by host.
HA and DRS will also trigger different alerts across the top of the Summary
tab. In turn, vCenter will flag the host with either a warning or an
error.
DRS Tab
A more detailed look at recommendations, faults, and history.
The ability to trigger DRS and apply recommendations.
A cluster enabled for vSphere HA will turn red when the number of VMs powered on exceeds the
failover requirements. This only occurs if admission control is enabled; DRS will not be
affected by this.
Configure migration thresholds for DRS and virtual machines
I explained the DRS portion of migration thresholds above. You can, however,
override the automation level of the cluster on a per-VM basis by setting the VM's
automation level to either Disabled, Default (inherit from cluster), Manual, Partially
Automated, or Fully Automated.

Configure automation levels for DRS and virtual machines

Whoops, just mentioned this above.

Create VM-Host and VM-VM affinity rules

VM-VM Affinity/Anti-Affinity Rules
Specify whether VMs should run on the same host or be kept on
separate hosts.
You might want to keep VMs on the same host for performance reasons.
You might want to keep VMs separated to ensure certain VMs remain running if
one host fails.
If two VM-VM rules conflict with each other, the older rule will take precedence
over the newer one, and the newer one will be disabled.
DRS will also give higher precedence to preventing violations of anti-affinity
rules than violations of affinity rules.
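That precedence rule — older wins, newer gets disabled — can be sketched like this. The rule representation (dicts with a creation order) is made up for illustration:

```python
def resolve_rule_conflicts(rules):
    """Enable rules oldest-first; disable any newer rule that contradicts
    an already-enabled one (same VM pair, opposite affinity type)."""
    def contradicts(a, b):
        return a["vms"] == b["vms"] and a["affinity"] != b["affinity"]

    enabled = []
    for rule in sorted(rules, key=lambda r: r["created"]):
        rule["enabled"] = not any(contradicts(rule, e) for e in enabled)
        if rule["enabled"]:
            enabled.append(rule)
    return rules

pair = frozenset({"vm1", "vm2"})
rules = resolve_rule_conflicts([
    {"vms": pair, "affinity": False, "created": 2},  # newer anti-affinity rule
    {"vms": pair, "affinity": True, "created": 1},   # older affinity rule
])
```

Here the newer anti-affinity rule is disabled because the older affinity rule on the same VM pair takes precedence.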
VM-Host Affinity/Anti-Affinity Rules
Specify whether or not the VMs in a VM DRS group should or shouldn't run on
hosts in a host DRS group.
You may want to keep certain VMs running on certain hosts due to licensing
issues.
Options specify whether the rule is a hard rule (must/must not run on
hosts) or a soft rule (should/should not run on hosts).
Enable/Disable Host Monitoring

Host monitoring is one of the technologies that HA uses to determine whether or not
a host is isolated. Enabling and disabling this is quite simple and done through the
HA settings of the cluster: simply check/uncheck the Host Monitoring checkbox.

Enable/Configure/Disable virtual machine and application
monitoring

Virtual Machine Monitoring
Acts much like HA, however it will restart individual virtual machines if their
VMware Tools heartbeats are not received within a set time.
Enabled/disabled within the VM Monitoring section of the HA configuration
options on the cluster.
Monitoring sensitivity is configurable as follows:
o Low – the VM will restart if there is no heartbeat between the host and VM within 2
minutes. The VM will restart 3 times every 7 days.
o Medium – no heartbeat for 60 seconds, 3 restarts within 24 hrs.
o High – no heartbeat for 30 seconds, 3 restarts per hour.
o Custom – allows you to customize the interval, number of restarts, and time
frame.
Can have a global cluster setting as well as a per-VM setting.
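Those presets amount to a small lookup table. A sketch of the reset decision follows — illustrative only, not the actual HA agent logic:

```python
# Heartbeat-failure interval (seconds), max resets, and reset window (hours)
# for each sensitivity preset described above.
SENSITIVITY = {
    "low":    {"interval_s": 120, "max_resets": 3, "window_h": 7 * 24},
    "medium": {"interval_s": 60,  "max_resets": 3, "window_h": 24},
    "high":   {"interval_s": 30,  "max_resets": 3, "window_h": 1},
}

def should_reset_vm(sensitivity, secs_without_heartbeat, resets_in_window):
    """Reset a VM once its heartbeat has been missing longer than the
    preset interval, unless it has hit the reset cap for the window."""
    preset = SENSITIVITY[sensitivity]
    return (secs_without_heartbeat >= preset["interval_s"]
            and resets_in_window < preset["max_resets"])
```

The reset cap is what keeps a crash-looping VM from being restarted indefinitely.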
Application Monitoring
Restarts individual VMs if their VMware Tools application heartbeats are not
received within a set time.
Enabled/disabled within the VM Monitoring section of the HA configuration
options on the cluster.
In order to use application monitoring, you must obtain the appropriate SDK,
or use an application that supports VMware application monitoring, and set it
up to send heartbeats.
Deployed on a per-VM basis. I believe it uses the same monitoring sensitivity
settings as VM Monitoring.
Configure admission control for HA and virtual machines

Admission control is used to ensure that sufficient resources are available in a
cluster to provide failover protection and to ensure that virtual machines get their
reservations respected. Admission control could prevent you from
powering on a VM, migrating a VM into a cluster, or increasing the amount of
resources allotted to a VM. Even when admission control is disabled, vSphere will
ensure that at least two hosts are powered on in a cluster and that all VMs are able
to be consolidated onto a single host.

There are three types of Admission Control policies that you can use for HA:

Host Failures Cluster Tolerates
Specify the number of host failures that a cluster can tolerate.
vSphere will reserve the required resources to restart all the VMs on those
failed hosts.
It does this by:
o Calculating a slot size – a slot is a logical representation of CPU and
memory for any powered-on VM in the cluster. The CPU slot is determined by
the largest CPU reservation of any powered-on VM; if there are no
reservations, it uses a default value of 32 MHz. It calculates the memory
slot by obtaining the largest memory reservation plus overhead. No
default here.
o Determining the number of slots in the cluster.
o Determining the current failover capacity of the cluster – the number of
hosts that can fail and still leave enough slots to satisfy all VMs.
o Determining whether the current failover capacity is less than the
configured failover capacity. If it is, admission control will deny the
requested operation.
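The slot mechanism can be sketched roughly as follows. This is a simplification — the real calculation also applies slot-size caps and vCPU counts — and all the host/VM numbers are illustrative:

```python
def slot_size(vms):
    """Slot = largest CPU reservation (32 MHz default if none set) and
    the largest memory reservation plus overhead, across powered-on VMs."""
    cpu = max(vm["cpu_res"] for vm in vms) or 32   # MHz
    mem = max(vm["mem_res"] + vm["overhead"] for vm in vms)  # MB
    return cpu, mem

def current_failover_capacity(hosts, vms):
    """How many hosts can fail (largest first, worst case) while still
    leaving enough slots to power on every VM."""
    cpu_slot, mem_slot = slot_size(vms)
    slots = [min(h["cpu"] // cpu_slot, h["mem"] // mem_slot) for h in hosts]
    slots.sort(reverse=True)
    failed, remaining = 0, sum(slots)
    for s in slots:
        if remaining - s < len(vms):
            break
        remaining -= s
        failed += 1
    return failed

# Three identical hosts; six VMs, biggest reservation 1000 MHz / 2200 MB.
hosts = [{"cpu": 9000, "mem": 32000}] * 3
vms = [{"cpu_res": 1000, "mem_res": 2000, "overhead": 200}] * 6
```

With 9 slots per host and 6 VMs, two hosts can fail and the cluster can still satisfy every VM — so a configured capacity of 3 would fail admission.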
Percentage of Cluster Resources Reserved
HA will reserve a specific percentage of cluster CPU and memory for recovery
from host failures.
It does this by:
o Calculating the total resource requirements for all powered-on VMs.
o Calculating the total host resources available for VMs.
o Calculating the current CPU failover capacity and current memory
failover capacity.
o Determining if the current CPU or memory failover capacity is less than the
configured capacity. If so, it denies the operation.
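A sketch of that percentage check — illustrative only; the real calculation works from reservations plus overhead rather than the raw demand numbers used here:

```python
def passes_percentage_policy(vms, hosts, reserved_pct=25.0):
    """True while the unreserved CPU and memory capacity both stay at or
    above the configured failover reserve percentage."""
    cpu_total = sum(h["cpu"] for h in hosts)
    mem_total = sum(h["mem"] for h in hosts)
    cpu_used = sum(vm["cpu"] for vm in vms)
    mem_used = sum(vm["mem"] for vm in vms)
    cpu_free_pct = (cpu_total - cpu_used) / cpu_total * 100
    mem_free_pct = (mem_total - mem_used) / mem_total * 100
    return cpu_free_pct >= reserved_pct and mem_free_pct >= reserved_pct
```

Note that either resource dropping below the reserve is enough to deny an operation — CPU and memory capacity are tracked separately.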
Specify Failover Hosts
Pretty simple: you specify the hosts you want to use for failover.
These hosts will then not be available to run VMs; they are set aside for HA.
HA admission control is a complicated thing, but easy to set up: simply select your policy from
the HA configuration options in the cluster configuration.
Determine appropriate failover methodology and required
resources for an HA implementation

Policies should be picked based on your availability needs and the characteristics of
your cluster. You should certainly consider the following:

Resource Fragmentation
Occurs when there are enough resources available, but they are spread across multiple
hosts, so that no one host has enough resources to run the VM.
The Host Failures Cluster Tolerates policy avoids this by using its slot mechanism.
The percentage policy does not, since it's looking at a percentage of resources
based on the cluster itself.
Flexibility of Failover Resource Reservation
Host Failures allows you to specify the number of hosts that can fail.
Percentage allows you to look at the cluster resources as a whole.
Failover hosts allows you to determine where and which hosts will be used.
Heterogeneity of Cluster
When using large virtual machines, the Host Failures Cluster Tolerates slot size
will be impacted and grow very large, thus giving you unexpected results,
especially if you use reservations.
The remaining two policies are not as affected by the "monster VM".
