
AHV Administration Guide

Acropolis 5.0
26-Jan-2017
Notice

Copyright
Copyright 2017 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.

License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.

Conventions
Convention Description

variable_value The action depends on a value that is unique to your environment.

ncli> command The commands are executed in the Nutanix nCLI.

user@host$ command The commands are executed as a non-privileged user (such as nutanix)
in the system shell.

root@host# command The commands are executed as the root user in the vSphere or Acropolis
host shell.

> command The commands are executed in the Hyper-V host shell.

output The information is displayed as output from a command or in a log file.

Default Cluster Credentials


Interface                  Target                                           Username        Password

Nutanix web console        Nutanix Controller VM                            admin           admin
vSphere Web Client         ESXi host                                        root            nutanix/4u
vSphere client             ESXi host                                        root            nutanix/4u
SSH client or console      ESXi host                                        root            nutanix/4u
SSH client or console      AHV host                                         root            nutanix/4u
SSH client or console      Hyper-V host                                     Administrator   nutanix/4u
SSH client                 Nutanix Controller VM                            nutanix         nutanix/4u
SSH client or console      Acropolis OpenStack Services VM (Nutanix OVM)    root            admin

Version
Last modified: January 26, 2017 (2017-01-26 22:32:23 GMT-8)



Contents

1: Node Management...................................................................................5
Controller VM Access.......................................................................................................................... 5
Shutting Down a Node in a Cluster (AHV)......................................................................................... 5
Starting a Node in a Cluster (AHV).................................................................................................... 6
Changing CVM Memory Configuration (AHV).....................................................................................7
Changing the Acropolis Host Name.................................................................................................... 8
Changing the Acropolis Host Password..............................................................................................8
Upgrading the KVM Hypervisor to Use Acropolis Features................................................................ 9
Nonconfigurable AHV Components...................................................................................................11

2: Controller VM Memory Configurations............................................... 13


CVM Memory and vCPU Configurations (G5/Broadwell)..................................................................13
Platform Workload Translation (G5/Broadwell).......................................................................14
CVM Memory and vCPU Configurations (G4/Haswell/Ivy Bridge).................................................... 14
CVM Memory Configurations for Features........................................................................................15

3: Host Network Management.................................................................. 17


Prerequisites for Configuring Networking.......................................................................................... 17
Recommendations for Configuring Networking in an Acropolis Cluster............................................17
Layer 2 Network Management with Open vSwitch........................................................................... 19
About Open vSwitch............................................................................................................... 19
Default Factory Configuration................................................................................................. 20
Viewing the Network Configuration.........................................................................................21
Creating an Open vSwitch Bridge.......................................................................................... 23
Configuring an Open vSwitch Bond with Desired Interfaces..................................................23
Virtual Network Segmentation with VLANs............................................................................ 24
Changing the IP Address of an Acropolis Host................................................................................ 27

4: Virtual Machine Network Management................................................28


Configuring 1 GbE Connectivity for Guest VMs................................................................................28
Configuring a Virtual NIC to Operate in Access or Trunk Mode....................................................... 29
Virtual Machine Memory and CPU Configurations............................................................................30
Hot-Plugging the Memory and CPUs on Virtual Machines (AHV)..........................................31

5: Event Notifications................................................................................ 33
List of Events..................................................................................................................................... 33
Creating a Webhook.......................................................................................................................... 34
Listing Webhooks...............................................................................................................................35
Updating a Webhook......................................................................................................................... 36
Deleting a Webhook.......................................................................................................................... 37

1: Node Management

Controller VM Access
Most administrative functions of a Nutanix cluster can be performed through the web console or nCLI.
Nutanix recommends using these interfaces whenever possible and disabling Controller VM SSH access
with password or key authentication. Some functions, however, require logging on to a Controller VM
with SSH. Exercise caution whenever connecting directly to a Controller VM as the risk of causing cluster
issues is increased.

Warning: When you connect to a Controller VM with SSH, ensure that the SSH client does
not import or change any locale settings. The Nutanix software is not localized, and executing
commands with any locale other than en_US.UTF-8 can cause severe cluster issues.
To check the locale used in an SSH session, run /usr/bin/locale. If any environment variables
are set to anything other than en_US.UTF-8, reconnect with an SSH configuration that does not
import or change any locale settings.
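For example, in a correctly configured session, every variable reported by the locale command is set to en_US.UTF-8 (a representative sample is shown; the full list of variables varies by system):
nutanix@cvm$ /usr/bin/locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"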

Shutting Down a Node in a Cluster (AHV)


Before you begin: Shut down guest VMs that are running on the node, or move them to other nodes in
the cluster.
Caution: Verify the data resiliency status of your cluster. A cluster with replication factor 2
(RF2) can tolerate only one node being shut down at a time. If you need to shut down more than
one node in an RF2 cluster, shut down the entire cluster instead.

1. If the Controller VM is running, shut down the Controller VM.

a. Log on to the Controller VM with SSH.

b. Put the node into maintenance mode.


nutanix@cvm$ acli host.enter_maintenance_mode host [mode="{ live | cold |
power_off }" ] [use_ha_reservations="{ true | false }" ] [wait="{ true | false }" ]

c. Shut down the Controller VM.


nutanix@cvm$ cvm_shutdown -P now

2. Log on to the Acropolis host with SSH.

3. Shut down the host.


root@ahv# shutdown -h now



Starting a Node in a Cluster (AHV)

1. Log on to the Acropolis host with SSH.

2. Find the name of the Controller VM.


root@ahv# virsh list --all | grep CVM

Make a note of the Controller VM name in the second column.

3. Determine if the Controller VM is running.


If the Controller VM is off, a line similar to the following should be returned:
- NTNX-12AM2K470031-D-CVM shut off

Make a note of the Controller VM name in the second column.


If the Controller VM is on, a line similar to the following should be returned:
- NTNX-12AM2K470031-D-CVM running

4. If the Controller VM is shut off, start it.


root@ahv# virsh start cvm_name

Replace cvm_name with the name of the Controller VM that you found from the preceding command.

5. If the node is in maintenance mode, log on to the Controller VM and take the node out of maintenance
mode.
nutanix@cvm$ acli
<acropolis> host.exit_maintenance_mode host
<acropolis> exit

6. Log on to another Controller VM in the cluster with SSH.

7. Verify that all services are up on all Controller VMs.


nutanix@cvm$ cluster status

If the cluster is running properly, output similar to the following is displayed for each node in the cluster:

CVM: 10.1.64.60 Up
Zeus UP [3704, 3727, 3728, 3729, 3807, 3821]
Scavenger UP [4937, 4960, 4961, 4990]
SSLTerminator UP [5034, 5056, 5057, 5139]
Hyperint UP [5059, 5082, 5083, 5086, 5099, 5108]
Medusa UP [5534, 5559, 5560, 5563, 5752]
DynamicRingChanger UP [5852, 5874, 5875, 5954]
Pithos UP [5877, 5899, 5900, 5962]
Stargate UP [5902, 5927, 5928, 6103, 6108]
Cerebro UP [5930, 5952, 5953, 6106]
Chronos UP [5960, 6004, 6006, 6075]
Curator UP [5987, 6017, 6018, 6261]
Prism UP [6020, 6042, 6043, 6111, 6818]
CIM UP [6045, 6067, 6068, 6101]
AlertManager UP [6070, 6099, 6100, 6296]
Arithmos UP [6107, 6175, 6176, 6344]
SysStatCollector UP [6196, 6259, 6260, 6497]
Tunnel UP [6263, 6312, 6313]



ClusterHealth UP [6317, 6342, 6343, 6446, 6468, 6469, 6604, 6605,
6606, 6607]
Janus UP [6365, 6444, 6445, 6584]
NutanixGuestTools UP [6377, 6403, 6404]

Changing CVM Memory Configuration (AHV)


Before you begin: Perform these steps once for each Controller VM in the cluster if you need to change
the Controller VM memory allocation.

Caution: To avoid impacting cluster availability, shut down one Controller VM at a time. Wait until
cluster services are up before proceeding to the next Controller VM.

1. Log on to the Acropolis host with SSH.

2. Find the name of the Controller VM.


root@ahv# virsh list --all | grep CVM

Make a note of the Controller VM name in the second column.

3. Stop the Controller VM.


root@ahv# virsh shutdown cvm_name

Replace cvm_name with the name of the Controller VM that you found from the preceding command.

4. Check the current memory and CPU settings.


root@ahv# virsh dumpxml cvm_name | egrep -i "cpu|memory"

Replace cvm_name with the name of the Controller VM that you found in step 2.

5. Increase the memory of the Controller VM (if needed), depending on your configuration settings for
deduplication and other advanced features.
See CVM Memory and vCPU Configurations (G4/Haswell/Ivy Bridge) on page 14 for memory sizing
guidelines.
root@ahv# virsh setmaxmem cvm_name --config --size ram_gbGiB
root@ahv# virsh setmem cvm_name --config --size ram_gbGiB

Replace cvm_name with the name of the Controller VM and ram_gb with the recommended amount
from the sizing guidelines in GiB (for example, 1GiB).
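For example, the following hypothetical commands set a Controller VM named NTNX-12AM2K470031-D-CVM to 32 GiB; substitute the Controller VM name and the memory size that apply to your cluster:
root@ahv# virsh setmaxmem NTNX-12AM2K470031-D-CVM --config --size 32GiB
root@ahv# virsh setmem NTNX-12AM2K470031-D-CVM --config --size 32GiB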

6. Start the Controller VM.


root@ahv# virsh start cvm_name

7. Log on to the Controller VM.


Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

8. Confirm that cluster services are running on the Controller VM.


nutanix@cvm$ cluster status



Changing the Acropolis Host Name
To change the name of an Acropolis host, do the following:

1. Log on to the Acropolis host with SSH.

2. Use a text editor such as vi to set the value of the HOSTNAME parameter in the /etc/sysconfig/network file.
HOSTNAME=my_hostname

Replace my_hostname with the name that you want to assign to the host.

3. Use the text editor to replace the host name in the /etc/hostname file.
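For example, with a hypothetical host name of ahv-node-01, you can verify both files as follows:
root@ahv# grep HOSTNAME /etc/sysconfig/network
HOSTNAME=ahv-node-01
root@ahv# cat /etc/hostname
ahv-node-01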

4. Restart the Acropolis host.

Changing the Acropolis Host Password


Tip: Although it is not required for the root user to have the same password on all hosts, doing so
makes cluster management and support much easier. If you do select a different password for one
or more hosts, make sure to note the password for each host.
Perform these steps on every Acropolis host in the cluster.

1. Log on to the Acropolis host with SSH.

2. Change the root password.


root@ahv# passwd root

3. Respond to the prompts, providing the current and new root password.
Changing password for root.
Old Password:
New password:
Retype new password:
Password changed.

The password you choose must meet the following complexity requirements:
In configurations with high-security requirements, the password must contain:
At least 15 characters.
At least one upper case letter (A-Z).
At least one lower case letter (a-z).
At least one digit (0-9).
At least one printable ASCII special (non-alphanumeric) character. For example, a tilde (~),
exclamation point (!), at sign (@), number sign (#), or dollar sign ($).
At least eight characters different from the previous password.
At most three consecutive occurrences of any given character.
The password cannot be the same as the last 24 passwords.

In configurations without high-security requirements, the password must contain:
At least eight characters.
At least one upper case letter (A-Z).
At least one lower case letter (a-z).
At least one digit (0-9).
At least one printable ASCII special (non-alphanumeric) character. For example, a tilde (~),
exclamation point (!), at sign (@), number sign (#), or dollar sign ($).
At least three characters different from the previous password.
At most three consecutive occurrences of any given character.
The password cannot be the same as the last 10 passwords.

In both types of configuration, if a password for an account is entered three times unsuccessfully within
a 15-minute period, the account is locked for 15 minutes.

Upgrading the KVM Hypervisor to Use Acropolis Features


Before you begin:

Note: If you are currently deploying NOS 4.1.x/4.1.1.x and later, and previously upgraded to an
Acropolis-compatible version of the KVM hypervisor (for example, version KVM-20150120):
Do not use the script or procedure described in this topic.
Upgrade to the latest available Nutanix version of the KVM hypervisor using the Upgrade
Software feature through the Prism web console. See Software and Firmware Upgrades in the
Web Console Guide for the upgrade instructions.

Use this procedure if you are currently using a legacy, non-Acropolis version of KVM and want to use the
Acropolis distributed VM management service features. The first generally-available Nutanix KVM version
with Acropolis is KVM-20150120; the Nutanix support portal always makes the latest version available.

How to Check Your AHV Version

Use one of the following procedures to determine which hypervisor version you are running:

Log in to the hypervisor host and type cat /etc/nutanix-release. For example, the result
el6.nutanix.2015412 indicates that you are running an Acropolis-compatible hypervisor. The minimum
result for AHV is el6.nutanix.20150120.

Log in to the hypervisor host and type cat /etc/centos-release. For example, the result
CentOS release 6.6 (Final) indicates that you are running an Acropolis-compatible hypervisor. Any
result that returns CentOS 6.4 or earlier is non-Acropolis (that is, KVM).

Log in to the Prism web console and view the Hypervisor Summary on the home page. If it shows a
version of 20150120 or later, you are running AHV.



Upgrading the KVM Hypervisor to Use Acropolis

Current NOS and KVM Version                   Do This

NOS 3.5.5 and KVM CentOS 6.4                  1. Upgrade KVM using the upgrade script.
                                              2. Import existing VMs.

NOS 3.5.4.6 or earlier and KVM CentOS 6.4     1. Upgrade to NOS 3.5.5.
                                              2. Upgrade KVM using the upgrade script.
                                              3. Import existing VMs.

NOS 4.0.2/4.0.2.x and KVM CentOS 6.4          1. Upgrade KVM using the upgrade script.
                                              2. Import existing VMs.

NOS 4.1 and KVM CentOS 6.4                    1. Upgrade KVM using the upgrade script.
                                              2. Import existing VMs.

Note:
See the Nutanix Support Portal for the latest information on Acropolis Upgrade Paths.
This procedure requires that you shut down any VMs running on the host and leave them off
until the hypervisor and the AOS upgrade is completed.
Do not run the upgrade script on the same Controller VM where you are upgrading the node's
hypervisor. You can run it from another Controller VM in the cluster.

1. Download the hypervisor upgrade bundle from the Nutanix support portal at the Downloads link.
You must copy this bundle to the Controller VM you are upgrading. This procedure assumes you copy it
to and extract it from the /home/nutanix directory.

2. Log on to the Controller VM of the hypervisor host to be upgraded, shut down each guest VM running
on that host, and then shut down the Controller VM.

a. Shut down each VM, specified by vm_name, running on the host to be upgraded.
nutanix@cvm$ virsh shutdown vm_name

b. Shut down the Controller VM once all VMs are powered off.
nutanix@cvm$ sudo shutdown -h now

3. Log on to a different Controller VM in the cluster with SSH.

4. Copy the upgrade bundle that you downloaded to /home/nutanix on this Controller VM, and then extract
the upgrade tar file. This step assumes that you are performing the procedures from /home/nutanix.
nutanix@cvm$ tar -xzvf upgrade_kvm-el6.nutanix.version.tar.gz

Alternatively, download the bundle directly to the Controller VM and then extract it.
nutanix@cvm$ curl -O http://download.nutanix.com/hypervisor/kvm/upgrade_kvm-el6.nutanix.20150120.tar.gz
nutanix@cvm$ tar -xzvf upgrade_kvm-el6.nutanix.20150120.tar.gz



Note: You can also download this package from the Nutanix support portal from the
Downloads link.

The tar file extracts to the upgrade_kvm directory.

5. Change to the upgrade_kvm/bin directory and run the upgrade_kvm upgrade script, where host_ip is the
IP address of the hypervisor host to be upgraded (the host whose Controller VM you shut down in
step 2).
nutanix@cvm$ cd upgrade_kvm/bin
nutanix@cvm$ ./upgrade_kvm --host_ip host_ip
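For example, if the hypervisor host being upgraded has the hypothetical IP address 192.0.2.50:
nutanix@cvm$ ./upgrade_kvm --host_ip 192.0.2.50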

The Controller VM of the upgraded host restarts and messages similar to the following are displayed.
This message shows the first generally-available KVM version with Acropolis (KVM-20150120).
...

2014-11-07 09:11:50 INFO host_upgrade_helper.py:1733 Found kernel version: version_number.el6.nutanix.20150120.x86_64
2014-11-07 09:11:50 INFO host_upgrade_helper.py:1588 Current hypervisor version: el6.nutanix.20150120
2014-11-07 09:11:50 INFO upgrade_kvm:161 Running post-upgrade
2014-11-07 09:11:51 INFO host_upgrade_helper.py:1716 Found upgrade marker: el6.nutanix.20150120
2014-11-07 09:11:52 INFO host_upgrade_helper.py:1733 Found kernel version: version_number.el6.nutanix.20150120
2014-11-07 09:11:52 INFO host_upgrade_helper.py:2036 Removing old kernel
2014-11-07 09:12:00 INFO host_upgrade_helper.py:2048 Updating release marker
2014-11-07 09:12:00 INFO upgrade_kvm:165 Upgrade complete

6. Log on to the upgraded Controller VM and verify that cluster services have started by noting that all
services are listed as UP .
nutanix@cvm$ cluster status

7. Repeat these steps for all hosts in the cluster.


Note: You need to upgrade the hypervisor for every host in your cluster before upgrading the
AOS/NOS on your cluster.

After the hypervisor is upgraded, you can now import any existing powered-off VMs according to
procedures described in the Acropolis App Mobility Fabric Guide.

Nonconfigurable AHV Components


The components listed here are configured by the Nutanix manufacturing and installation processes. Do
not modify any of these components except under the direction of Nutanix Support.

Warning: Modifying any of the settings listed here may render your cluster inoperable.

Warning: You must not run any commands on a Controller VM that are not covered in the Nutanix
documentation.

Nutanix Software

Settings and contents of any Controller VM, including the name and the virtual hardware configuration
(except memory when required to enable certain features)



AHV Settings

Hypervisor configuration, including installed packages


iSCSI settings
Open vSwitch settings
Taking snapshots of the Controller VM



2: Controller VM Memory Configurations
Controller VM memory allocation requirements differ depending on the models and the features that are
being used.

CVM Memory and vCPU Configurations (G5/Broadwell)


This topic lists the recommended Controller VM memory allocations for workload categories.

Note: Nutanix Engineering has determined that memory requirements for each Controller VM in
your cluster are likely to increase for subsequent releases. Nutanix recommends that you plan to
upgrade memory.

Controller VM Memory Configurations for Base Models

Platform Default

Platform                                                           Recommended Memory (GB)    Default Memory (GB)    vCPUs
Default configuration for all platforms unless otherwise noted     16                         16                     8

The following table shows the minimum amount of memory required for the Controller VM on each node for
platforms that do not follow the default. For the workload translation into models, see Platform Workload
Translation (G5/Broadwell) on page 14.
Note: To calculate the number of vCPUs for your model, use the number of physical cores per
socket in your model. The minimum number of vCPUS your Controller VM can have is eight and
the maximum number is 12.
If your CPU has less than eight logical cores, allocate a maximum of 75 percent of the cores of a
single CPU to the Controller VM. For example, if your CPU has 6 cores, allocate 4 vCPUs.

Nutanix Broadwell Models

The following table displays the categories for the platforms.


Platform                                      Default Memory (GB)
VDI, server virtualization                    16
Storage Heavy                                 24
Storage Only                                  24
Large server, high-performance, all-flash     32

Platform Workload Translation (G5/Broadwell)


The following table maps workload types to the corresponding Nutanix and Lenovo models.

Workload Features                  Nutanix NX Model    Lenovo HX Model

VDI                                NX-1065S-G5         HX3310
                                   SX-1065-G5          HX3310-F
                                   NX-1065-G5          HX2310-E
                                   NX-3060-G5          HX3510-G
                                   NX-3155G-G5         HX3710
                                   NX-3175-G5          HX3710-F
                                   -                   HX2710-E

Storage Heavy                      NX-6155-G5          HX5510
                                   NX-8035-G5          -
                                   NX-6035-G5          -

Storage Only nodes                 NX-6035C-G5         HX5510-C

High Performance and All-Flash     NX-8150-G5          HX7510
                                   NX-9060-G5          -

CVM Memory and vCPU Configurations (G4/Haswell/Ivy Bridge)


This topic lists the recommended Controller VM memory allocations for models and features.
Note: Nutanix Engineering has determined that memory requirements for each Controller VM in
your cluster are likely to increase for subsequent releases. Nutanix recommends that you plan to
upgrade memory.

Controller VM Memory Configurations for Base Models

Platform Default

Platform                                                           Recommended Memory (GB)    Default Memory (GB)    vCPUs
Default configuration for all platforms unless otherwise noted     16                         16                     8

The following tables show the minimum amount of memory and vCPU requirements and recommendations
for the Controller VM on each node for platforms that do not follow the default.



Nutanix Platforms

Platform       Recommended Memory (GB)    Default Memory (GB)    vCPUs

NX-1020        12                         12                     4
NX-6035C       24                         24                     8
NX-6035-G4     24                         16                     8
NX-8150        32                         32                     8
NX-8150-G4     32                         32                     8
NX-9040        32                         16                     8
NX-9060-G4     32                         16                     8

Dell Platforms

Platform       Recommended Memory (GB)    Default Memory (GB)    vCPUs

XC730xd-24     32                         16                     8
XC6320-6AF     32                         16                     8
XC630-10AF     32                         16                     8

Lenovo Platforms

Platform    Default Memory (GB)    vCPUs

HX-3500     24                     8
HX-5500     24                     8
HX-7500     24                     8

CVM Memory Configurations for Features


The following table lists the minimum amount of memory required when enabling features.
The memory size requirements are in addition to the default or recommended memory available for your
platform. The maximum additional memory required is 16 GB even if the total indicated for the features is
more than that.
Note: Total CVM memory required = recommended platform memory + memory required for each
enabled feature (max 16 GB)

Features                                                                  Memory (GB)

Capacity tier deduplication (includes performance tier deduplication)     16
Redundancy factor 3                                                       8
Performance tier deduplication                                            8
Cold-tier nodes + capacity tier deduplication                             4
Capacity tier deduplication + redundancy factor 3                         16
Self-service portal (AHV only)                                            Variable

Note:
SSP requires a minimum of 24 GB of memory for the CVM. If the CVMs
already have 24 GB of memory, no additional memory is necessary to run
SSP.
If the CVMs have less than 24 GB of memory, increase the memory to 24 GB
to use SSP.
If the cluster is using any other features that require additional CVM memory,
add 4 GB for SSP in addition to the amount needed for the other features.
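For example, on a platform with a recommended Controller VM memory of 24 GB, enabling capacity tier deduplication together with redundancy factor 3 adds 16 GB (per the table above), for a total of 24 GB + 16 GB = 40 GB per Controller VM. Even if the enabled features individually total more than 16 GB, the additional memory required is capped at 16 GB.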



3: Host Network Management
Network management in an Acropolis cluster consists of the following tasks:
Configuring Layer 2 switching through Open vSwitch. When configuring Open vSwitch, you configure
bridges, bonds, and VLANs.
Optionally changing the IP address, netmask, and default gateway that were specified for the hosts
during the imaging process.

Prerequisites for Configuring Networking


Change the configuration from the factory default to the recommended configuration. See Default Factory
Configuration on page 20 and Recommendations for Configuring Networking in an Acropolis Cluster on
page 17.

Recommendations for Configuring Networking in an Acropolis Cluster


Nutanix recommends that you perform the following OVS configuration tasks from the Controller VM, as
described in this documentation:
Viewing the network configuration
Configuring an Open vSwitch bond with desired interfaces
Assigning the Controller VM to a VLAN
For performing other OVS configuration tasks, such as adding an interface to a bridge and configuring
LACP for the interfaces in an OVS bond, log on to the AHV host, and then follow the procedures described
in the OVS documentation at http://openvswitch.org/.
Nutanix recommends that you configure the network as follows:

Recommended Network Configuration

Network Component Best Practice

Open vSwitch Do not modify the OpenFlow tables that are associated with the default
OVS bridge br0.

VLANs Add the Controller VM and the AHV host to the same VLAN. By default,
the Controller VM and the hypervisor are assigned to VLAN 0, which
effectively places them on the native VLAN configured on the upstream
physical switch.
Do not add any other device, including guest VMs, to the VLAN to which
the Controller VM and hypervisor host are assigned. Isolate guest VMs on
one or more separate VLANs.


Virtual bridges Do not delete or rename OVS bridge br0.


Do not modify the native Linux bridge virbr0.

OVS bonded port (bond0) Aggregate the 10 GbE interfaces on the physical host to an OVS bond
on the default OVS bridge br0 and trunk these interfaces on the physical
switch.
By default, the 10 GbE interfaces in the OVS bond operate in the
recommended active-backup mode. LACP configurations are known to
work, but support might be limited.

1 GbE and 10 GbE interfaces If you want to use the 10 GbE interfaces for guest VM traffic, make sure
(physical host) that the guest VMs do not use the VLAN over which the Controller VM and
hypervisor communicate.
If you want to use the 1 GbE interfaces for guest VM connectivity, follow
the hypervisor manufacturer's switch port and networking configuration
guidelines.
Do not include the 1 GbE interfaces in the same bond as the 10 GbE
interfaces. Also, to avoid loops, do not add the 1 GbE interfaces to bridge
br0, either individually or in a second bond. Use them on other bridges.

IPMI port on the hypervisor Do not trunk switch ports that connect to the IPMI interface. Configure the
host switch ports as access ports for management simplicity.

Upstream physical switch Nutanix does not recommend the use of Fabric Extenders (FEX)
or similar technologies for production use cases. While initial, low-
load implementations might run smoothly with such technologies,
poor performance, VM lockups, and other issues might occur as
implementations scale upward (see Knowledge Base article KB1612).
Nutanix recommends the use of 10Gbps, line-rate, non-blocking switches
with larger buffers for production workloads.
Use an 802.3-2012 standards-compliant switch that has a low-latency,
cut-through design and provides predictable, consistent traffic latency
regardless of packet size, traffic pattern, or the features enabled on
the 10 GbE interfaces. Port-to-port latency should be no higher than 2
microseconds.
Use fast-convergence technologies (such as Cisco PortFast) on switch
ports that are connected to the hypervisor host.
Avoid using shared buffers for the 10 GbE ports. Use a dedicated buffer for
each port.

Physical Network Layout Use redundant top-of-rack switches in a traditional leaf-spine architecture.
This simple, flat network design is well suited for a highly distributed,
shared-nothing compute and storage architecture.
Add all the nodes that belong to a given cluster to the same Layer-2
network segment.
Other network layouts are supported as long as all other Nutanix
recommendations are followed.


Controller VM Do not remove the Controller VM from either the OVS bridge br0 or the
native Linux bridge virbr0.

This diagram shows the recommended network configuration for an Acropolis cluster. The interfaces in the
diagram are connected with colored lines to indicate membership to different VLANs:

Figure: Recommended network configuration for an Acropolis cluster

Layer 2 Network Management with Open vSwitch


AHV uses Open vSwitch to connect the Controller VM, the hypervisor, and the guest VMs to each other
and to the physical network. The OVS package is installed by default on each Acropolis node and the OVS
services start automatically when you start a node.
To configure virtual networking in an Acropolis cluster, you need to be familiar with OVS. This
documentation gives you a brief overview of OVS and the networking components that you need to
configure to enable the hypervisor, Controller VM, and guest VMs to connect to each other and to the
physical network.

About Open vSwitch


Open vSwitch (OVS) is an open-source software switch implemented in the Linux kernel and designed to
work in a multiserver virtualization environment. By default, OVS behaves like a Layer 2 learning switch
that maintains a MAC address learning table. The hypervisor host and VMs connect to virtual ports on the
switch. Nutanix uses the OpenFlow protocol to configure and communicate with Open vSwitch.
Each hypervisor hosts an OVS instance, and all OVS instances combine to form a single switch. As an
example, the following diagram shows OVS instances running on two hypervisor hosts.



Figure: Open vSwitch

Default Factory Configuration


The factory configuration of an Acropolis host includes a default OVS bridge named br0 and a native linux
bridge called virbr0.
Bridge br0 includes the following ports by default:
An internal port with the same name as the default bridge; that is, an internal port named br0. This is the
access port for the hypervisor host.
A bonded port named bond0. The bonded port aggregates all the physical interfaces available on the
node. For example, if the node has two 10 GbE interfaces and two 1 GbE interfaces, all four interfaces
are aggregated on bond0. This configuration is necessary for Foundation to successfully image the
node regardless of which interfaces are connected to the network.
Note: Before you begin configuring a virtual network on a node, you must disassociate the
1 GbE interfaces from the bond0 port. See Configuring an Open vSwitch Bond with Desired
Interfaces on page 23.

The following diagram illustrates the default factory configuration of OVS on an Acropolis node:



Figure: Default factory configuration of Open vSwitch in AHV

The Controller VM has two network interfaces. As shown in the diagram, one network interface connects to
bridge br0. The other network interface connects to a port on virbr0. The Controller VM uses this bridge to
communicate with the hypervisor host.

Viewing the Network Configuration


Use the following commands to view the configuration of the network elements.
Before you begin: Log on to the Acropolis host with SSH.

To show interface properties such as link speed and status, log on to the Controller VM, and then list the
physical interfaces.
nutanix@cvm$ manage_ovs show_interfaces

Output similar to the following is displayed:

name mode link speed


eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000

To show the ports and interfaces that are configured as uplinks, log on to the Controller VM, and then
list the uplink configuration.
nutanix@cvm$ manage_ovs --bridge_name bridge show_uplinks

Replace bridge with the name of the bridge for which you want to view uplink information. Omit the --
bridge_name parameter if you want to view uplink information for the default OVS bridge br0.



Output similar to the following is displayed:

Uplink ports: bond0


Uplink ifaces: eth1 eth0

To show the virtual switching configuration, log on to the Acropolis host with SSH, and then list the
configuration of Open vSwitch.
root@ahv# ovs-vsctl show

Output similar to the following is displayed:

59ce3252-f3c1-4444-91d1-b5281b30cdba
Bridge "br0"
Port "br0"
Interface "br0"
type: internal
Port "vnet0"
Interface "vnet0"
Port "br0-arp"
Interface "br0-arp"
type: vxlan
options: {key="1", remote_ip="192.168.5.2"}
Port "bond0"
Interface "eth3"
Interface "eth2"
Port "bond1"
Interface "eth1"
Interface "eth0"
Port "br0-dhcp"
Interface "br0-dhcp"
type: vxlan
options: {key="1", remote_ip="192.0.2.131"}
ovs_version: "2.3.1"

To show the configuration of an OVS bond, log on to the Acropolis host with SSH, and then list the
configuration of the bond.
root@ahv# ovs-appctl bond/show bond_name

For example, show the configuration of bond0.


root@ahv# ovs-appctl bond/show bond0

Output similar to the following is displayed:

---- bond0 ----


bond_mode: active-backup
bond may use recirculation: no, Recirc-ID : -1
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off
active slave mac: 0c:c4:7a:48:b2:68(eth0)

slave eth0: enabled


active slave
may_enable: true

slave eth1: disabled


may_enable: false



Creating an Open vSwitch Bridge
To create an OVS bridge, do the following:

1. Log on to the Acropolis host with SSH.

2. Log on to the Controller VM.


root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

3. Create an OVS bridge on each host in the cluster.


nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br bridge'

Replace bridge with a name for the bridge. The output does not indicate success explicitly, so you can
append && echo success to the command. If the bridge is created, the text success is displayed.
For example, create a bridge and name it br1.
nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success'

Output similar to the following is displayed:

nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success'
Executing ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success on the
cluster
================== 192.0.2.203 =================
FIPS mode initialized
Nutanix KVM
success
...

Configuring an Open vSwitch Bond with Desired Interfaces


When creating an OVS bond, you can specify the interfaces that you want to include in the bond.
Use this procedure to create a bond that includes a desired set of interfaces or to specify a new set of
interfaces for an existing bond. If you are modifying an existing bond, AHV removes the bond and then re-
creates the bond with the specified interfaces.

Note: Perform this procedure on factory-configured nodes to remove the 1 GbE interfaces from
the bonded port bond0. You cannot configure failover priority for the interfaces in an OVS bond, so
the disassociation is necessary to help prevent any unpredictable performance issues that might
result from a 10 GbE interface failing over to a 1 GbE interface. Nutanix recommends that you
aggregate only the 10 GbE interfaces on bond0 and use the 1 GbE interfaces on a separate OVS
bridge.
To create an OVS bond with the desired interfaces, do the following:

1. Log on to the Acropolis host with SSH.

2. Log on to the Controller VM.


root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

3. Create a bond with the desired set of interfaces.


nutanix@cvm$ manage_ovs --bridge_name bridge --interfaces interfaces --bond_name bond_name
update_uplinks



Replace bridge with the name of the bridge on which you want to create the bond. Omit the --
bridge_name parameter if you want to create the bond on the default OVS bridge br0.
Replace bond_name with a name for the bond. The default value of --bond_name is bond0.
Replace interfaces with one of the following values:
A comma-separated list of the interfaces that you want to include in the bond. For example,
eth0,eth1 .
A keyword that indicates which interfaces you want to include. Possible keywords:
10g. Include all available 10 GbE interfaces
1g. Include all available 1 GbE interfaces
all. Include all available interfaces
For example, create a bond with interfaces eth0 and eth1.
nutanix@cvm$ manage_ovs --bridge_name br1 --interfaces eth0,eth1 --bond_name bond1
update_uplinks

Example output similar to the following is displayed:

2015-03-05 11:17:17 WARNING manage_ovs:291 Interface eth1 does not have link state
2015-03-05 11:17:17 INFO manage_ovs:325 Deleting OVS ports: bond1
2015-03-05 11:17:18 INFO manage_ovs:333 Adding bonded OVS ports: eth0 eth1
2015-03-05 11:17:22 INFO manage_ovs:364 Sending gratuitous ARPs for 192.0.2.21

Virtual Network Segmentation with VLANs


You can set up a segmented virtual network on an Acropolis node by assigning the ports on Open vSwitch
bridges to different VLANs. VLAN port assignments are configured from the Controller VM that runs on
each node.
For best practices associated with VLAN assignments, see Recommendations for Configuring Networking
in an Acropolis Cluster on page 17. For information about assigning guest VMs to a VLAN, see the Web
Console Guide.

Assigning an Acropolis Host to a VLAN

To assign an AHV host to a VLAN, do the following on every AHV host in the cluster:

1. Log on to the Acropolis host with SSH.

2. Assign port br0 (the internal port on the default OVS bridge, br0) to the VLAN that you want the host be
on.
root@ahv# ovs-vsctl set port br0 tag=host_vlan_tag

Replace host_vlan_tag with the VLAN tag for hosts.
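For example, to place the host on a hypothetical VLAN 10:
root@ahv# ovs-vsctl set port br0 tag=10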

3. Confirm VLAN tagging on port br0.


root@ahv# ovs-vsctl list port br0

4. Check the value of the tag parameter that is shown.

5. Verify connectivity to the IP address of the AHV host by performing a ping test.



Assigning the Controller VM to a VLAN

By default, the public interface of a Controller VM is assigned to VLAN 0. To assign the Controller VM to
a different VLAN, change the VLAN ID of its public interface. After the change, you can access the public
interface from a device that is on the new VLAN.

Note: To avoid losing connectivity to the Controller VM, do not change the VLAN ID when you are
logged on to the Controller VM through its public interface. To change the VLAN ID, log on to the
internal interface that has IP address 192.168.5.254.
Perform these steps on every Controller VM in the cluster. To assign the Controller VM to a VLAN, do the
following:

1. Log on to the Acropolis host with SSH.

2. Log on to the Controller VM.


root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

3. Assign the public interface of the Controller VM to a VLAN.


nutanix@cvm$ change_cvm_vlan vlan_id

Replace vlan_id with the ID of the VLAN to which you want to assign the Controller VM.
For example, add the Controller VM to VLAN 10.
nutanix@cvm$ change_cvm_vlan 10

Output similar to the following is displayed:

Replacing external NIC in CVM, old XML:


<interface type="bridge">
<mac address="52:54:00:02:23:48" />
<source bridge="br0" />
<vlan>
<tag id="10" />
</vlan>
<virtualport type="openvswitch">
<parameters interfaceid="95ce24f9-fb89-4760-98c5-01217305060d" />
</virtualport>
<target dev="vnet0" />
<model type="virtio" />
<alias name="net2" />
<address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" />
</interface>

new XML:
<interface type="bridge">
<mac address="52:54:00:02:23:48" />
<model type="virtio" />
<address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" />
<source bridge="br0" />
<virtualport type="openvswitch" />
</interface>
CVM external NIC successfully updated.

Changing the IP Address of an Acropolis Host

Perform the following procedure to change the IP address of an Acropolis host.

Caution: All Controller VMs and hypervisor hosts must be on the same subnet. The hypervisor
can be multihomed provided that one interface is on the same subnet as the Controller VM.

1. Edit the settings of port br0, which is the internal port on the default bridge br0.

a. Log on to the host console as root.


You can access the hypervisor host console either through IPMI or by attaching a keyboard and
monitor to the node.

b. Open the network interface configuration file for port br0 in a text editor.
root@ahv# vi /etc/sysconfig/network-scripts/ifcfg-br0

c. Update entries for host IP address, netmask, and gateway.


The block of configuration information that includes these entries is similar to the following:
ONBOOT="yes"
NM_CONTROLLED="no"
PERSISTENT_DHCLIENT=1
NETMASK="subnet_mask"
IPADDR="host_ip_addr"
DEVICE="br0"
TYPE="ethernet"
GATEWAY="gateway_ip_addr"
BOOTPROTO="none"

Replace host_ip_addr with the IP address for the hypervisor host.


Replace subnet_mask with the subnet mask for host_ip_addr.
Replace gateway_ip_addr with the gateway address for host_ip_addr.
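For example, using hypothetical addresses from the 192.0.2.0/24 documentation range, the edited entries would look like the following:
NETMASK="255.255.255.0"
IPADDR="192.0.2.50"
GATEWAY="192.0.2.1"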

d. Save your changes.

e. Restart network services.


/etc/init.d/network restart

2. Log on to the Controller VM and restart genesis.


nutanix@cvm$ genesis restart

If the restart is successful, output similar to the following is displayed:

Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]


Genesis started on pids [30378, 30379, 30380, 30381, 30403]

For information about how to log on to a Controller VM, see Controller VM Access on page 5.

3. Assign the host to a VLAN. For information about how to add a host to a VLAN, see Assigning an
Acropolis Host to a VLAN on page 24.



4: Virtual Machine Network Management
Virtual machine network management involves configuring connectivity for guest VMs through Open
vSwitch bridges.

Configuring 1 GbE Connectivity for Guest VMs


If you want to configure 1 GbE connectivity for guest VMs, you can aggregate the 1 GbE interfaces (eth0
and eth1) to a bond on a separate OVS bridge, create a VLAN network on the bridge, and then assign
guest VM interfaces to the network.
To configure 1 GbE connectivity for guest VMs, do the following:

1. Log on to the Acropolis host with SSH.

2. Log on to the Controller VM.


root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

3. Determine the uplinks configured on the host.


nutanix@cvm$ allssh manage_ovs show_uplinks

Output similar to the following is displayed:

Executing manage_ovs show_uplinks on the cluster


================== 192.0.2.49 =================
Bridge br0:
Uplink ports: br0-up
Uplink ifaces: eth3 eth2 eth1 eth0

================== 192.0.2.50 =================


Bridge br0:
Uplink ports: br0-up
Uplink ifaces: eth3 eth2 eth1 eth0

================== 192.0.2.51 =================


Bridge br0:
Uplink ports: br0-up
Uplink ifaces: eth3 eth2 eth1 eth0

4. If the 1 GbE interfaces are in a bond with the 10 GbE interfaces, as shown in the sample output in the
previous step, dissociate the 1 GbE interfaces from the bond. Assume that the bridge name and bond
name are br0 and br0-up , respectively.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br0 --interfaces 10g --bond_name br0-up
update_uplinks'

The command removes the bond and then re-creates the bond with only the 10 GbE interfaces.



5. Create a separate OVS bridge for 1 GbE connectivity. For example, create an OVS bridge called br1 .
nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1'

6. Aggregate the 1 GbE interfaces to a separate bond on the new bridge. For example, aggregate them to
a bond named br1-up .
nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces 1g --bond_name br1-up
update_uplinks'

7. Log on to each Controller VM and create a network on a separate VLAN for the guest VMs, and
associate the new bridge with the network. For example, create a network named vlan10.br1 on VLAN
10 .
nutanix@cvm$ acli net.create vlan10.br1 vlan=10 vswitch_name=br1

8. To enable guest VMs to use the 1 GbE interfaces, log on to the web console and assign interfaces on
the guest VMs to the network.
For information about assigning guest VM interfaces to a network, see "Creating a VM" in the Prism
Web Console Guide.

Configuring a Virtual NIC to Operate in Access or Trunk Mode


By default, a virtual NIC on a guest VM operates in access mode. In this mode, the virtual NIC can
send and receive traffic only over its own VLAN, which is the VLAN of the virtual network to which it is
connected. If restricted to using access mode interfaces, a VM running an application on multiple VLANs
(such as a firewall application) must use multiple virtual NICs, one for each VLAN. Instead of configuring
multiple virtual NICs in access mode, you can configure a single virtual NIC on the VM to operate in trunk
mode. A virtual NIC in trunk mode can send and receive traffic over any number of VLANs in addition to its
own VLAN. You can trunk specific VLANs or trunk all VLANs. You can also convert a virtual NIC from the
trunk mode to the access mode, in which case the virtual NIC reverts to sending and receiving traffic only
over its own VLAN.
To configure a virtual NIC as an access port or trunk port, do the following:

1. Log on to the Controller VM with SSH.

2. Do one of the following:

a. Create a virtual NIC on the VM and configure the NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_create vm network=network [vlan_mode={kAccess | kTrunked}]
[trunked_networks=networks]

Specify appropriate values for the following parameters:


vm. Name of the VM.
network. Name of the virtual network to which you want to connect the virtual NIC.
trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk. The
parameter is processed only if vlan_mode is set to kTrunked and is ignored if vlan_mode is set
to kAccess. To include the default VLAN, VLAN 0, include it in the list of trunked networks. To
trunk all VLANs, set vlan_mode to kTrunked and skip this parameter.
vlan_mode. Mode in which the virtual NIC must operate. Set the parameter to kAccess for access
mode and to kTrunked for trunk mode. Default: kAccess .

b. Configure an existing virtual NIC to operate in the required mode.


nutanix@cvm$ acli vm.nic_update vm mac_addr [update_vlan_trunk_info={true | false}]
[vlan_mode={kAccess | kTrunked}] [trunked_networks=networks]



Specify appropriate values for the following parameters:
vm. Name of the VM.
mac_addr. MAC address of the virtual NIC to update (the MAC address is used to identify the
virtual NIC). Required to update a virtual NIC.
trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk. The
parameter is processed only if vlan_mode is set to kTrunked and is ignored if vlan_mode is set
to kAccess. To include the default VLAN, VLAN 0, include it in the list of trunked networks. To
trunk all VLANs, set vlan_mode to kTrunked and skip this parameter.
update_vlan_trunk_info. Update the VLAN type and list of trunked VLANs. If not specified, the
parameter defaults to false and the vlan_mode and trunked_networks parameters are ignored.
vlan_mode. Mode in which the virtual NIC must operate. Set the parameter to kAccess for access
mode and to kTrunked for trunk mode.

Note: Both commands include optional parameters that are not directly associated with this
procedure and are therefore not described here. For the complete command reference, see the
"VM" section in the "Acropolis Command-Line Interface" chapter of the Acropolis App Mobility
Fabric Guide.
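For example, the following hypothetical commands create a virtual NIC on a VM named vm1 that trunks VLANs 10 and 20 in addition to its own VLAN, and then convert an existing virtual NIC (identified by its MAC address) back to access mode. The VM name, network name, and MAC address are illustrative only.
nutanix@cvm$ acli vm.nic_create vm1 network=vlan10.br1 vlan_mode=kTrunked trunked_networks=10,20
nutanix@cvm$ acli vm.nic_update vm1 52:54:00:02:23:48 update_vlan_trunk_info=true vlan_mode=kAccess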

Virtual Machine Memory and CPU Configurations


Memory and CPUs are hot-pluggable on guest VMs running on AHV. You can increase the memory
allocation and the number of CPUs on your VMs while the VMs are powered on. You can change the
number of vCPUs (sockets) while the VMs are powered on. However, you cannot change the number of
cores per socket while the VMs are powered on.
You can change the memory and CPU configuration of your VMs only through the Acropolis CLI (aCLI).
Following is the supportability matrix of operating systems on which the memory and CPUs are hot-
pluggable.

No.    Operating System                  Edition       Bits      Hot-pluggable Memory    Hot-pluggable CPU

1      Windows Server 2003               Standard      x86       No                      No
2      Windows Server 2008               Datacenter    x86       Yes                     No
3      Windows Server 2008 R2            Standard      x86_64    No                      No
4      Windows Server 2008 R2            Datacenter    x86_64    Yes                     Yes
5      Windows Server 2012 R2            Standard      x86_64    Yes                     No
6      Windows Server 2012 R2            Datacenter    x86_64    No                      No
7      CentOS 6.3+                                     x86       No                      Yes
8      CentOS 6.3+                                     x86_64    Yes                     Yes
9      CentOS 6.8                                                No                      Yes
10     CentOS 6.8                                      x86_64    Yes                     Yes
11     CentOS 7.2                                      x86_64    Yes                     Yes
12     Suse Linux Enterprise Edition     11-SP3+       x86_64    No                      Yes
13     Suse Linux Enterprise Edition     12-SP1+       x86_64    Yes                     Yes

Memory OS Limitations

1. On Linux operating systems, the Linux kernel might not bring the hot-plugged memory online. If the
memory is not online, you cannot use the new memory. Perform the following procedure to bring the
memory online.
a. Identify the memory block that is offline.
Display the state of a specific memory block.
$ cat /sys/devices/system/memory/memoryXXX/state

Display the state of all of the memory blocks.

$ grep line /sys/devices/system/memory/*/state
b. Make the memory online.
$ echo online > /sys/devices/system/memory/memoryXXX/state

2. If your VM has CentOS 7.2 as the guest OS and less than 3 GB of memory, hot plugging more memory
to that VM so that the final memory is greater than 3 GB results in a memory-overflow condition. To
resolve the issue, restart the guest OS (CentOS 7.2) with the following kernel setting:
swiotlb=force
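swiotlb=force is a Linux kernel boot parameter. As a sketch only (the exact steps depend on the guest's boot loader configuration), on a GRUB2-based CentOS 7.2 guest you could apply it as follows:
$ sudo vi /etc/default/grub        # append swiotlb=force to the GRUB_CMDLINE_LINUX line
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
$ sudo reboot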

CPU OS Limitations

1. On CentOS operating systems, if the hot-plugged CPUs are not displayed in /proc/cpuinfo, you might
have to bring the CPUs online. For each hot-plugged CPU, run the following command to bring the CPU
online.
$ echo 1 > /sys/devices/system/cpu/cpu<n>/online

Replace <n> with the number of the hot plugged CPU.


2. Device Manager on some versions of Windows such as Windows Server 2012 R2 displays the hot-
plugged CPUs as new hardware, but the hot-plugged CPUs are not displayed under Task Manager.

Hot-Plugging the Memory and CPUs on Virtual Machines (AHV)


Perform the following procedure to hot plug the memory and CPUs on the AHV VMs.

1. Log on to the Controller VM with SSH.



2. Update the memory allocation for the VM.
nutanix@cvm$ acli vm.update vm memory=new_memory_size

Replace vm with the name of the VM and new_memory_size with the new memory size.

3. Update the number of CPUs on the VM.


nutanix@cvm$ acli vm.update vm num_vcpus=n

Replace vm with the name of the VM and n with the total number of vCPUs that you want the VM to have.
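
For example, the following commands hot plug memory and vCPUs on a hypothetical VM named vm01.
The VM name and values are placeholders, and the G suffix on the memory size is an assumption about
the value format; adjust both to your environment.

nutanix@cvm$ acli vm.update vm01 memory=8G
nutanix@cvm$ acli vm.update vm01 num_vcpus=4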

5: Event Notifications
You can register webhook listeners with the Nutanix event notification system by creating webhooks on the
Nutanix cluster. For each webhook listener, you can specify the events for which you want notifications to
be generated. Multiple webhook listeners can be notified for any given event. The webhook listeners can
use the notifications to configure services such as load balancers, firewalls, TOR switches, and routers.
Notifications are sent in the form of a JSON payload in an HTTP POST request, enabling you to send them
to any endpoint device that can accept an HTTP POST payload at a URL.
For example, if you register a webhook listener and include VM migration as an event of interest, the
Nutanix cluster sends the specified URL a notification whenever a VM migrates to another host.
You register webhook listeners by using the Nutanix REST API, version 3.0. In the API request, you specify
the events for which you want the webhook listener to receive notifications, the listener URL, and other
information such as a name and description for the webhook.

List of Events
You can register webhook listeners to receive notifications for events described here.

Virtual Machine Events

Event Description
VM.UPDATE A VM is updated.
VM.MIGRATE A VM is migrated from one host to another.
VM.ON A VM is powered on.
VM.OFF A VM is powered off.
VM.NIC_PLUG A virtual NIC is plugged into a network.
VM.NIC_UNPLUG A virtual NIC is unplugged from a network.

Virtual Network Events

Event Description
NETWORK.CREATE A virtual network is created.
NETWORK.DELETE A virtual network is deleted.
NETWORK.UPDATE A virtual network is updated.

Creating a Webhook
Send the Nutanix cluster an HTTP POST request whose body contains the information essential to
creating a webhook (the events for which you want the listener to receive notifications, the listener URL,
and other information such as a name and description of the listener).

Note: Each POST request creates a separate webhook with a unique UUID, even if the data
in the body is identical. Because each webhook generates a notification when an event occurs,
duplicate webhooks result in multiple notifications for the same event. If you want to change a
webhook, do not send another POST request with the changes. Instead, update the existing
webhook. See Updating a Webhook on page 36.
To create a webhook, send the Nutanix cluster an API request of the following form:

POST https://cluster_IP_address:9440/api/nutanix/v3/webhooks
{
  "metadata": {
    "kind": "webhook"
  },
  "spec": {
    "name": "string",
    "resources": {
      "post_url": "string",
      "credentials": {
        "username": "string",
        "password": "string"
      },
      "events_filter_list": [
        string
      ]
    },
    "description": "string"
  },
  "api_version": "string"
}

Replace cluster_IP_address with the IP address of the Nutanix cluster and specify appropriate values
for the following parameters:
name. Name for the webhook.
post_url. URL at which the webhook listener receives notifications.
username and password. User name and password to use for authentication.
events_filter_list. Comma-separated list of events for which notifications must be generated.
description. Description of the webhook.
api_version. Version of Nutanix REST API in use.

The following sample API request creates a webhook that generates notifications when VMs are
powered on, powered off, updated, or created, and when virtual networks are created:
POST https://192.0.2.3:9440/api/nutanix/v3/webhooks
{
  "metadata": {
    "kind": "webhook"
  },
  "spec": {
    "name": "vm_notifications_webhook",
    "resources": {
      "post_url": "http://192.0.2.10:8080/",
      "credentials": {
        "username": "admin",
        "password": "nutanix/4u"
      },
      "events_filter_list": [
        "VM.ON", "VM.OFF", "VM.UPDATE", "VM.CREATE", "NETWORK.CREATE"
      ]
    },
    "description": "Notifications for VM events."
  },
  "api_version": "3.0"
}

The Nutanix cluster responds to the API request with a 200 OK HTTP response that contains the UUID
of the webhook that is created. The following response is an example:
{
  "status": {
    "state": "kPending"
  },
  "spec": {
    . . .
    "uuid": "003f8c42-748d-4c0b-b23d-ab594c087399"
  }
}

The notification contains metadata about the entity along with information about the type of event that
occurred. The event type is specified by the event_type parameter.
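
You can send the request with any HTTP client. The following curl sketch shows one way to send the
sample request above; it assumes basic authentication with Prism credentials, that the JSON body is
saved in a file named webhook.json, and that -k is acceptable because the cluster presents a self-signed
certificate. The credentials and addresses are placeholders.

# Create the webhook by posting the JSON body saved in webhook.json.
user@host$ curl -k -u admin:password \
  -X POST https://192.0.2.3:9440/api/nutanix/v3/webhooks \
  -H 'Content-Type: application/json' \
  -d @webhook.json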

Listing Webhooks
You can list webhooks to view their specifications or to verify that they were created successfully.
To list webhooks, do the following:

To show a single webhook, send the Nutanix cluster an API request of the following form:
GET https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid

Replace cluster_IP_address with the IP address of the Nutanix cluster. Replace webhook_uuid with the
UUID of the webhook that you want to show.

To list all the webhooks configured on the Nutanix cluster, send the Nutanix cluster an API request of
the following form:
POST https://cluster_IP_address:9440/api/nutanix/v3/webhooks/list
{
  "filter": "string",
  "kind": "webhook",
  "sort_order": "ASCENDING",
  "offset": 0,
  "total_matches": 0,
  "sort_column": "string",
  "length": 0
}

Replace cluster_IP_address with the IP address of the Nutanix cluster and specify appropriate values
for the following parameters:
filter. Filter to apply to the list of webhooks.
sort_order. Order in which to sort the list of webhooks. Ordering is performed on webhook names.
offset. Offset at which to start the list.
total_matches. Number of matches to list.
sort_column. Parameter on which to sort the list.
length. Number of webhooks to include in the list.
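
As a sketch, both requests in this section can be sent with curl as follows. The credentials, cluster IP
address, webhook UUID, and list body shown here are placeholders, under the same assumptions as the
creation example.

# Show a single webhook.
user@host$ curl -k -u admin:password \
  https://192.0.2.3:9440/api/nutanix/v3/webhooks/003f8c42-748d-4c0b-b23d-ab594c087399

# List all webhooks configured on the cluster.
user@host$ curl -k -u admin:password \
  -X POST https://192.0.2.3:9440/api/nutanix/v3/webhooks/list \
  -H 'Content-Type: application/json' \
  -d '{"kind": "webhook", "offset": 0, "length": 20}'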

Updating a Webhook
You can update a webhook by sending a PUT request to the Nutanix cluster. You can update the name,
listener URL, event list, and description.
To update a webhook, send the Nutanix cluster an API request of the following form:

PUT https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid

{
"metadata": {
"kind": "webhook"
},
"spec": {
"name": "string",
"resources": {
"post_url": "string",
"credentials": {
"username":"string",
"password":"string"
},
"events_filter_list": [
string
]
},
"description": "string"
},
"api_version": "string"
}

Replace cluster_IP_address and webhook_uuid with the IP address of the cluster and the UUID of
the webhook you want to update, respectively. For a description of the parameters, see Creating a
Webhook on page 34.
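
As with creation, you can send the update request with curl. The following sketch assumes that the
updated JSON body is saved in webhook.json and uses a placeholder UUID and placeholder credentials.

# Update an existing webhook with the JSON body saved in webhook.json.
user@host$ curl -k -u admin:password \
  -X PUT https://192.0.2.3:9440/api/nutanix/v3/webhooks/003f8c42-748d-4c0b-b23d-ab594c087399 \
  -H 'Content-Type: application/json' \
  -d @webhook.json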

Deleting a Webhook

To delete a webhook, send the Nutanix cluster an API request of the following form:

DELETE https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid

Replace cluster_IP_address and webhook_uuid with the IP address of the cluster and the UUID of the
webhook you want to delete, respectively.
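
For example, the following curl sketch deletes the webhook created earlier in this chapter; the credentials
and UUID are placeholders.

# Delete a webhook by UUID.
user@host$ curl -k -u admin:password \
  -X DELETE https://192.0.2.3:9440/api/nutanix/v3/webhooks/003f8c42-748d-4c0b-b23d-ab594c087399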
