
Field Installation Guide

Foundation 3.0
11-Aug-2016
Notice

Copyright
Copyright 2016 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.

License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.

Conventions
Convention              Description

variable_value          The action depends on a value that is unique to your environment.

ncli> command           The commands are executed in the Nutanix nCLI.

user@host$ command      The commands are executed as a non-privileged user (such as nutanix) in the system shell.

root@host# command      The commands are executed as the root user in the vSphere or Acropolis host shell.

> command               The commands are executed in the Hyper-V host shell.

output                  The information is displayed as output from a command or in a log file.

Default Cluster Credentials


Interface                  Target                                          Username        Password

Nutanix web console        Nutanix Controller VM                           admin           admin

vSphere Web Client         ESXi host                                       root            nutanix/4u

vSphere client             ESXi host                                       root            nutanix/4u

SSH client or console      ESXi host                                       root            nutanix/4u

SSH client or console      AHV host                                        root            nutanix/4u

SSH client or console      Hyper-V host                                    Administrator   nutanix/4u

SSH client                 Nutanix Controller VM                           nutanix         nutanix/4u

SSH client or console      Acropolis OpenStack Services VM (Nutanix OVM)   root            admin

Version
Last modified: August 11, 2016 (2016-08-11 3:17:16 GMT-7)



Contents
Release Notes................................................................................................................... 6

1: Field Installation Overview................................................................... 10

2: Creating a Cluster................................................................................. 11
Discovering the Nodes.......................................................................................................................11
Defining the Cluster........................................................................................................................... 14
Setting Up the Nodes........................................................................................................................ 16
Selecting the Images......................................................................................................................... 17
Creating the Cluster...........................................................................................................................20
Configuring a New Cluster................................................................................................................ 22

3: Imaging Bare Metal Nodes................................................................... 24


Preparing Installation Environment....................................................................................................25
Preparing a Workstation......................................................................................................... 25
Setting Up the Network...........................................................................................................29
Configuring Global Parameters......................................................................................................... 31
Configuring Node Parameters........................................................................................................... 34
Configuring Image Parameters..........................................................................................................38
Configuring Cluster Parameters........................................................................................................ 39
Monitoring Progress........................................................................................................................... 41
Cleaning Up After Installation............................................................................................................44

4: Downloading Installation Files.............................................................45


Foundation Files.................................................................................................................................46
Phoenix Files......................................................................................................................................47

5: Hypervisor ISO Images......................................................................... 49

6: Network Requirements......................................................................... 51

7: Controller VM Memory Configurations............................................... 54


Controller VM Memory and vCPU Configurations.............................................................................54
Controller VM Memory and vCPU Configurations (Broadwell/G5)....................................................55
Platform Workload Translation (Broadwell/G5)................................................................................. 56

8: Hyper-V Installation Requirements......................................................57

9: Setting IPMI Static IP Address.............................................................61

10: Troubleshooting...................................................................................63
Fixing IPMI Configuration Problems.................................................................................................. 63
Fixing Imaging Problems................................................................................................................... 64
Frequently Asked Questions (FAQ)...................................................................................................65

11: Appendix: Imaging a Node (Phoenix)................................................69


Summary: Imaging Nutanix NX Series Nodes.................................................................................. 69
Summary: Imaging Lenovo Converged HX Series Nodes................................................................ 70
Preparing the Controller VM ISO Image........................................................................................... 70
Nutanix NX Series Platforms.............................................................................................................71
Installing a Hypervisor (Nutanix NX Series Platforms)........................................................... 71
Installing ESXi (Nutanix NX Series Platforms)....................................................................... 73
Installing Hyper-V (Nutanix NX Series Platforms).................................................................. 74
Installing AHV (Nutanix NX Series Platforms)........................................................................77
Attaching the Controller VM Image (Nutanix NX Series Platforms)....................................... 77
Lenovo Converged HX Series Platforms.......................................................................................... 80
Attaching an ISO Image (Lenovo Converged HX Series Platforms)...................................... 80
Installing ESXi (Lenovo Converged HX Series Platforms)..................................................... 82
Installing AHV (Lenovo Converged HX Series Platforms)......................................................83
Installing the Controller VM............................................................................................................... 83

Release Notes
The Foundation release notes provide brief, high-level descriptions of changes, enhancements, notes,
and cautions as applicable to various releases of Foundation software and to the use of the software with
specific hardware platforms. Where applicable, the description includes a solution or workaround.

Foundation Release 3.0.6

Foundation 3.0.6 is a patch release included with AOS 4.5.3. It resolves cluster creation issues
experienced when running Foundation 3.0.5 on a Controller VM. For the standalone imaging mode,
Foundation 3.1.1 remains the recommended version.
This release includes the following enhancements and changes:
Foundation supports NX Broadwell platforms.
On Broadwell and newer platforms, Controller VM vCPUs are assigned to a single NUMA node for
consistent performance.
The size of a NUMA node determines how many vCPUs are assigned to a Controller VM. [ENG-15638,
ENG-39813]
This release includes fixes for the following issues:
Controller VM-based Foundation fails when validating the network. The network validation progress
indicator remains at 0%. [ENG-52514]
Foundation fails during cluster creation if you specify a redundancy factor of 3. [ENG-50456]
Foundation fails if you attempt to install AHV and AOS 4.6 or later on a Nutanix cluster that is running a
version of AOS earlier than 4.6. The failure occurs when Foundation is passing control to the node that
it imaged first. [ENG-52129]
Foundation fails to create a cluster after imaging Dell XC series nodes with Hyper-V. [ENG-52593]
After the imaging process is complete, the cluster fails to start because the Foundation user interface
passes the node model as an imaging parameter. [ENG-50185]
The first-boot configuration script for ESXi does not set the active network interface card and port group
correctly if the nodes have both a 10GBase-T (Intel X540) card and a 10GBase SFP+ (Intel 82599)
card. Examples of hardware models that have both these cards are the NX-8150-G4 and NX-8150-G5
platforms because they have an on-board 10GBase-T card and can accommodate an optional
10GBase SFP+ (dual-port or quad-port) as an add-on card.
You can work around the issue by doing one of the following:
Disconnect the onboard NIC that is connected to the local switch and then use Foundation.
Use Foundation 3.2.1.
[ENG-55973]

Foundation Release 3.0.5

This release includes the following enhancements and changes:


Some issues in supporting Lenovo models were fixed [ENG-45886].
When upgrading a standalone Foundation install to 3.0.x, change the foundation_workflow value in
config/foundation_settings.json from "cvm" to "ipmi" and then restart the service (see the sketch after
this list). If this is not done, the Foundation GUI displays the Controller VM-based cluster creation page
and discovery spins for a long time without returning any nodes.
Foundation does not recognize operating system installers with an uppercase file extension. If an installer
file has an uppercase extension, for example
SW_DVD9_Windows_Svr_Std_and_DataCtr_2012_R2_64Bit_English_-4_MLF_X19-82891.ISO, rename the file
so that the extension is lowercase (.iso or .tar.gz).



For Lenovo platforms, download and use the Foundation package named
foundation-3.0.5-HX-series-caa59cf2.tar.gz. Use the Foundation package named foundation-3.0.5.tar.gz
only for Nutanix and Dell platforms. Both packages are available on the Foundation downloads page of the
Nutanix Support Portal.
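
The following is a minimal sketch of the foundation_workflow change on a standalone Foundation VM. It
assumes the default installation path of /home/nutanix/foundation and that the file stores the value as
"foundation_workflow": "cvm"; adjust the pattern if your file is laid out differently:
$ cd /home/nutanix/foundation
$ sed -i 's/"foundation_workflow": *"cvm"/"foundation_workflow": "ipmi"/' config/foundation_settings.json
$ sudo service foundation_service restart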

Foundation Release 3.0.4

This release includes the following enhancements and changes:

Note: Foundation 3.0.3 was not released externally.

Support for Lenovo models HX3500, HX5500, and HX7500.


Support for host VLAN tags for BIOS upgrade.
Foundation is upgraded to version 3.0.4 automatically (no need for manual install) when AOS is
upgraded to version 4.5.1.2.
To upgrade prior versions of Foundation to 3.0.4, download the Foundation upgrade bundle
(foundation-version#.tar.gz) from the support portal to a convenient location and then enter the
following command on each of the Controller VMs in the cluster:
nutanix@cvm$ ~/foundation/bin/foundation_upgrade -t <path>/foundation-version#.tar.gz

Foundation Release 3.0.2

This release includes the following enhancements and changes:


Support for the NX-8150-G4 and NX-3155G-G4 models.
Support for AOS 4.5.1.
Support for Lenovo integration.
Support for enabling BIOS/BMC upgrade in combination with AOS 4.5.1 and NOS 4.1.6.
If you are deploying this build on a standalone VM, change the foundation_workflow value in config/
foundation_settings.json from cvm to ipmi and then restart the service. (This applies to both
Foundation 3.0.1 and 3.0.2.)
If you are updating a previously-deployed Foundation VM, enter the following command:
$ sudo yum -y install sshpass

If you use the 3.0.2 VM image, you can skip this step.

Foundation Release 3.0.1

This release includes the following enhancements and changes:


Hyper-V imaging now works properly.
Network validation no longer throws an error after correcting a network problem.
To upgrade from Foundation 3.0 to 3.0.1, do the following:
Controller VM-based Foundation:
1. Log in to a Controller VM through SSH.
2. Download the Foundation upgrade bundle (foundation-version#.tar.gz) from the support portal
to a convenient location.
3. Enter the following commands:
nutanix@cvm$ pkill foundation
nutanix@cvm$ cd /home/nutanix/foundation/bin
nutanix@cvm$ ./foundation_upgrade -t <path>/foundation-version#.tar.gz
nutanix@cvm$ cd /home/nutanix
nutanix@cvm$ genesis restart



Standalone Foundation:
1. Open the Foundation VM on your workstation.
2. Download the Foundation upgrade bundle (foundation-version#.tar.gz) from the support portal
to a convenient location.
3. Enter the following commands:
$ pkill foundation
$ cd /home/nutanix/foundation/bin
$ ./foundation_upgrade -t <path>/foundation-version#.tar.gz
$ sudo service foundation_service restart

Foundation Release 3.0

This release includes the following enhancements and changes:


A major new implementation that allows for node imaging and cluster creation through the Controller
VM for factory-prepared nodes on the same subnet (see Creating a Cluster on page 11). This
process significantly reduces network complications and simplifies the workflow. (The existing workflow
remains for imaging bare metal nodes.) The new implementation includes the following enhancements:
A Java applet that automatically discovers factory-prepared nodes on the subnet and allows you to
select the first one to image.
A simplified GUI to select and configure the nodes, define the cluster, select the hypervisor and AOS
versions to use, and monitor the imaging and cluster creation process.
Customers may create a cluster using the new Controller VM-based implementation in Foundation 3.0.
Imaging bare metal nodes is still restricted to Nutanix sales engineers, support engineers, and partners.
The new implementation is incorporated in AOS 4.5 to allow for node imaging when adding nodes to an
existing cluster through the Prism GUI.
The cluster creation workflow does not use IPMI, and for both cluster creation and bare-metal imaging,
the host operating system install is done within an "installer VM" in Phoenix.
To see the progress of a host operating system installation, point a VNC console to the hypervisor host
IP address of the target node at port 5901.
Foundation no longer offers the option to run diagnostics.py as a post-imaging test. Should you wish to
run this test, you can download it from the Tools & Firmware page on the Nutanix support portal.
There is no Foundation upgrade path to the new Controller VM implementation; you must download the
Java applet from the Foundation 3.0 download page on the support portal. However, you can upgrade
standalone (bare metal workflow) Foundation 2.x to 3.0 as follows:
1. Copy the Foundation upgrade bundle (foundation-version#.tar.gz) from the support portal to /
home/nutanix in your VM.
2. If you want to preserve the existing ISO files (the contents of the isos and nos directories), enter the
following commands:
$ cd /home/nutanix
$ cp -r foundation/isos .
$ cp -r foundation/nos .



3. Enter the following commands:
$ cd /home/nutanix # If not already there
$ pkill -9 foundation && pkill -9 iso_patcher
$ sudo fusermount -uz foundation/tmp/fuse # It is okay if this complains at you
$ rm -rf foundation
$ tar xf foundation-version#.tar.gz
$ cd /etc/init.d
$ sudo rm foundation_service
$ sudo ln -s /home/nutanix/foundation/bin/foundation_service
$ sudo yum -y install libunwind-devel
[if step 2 done] $ cd /home/nutanix
# Save (include) any new AHV ISOs in release
$ mv foundation/isos/hypervisor/kvm/* isos/hypervisor/kvm/
# Restore backed up ISO files to usual location
$ mv isos foundation/isos
$ mv nos foundation/nos
$ sudo service foundation_service restart

Hardware Platform-Specific Notes

The following notes apply to the use of Foundation 3.1.x with specific hardware platforms:
NX-3175-G4
One or more of the following issues result from the use of an unsupported 1000BASE-T Copper
SFP transceiver module (SFP-to-RJ45 adapter) when imaging NX-3175-G4 nodes:
Foundation times out with the following message in node logs: INFO: Populating firmware
information for device bmc...
Foundation fails at random stages.
Foundation cannot communicate with the baseboard management controller (BMC).
To avoid encountering these issues, use a supported SFP-to-RJ45 adapter. For information about
supported adapters, see KB2422.



1
Field Installation Overview
Nutanix installs the Acropolis hypervisor (AHV) and the Nutanix Controller VM at the factory before
shipping a node to a customer. To use a different hypervisor (ESXi or Hyper-V) on factory nodes or to use
any hypervisor on bare metal nodes, the nodes must be imaged in the field. This guide provides step-by-
step instructions on how to use the Foundation tool to do a field installation, which consists of installing
a hypervisor and the Nutanix Controller VM on each node and then creating a cluster. You can also use
Foundation to create just a cluster from nodes that are already imaged or to image nodes without creating
a cluster.
Note: Use Foundation to image factory-prepared (or bare metal) nodes and create a new cluster
from those nodes. Use the Prism web console (in clusters running AOS 4.5 or later) to image
factory-prepared nodes and then add them to an existing cluster. See the "Expanding a Cluster"
section in the Web Console Guide for this procedure.
A field installation can be performed for either factory-prepared nodes or bare metal nodes.
See Creating a Cluster on page 11 to image factory-prepared nodes and create a cluster from those
nodes (or just create a cluster for nodes that are already imaged).
See Imaging Bare Metal Nodes on page 24 to image bare metal nodes and optionally configure
them into one or more clusters.

Note: Foundation supports imaging an ESXi, Hyper-V, or AHV hypervisor on nearly all Nutanix
hardware models with some restrictions. Click here (or log into the Nutanix support portal and
select Documentation > Compatibility Matrix from the main menu) for a list of supported
configurations. To check a particular configuration, go to the Filter By fields and select the
desired model, AOS version, and hypervisor in the first three fields and then set the last field to
Foundation. In addition, check the notes at the bottom of the table.



2
Creating a Cluster
This procedure describes how to install a selected hypervisor and the Nutanix Controller VM on discovered
nodes and how to configure the nodes into a cluster. "Discovered nodes" are factory prepared nodes on
the same subnet that are not part of a cluster currently. This procedure runs the Foundation tool through
the Nutanix Controller VM.
Note: This method creates a single cluster from discovered nodes. This method is limited to
factory prepared nodes running AOS 4.5 or later. If you want to image discovered nodes without
creating a cluster, image factory prepared nodes running an earlier AOS (NOS) version, or image
bare metal nodes, see Imaging Bare Metal Nodes on page 24.

To image the nodes and create a cluster, do the following:

1. Download the required files, start the cluster creation GUI, and run discovery (see Discovering the
Nodes on page 11).

2. Define cluster parameters; specify Controller VM, hypervisor, and (optionally) IPMI global network
addresses; and (optionally) enable health tests after the cluster is created (see Defining the Cluster on
page 14).

3. Configure the discovered nodes (Setting Up the Nodes on page 16).

4. Select the AOS and hypervisor images to use (see Selecting the Images on page 17).

5. Start the process and monitor progress as the nodes are imaged and the cluster is created (see
Creating the Cluster on page 20).

6. After the cluster is created successfully, begin configuring the cluster (see Configuring a New Cluster on
page 22).

Discovering the Nodes


Before you begin:
Physically install the Nutanix nodes at your site. See the Physical Installation Guide for your model type
for installation instructions.

Note: If you have nodes running a pre-4.5 version of AOS, you cannot use this (Controller VM-
based) method to create a cluster. Contact Nutanix customer support for help in creating the
cluster using the standalone (bare metal) method.
Your workstation must be connected to the network on the same subnet as the nodes you want to
image. (Foundation does not require an IPMI connection or any special network port configuration to
image discovered nodes.) See Network Requirements on page 51 for general information about the
network topology and port access required for a cluster.
Determine the appropriate network (gateway and DNS server IP addresses), cluster (name, virtual IP
address), and node (Controller VM, hypervisor, and IPMI IP address ranges) parameter values needed
for installation.



Note: The use of a DHCP server is not supported for Controller VMs, so make sure to assign
static IP addresses to Controller VMs.

1. Open a browser, go to the Nutanix support portal (see Downloading Installation Files on page 45),
and download the following files to your workstation.
FoundationApplet-offline.zip installation bundle from the Foundation download page.

Note: If you install from a workstation that has Internet access, you can forego downloading
this bundle and simply select the link to nutanix_foundation_applet.jnlp directly from the
support portal (see step 2). Otherwise, you must first download (and unpack) the installation
bundle.
nutanix_installer_package-version#.tar.gz from the AOS (NOS) download page. This is the
installation bundle used for imaging the desired AOS release.
A hypervisor ISO if installing Hyper-V or ESXi. (An AHV ISO is included with Foundation.) You
must provide a supported Hyper-V or ESXi ISO (see Hypervisor ISO Images on page 49);
Hyper-V and ESXi ISOs are not available from the support portal.

2. Do one of the following:


Click the nutanix_foundation_applet.jnlp link on the Nutanix support portal to download and start
the Java applet.
Unzip the FoundationApplet-offline.zip installation bundle that was downloaded in step 1 and then
start (double-click) the nutanix_foundation_applet.jnlp Java applet.
The discovery process begins and a window appears with a list of discovered nodes.

Note: A security warning message may appear indicating this is from an unknown source.
Click the accept and run buttons to run the application.

Figure: Foundation Launcher Window

3. Select (click the line for) a node to be imaged from the list and then click the Launch Foundation
button.
This launches the cluster creation GUI. The selected node will be imaged first and then be used to
image the other nodes. Only nodes with a status field value of Free can be selected, which indicates



it is not currently part of a cluster. A value of Unavailable indicates it is part of an existing cluster or
otherwise unavailable. To rerun the discovery process, click the Retry discovery button.
Note: A warning message may appear stating this is not the highest available version of
Foundation found in the discovered nodes. If you select a node using an earlier Foundation
version (one that does not recognize one or more of the node models), installation may fail
when Foundation attempts to image a node of an unknown model. Therefore, select the
node with the highest Foundation version among the nodes to be imaged. (You can ignore
the warning and proceed if you do not intend to select any of the nodes that have the higher
Foundation version.)

Foundation searches the network subnet for unconfigured Nutanix nodes (factory prepared nodes that
are not part of a cluster) and then displays information about the discovered blocks and nodes in the
Discovered Nodes screen. (It does not display information about nodes that are powered off or in a
different subnet.) The discovery process normally takes just a few seconds.

Note: If you want Foundation to image nodes from an existing cluster, you must first either
remove the target nodes from the cluster or destroy the cluster.

4. Select (check the box for) the nodes to be imaged.


All discovered blocks and nodes are displayed by default, including those that are already in an existing
cluster. An exclamation mark icon is displayed for unavailable (already in a cluster) nodes; these nodes
cannot be selected. All available nodes are selected by default.
Note: A cluster requires a minimum of three nodes. Therefore, you must select at least three
nodes.

To display just the available nodes, select the Show only new nodes option from the pull-down
menu on the right of the screen. (Blocks with unavailable nodes only do not appear, but a block with
both available and unavailable nodes does appear with the exclamation mark icon displayed for the
unavailable nodes in that block.)
To deselect nodes you do not want to image, uncheck the boxes for those nodes. Alternately, click
the Deselect All button to uncheck all the nodes and then select those you want to image. (The
Select All button checks all the nodes.)
Note: You can get help or reset the configuration at any time from the gear icon pull-down
menu (top right). Internet access is required to display the help pages, which are located in the
Nutanix support portal.

Figure: Discovery Screen

5. Set the replication factor (RF) for the cluster to be created.



The replication factor specifies the number of times each piece of data is replicated in the cluster.
Setting this to 2 means there will be two copies of data, and the cluster can tolerate the failure of any
single node or drive.
Setting this to 3 means there will be three copies of data, and the cluster can tolerate the failure of
any two nodes or drives in different blocks. RF 3 requires that the cluster have at least five nodes,
and it can be enabled only when the cluster is created. (In addition, containers must have replication
factor 3 for guest VM data to withstand the failure of two nodes.)
The default setting for a cluster is RF 2. To set it to RF 3, do the following:

a. Click the Change RF (2) button.


The Change Redundancy Factor window appears.

b. Click (check) the RF 3 button and then click the Save Changes button.
The window disappears and the RF button changes to Change RF (3), indicating the RF factor is
now set to 3.

Figure: Change Redundancy Factor Window

6. Click the Next button at the bottom of the screen to configure cluster parameters (see Defining the
Cluster on page 14).

Defining the Cluster


Before you begin: Complete Discovering the Nodes on page 11.
The Define Cluster configuration screen appears. This screen allows you to define a new cluster and
configure global network parameters for the Controller VM, hypervisor, and (optionally) IPMI. It also allows
you to enable diagnostic and health tests after creating the cluster.



Figure: Cluster Screen

1. In the Cluster Information section, do the following in the indicated fields:

a. Name: Enter a name for the cluster.

b. IP Address: Enter an external (virtual) IP address for the cluster.


This field sets a logical IP address that always points to an active Controller VM (provided the cluster
is up), which removes the need to enter the address of a specific Controller VM. This parameter is
required for Hyper-V clusters and is optional for ESXi and AHV clusters.

2. (optional) Click the Enable IPMI slider button to specify an IPMI address.
When this button is enabled, fields for IPMI global network parameters appear below. Foundation does
not require an IPMI connection, so this information is not required. However, you can use this option to
configure IPMI for your use.

3. In the Network Information section, do the following in the indicated fields:

a. CVM Netmask: Enter the Controller VM netmask value.

b. CVM Gateway: Enter an IP address for the gateway.

c. CVM Memory: Select a memory size for the Controller VM from the pull-down list. For more
information about Controller VM memory configurations, see Controller VM Memory and vCPU
Configurations on page 54.
This field is set initially to default. (The default amount varies according to the node model type.)
The other options allow you to specify a memory size of 16 GB, 24 GB, 32 GB, or 64 GB. The
default setting represents the recommended amount for the model type. Assigning more memory
than the default might be appropriate in certain situations. See Controller VM Memory and vCPU
Configurations on page 54 for memory sizing recommendations.

d. Hypervisor Netmask: Enter the hypervisor netmask value.

e. Hypervisor Gateway: Enter an IP address for the gateway.



f. Hypervisor DNS Server IP: Enter the IP address of the DNS server.

Note: The following fields appear only if the IPMI button was enabled in the previous step.

g. IPMI Netmask: Enter the IPMI netmask value.

h. IPMI Gateway: Enter an IP address for the gateway.

i. IPMI Username: Enter the IPMI user name. The default user name is ADMIN.

j. IPMI Password: Enter the IPMI password. The default password is ADMIN.
Check the show password box to display the password.

4. (optional) Click the Enable Testing slider button to run the Nutanix Cluster Check (NCC) after the
cluster is created.
The NCC is a test suite that checks a variety of health metrics in the cluster. The results are stored in
the ~/foundation/logs/ncc directory.

5. Click the Next button at the bottom of the screen to configure the cluster nodes (see Setting Up the
Nodes on page 16).

Setting Up the Nodes


Before you begin: Complete Defining the Cluster on page 14.
The Setup Node configuration screen appears. This screen allows you to specify the Controller VM,
hypervisor, and (if enabled) IPMI IP addresses for each node.

Figure: Node Screen

1. In the Hostname and IP Range section, do the following in the indicated fields:

a. Hypervisor Hostname: Enter a base host name for the set of nodes. Host names should contain
only digits, letters, and hyphens.



The base name with a suffix of "-1" is assigned as the host name of the first node, and the base
name with "-2", "-3" and so on are assigned automatically as the host names of the remaining nodes.

b. CVM IP: Enter the starting IP address for the set of Controller VMs across the nodes.
Enter a starting IP address in the FROM/TO line of the CVM IP column. The entered address is
assigned to the Controller VM of the first node, and consecutive IP addresses (sequentially from
the entered address) are assigned automatically to the remaining nodes. Discovered nodes are
sorted first by block ID and then by position, so IP assignments are sequential. If you do not want
all addresses to be consecutive, you can change the IP address for specific nodes by updating the
address in the appropriate fields for those nodes.

c. Hypervisor IP: Repeat the previous step for this field.


This sets the hypervisor IP addresses for all the nodes.

Caution: The Nutanix high availability features require that both hypervisor and Controller
VM be in the same subnet. Putting them in different subnets reduces the failure protection
provided by Nutanix and can lead to other problems. Therefore, it is strongly recommended
that you keep both hypervisor and Controller VM in the same subnet.

d. IPMI IP (when enabled): Repeat the previous step for this field.
This sets the IPMI port IP addresses for all the nodes. This column appears only when IPMI is
enabled on the previous cluster setup screen.
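
For example, if you enter the base host name prod and a starting CVM IP address of 10.1.1.20 for a
four-node block, the nodes are assigned host names prod-1 through prod-4 and Controller VM addresses
10.1.1.20 through 10.1.1.23. (The name and addresses shown here are placeholders.)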

2. In the Manual Input section, review the assigned host names and IP addresses. If any of the names or
addresses are not correct, enter the desired name or IP address in the appropriate field.
There is a section for each block with a line for each node in the block. The letter designation (A, B, C,
and D) indicates the position of that node in the block.

3. When all the host names and IP addresses are correct, click the Validate Network button at the bottom
of the screen.
This does a ping test to each of the assigned IP addresses to check whether any of those addresses
are being used currently.
If there are no conflicts (none of the addresses return a ping), the process continues (see Selecting
the Images on page 17).
If there is a conflict (one or more addresses returned a ping), this screen reappears with the
conflicting addresses highlighted in red. Foundation will not continue until the conflict is resolved.
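
If you want to spot-check addresses yourself before validation, you can run the same kind of test manually
by pinging each candidate address from a Linux or Mac machine on the same subnet; an address that
answers is already in use. A minimal sketch (the addresses shown are placeholders):
$ ping -c 2 10.1.1.20    # candidate Controller VM address; a reply means it is already taken
$ ping -c 2 10.1.1.30    # candidate hypervisor address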

Selecting the Images


Before you begin: Complete Setting Up the Nodes on page 16.
The Select Images configuration screen appears. This screen allows you to specify and upload the AOS
and hypervisor image files to use.



Figure: Images Screen

1. To select the desired AOS image, do the following:

Note: An AOS image may already be present (as in the above example). If one is and it is the
desired version, skip to the next step. If both the desired AOS and hypervisor (step 2) versions
are present, skip to step 3.

a. In the AOS (left) column, click the Upload Acropolis base software Tarball button and then click
the Choose File button.

Figure: File Selection Buttons

b. In the file search window, find and select the AOS tar file downloaded earlier (see Discovering the
Nodes on page 11) and then click the Upload button.
Uploading an image file (AOS or hypervisor) may take some time (possibly a few minutes).

2. To select the desired hypervisor image, do the following:


Note: Foundation comes with an AHV image. If that is the correct image to use, skip to the
next step.



a. In the Hypervisor (middle) column, select the hypervisor (AHV, ESX, or HYPERV), click the Upload
ISO button and then click the Choose File button.

b. In the file search window, find and select the hypervisor ISO image downloaded earlier and then
click the Upload button.
Only approved hypervisor ISO images are permitted; Foundation will not image nodes with an
unapproved ISO image. To verify your ISO image is on the approved list, click the See Whitelist link.
Nutanix updates the list as new versions are approved, and the current version of Foundation may
not have the latest list. If your ISO does not appear on the list, click the Update the whitelist link to
download the latest whitelist from the Nutanix support portal.

Figure: Whitelist Compatibility List Window

c. [Hyper-V only] In the SKU (right) column, click the radio button for the Hyper-V version to use.
Three Hyper-V versions are supported: Free, Standard, Datacenter. This column appears only
when you select Hyper-V.

Note: See Hyper-V Installation Requirements on page 57 for additional considerations when
installing a Hyper-V cluster.

3. When both images are uploaded and ready, do one of the following:
To image the nodes and then create the new cluster, click the Create Cluster button at the bottom of
the screen.
To create the cluster without imaging the nodes, click the Skip Imaging button (in either case see
Creating the Cluster on page 20).
Note: The Skip Imaging option requires that all the nodes have the same hypervisor
and AOS version. This option is disabled if they are not all the same (with the exception of
any model NX-6035C "cold" storage nodes in the cluster that run AHV regardless of the
hypervisor running on the other nodes).



Creating the Cluster
Before you begin: Complete Selecting the Images on page 17.
After clicking the Create Cluster or Skip Imaging button (in the Select Images screen), the Create Cluster
screen appears. This is a dynamically updated display that provides progress information about node
imaging and cluster creation.

1. Monitor the node imaging and cluster creation progress.


The progress screen includes the following sections:
Progress bar at the top (blue during normal processing or red when there is a problem).
Cluster Creation Status section with a line for the cluster being created (status indicator, cluster
name, progress message, and log link).
Node Status section with a line for each node being imaged (status indicator, IPMI IP address,
progress message, and log link).

Figure: Foundation Progress Screen: Ongoing

The status message for each node (in the Node Status section) displays the imaging percentage
complete and current step. The selected node (see Discovering the Nodes on page 11) is imaged
first. When that imaging is complete, the remaining nodes are imaged in parallel. The imaging process
takes about 30 minutes, so the total time is about an hour (30 minutes for the first node and another 30
minutes for the other nodes imaged in parallel). You can monitor overall progress by clicking the Log
link at the top, which displays the service.log contents in a separate tab or window. Click on the Log
link for a node to display the log file for that node in a separate tab or window.
Note: Simultaneous processing is limited to a maximum of 20 nodes. If the cluster contains
more than 21 nodes, add an extra 30 minutes of processing time for each additional group of up to 20 nodes.
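
For example (a rough estimate based on these figures), a 45-node cluster would take about 30 minutes to
image the first node and then about 90 minutes for the remaining 44 nodes in three parallel batches
(20 + 20 + 4), for a total of roughly two hours.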

When installation moves to cluster creation, the status message displays the percentage complete and
current step. Cluster creation happens quickly, but this step could take some time if you enabled the
post-creation tests. Click on the Log link for a cluster to display the log file for the cluster in a separate
tab or window.

2. When processing completes successfully, either open the Prism web console and begin configuring the
cluster (see Configuring a New Cluster on page 22) or exit from Foundation.



When processing completes successfully, a "Cluster creation successful" message appears. This
means imaging both the hypervisor and Nutanix Controller VM across all the nodes in the cluster was
successful (when imaging was not skipped) and cluster creation was successful.
To configure the cluster, click the Prism link. This opens the Prism web console (login required using
the default "admin" username and password). See Configuring a New Cluster on page 22 for
initial cluster configuration steps.
To download the log files, click the Export Logs link. This packages all the log files into a
log_archive.tar file and allows you to download that file to your workstation.

The Foundation service shuts down two hours after imaging. If you go to the cluster creation success
page after a long absence and the Export Logs link does not work (or your terminal went to sleep
and there is no response after refreshing it), you can point the browser to one of the Controller VM IP
addresses. If the Prism web console appears, installation completed successfully, and you can get the
logs from ~/data/logs/foundation on the node that was imaged first.
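
If you need to collect those logs manually, one approach (a sketch only, assuming SSH access to a
Controller VM with the default nutanix credentials; the IP address is a placeholder) is to copy them to your
workstation:
$ scp -r nutanix@10.1.1.20:data/logs/foundation ./foundation-logs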

Note: If nothing loads when you refresh the page (or it loads one of the configuration pages),
the web browser might have missed the hand-off between the node that starts imaging and the
first node imaged. This can happen because the web browser went to sleep, you closed the
browser, or you lost connectivity for some other reason. In this case, enter http://cvm_ip for
any Controller VM, which should open the Prism GUI if imaging has completed. If this does not
work, enter http://cvm_ip:8000/gui on each of the Controller VMs in the cluster until you see
the progress screen, from which you can continue monitoring progress.

Figure: Foundation Progress Screen: Successful Installation

3. If processing does not complete successfully, review and correct the problem(s), and then restart the
process.
If the progress bar turns red with a "There were errors in the installation" message and one or more
node or cluster entries have a red X in the status column, the installation failed at the node imaging or
cluster creation step. To correct such problems, see Fixing Imaging Problems on page 64. Clicking
the Back to config button returns you to the configuration screens to correct any entries. The default
per-node installation timeout is 30 minutes for ESXi or 60 minutes for Hyper-V and AHV, so you can
expect all the nodes (in each run of up to 20 nodes) to finish successfully or encounter a problem in that
amount of time.
Note: If an imaging problem occurs, it typically appears when imaging the first node. In that
case Foundation will not attempt to image the other nodes, so only the first node will be in an
unstable state. Once the problem is resolved, the first node can be re-imaged and then the
other nodes imaged normally.



Figure: Foundation Progress Screen: Unsuccessful Installation

Configuring a New Cluster


After creating the cluster, you can configure it through the Prism web console. A storage pool and a
container are created automatically when the cluster is created, but many other setup options require user
action. The following are common cluster setup steps typically done soon after creating a cluster. (All the
sections cited in the following steps are in the Prism Web Console Guide.)

1. Verify the cluster has passed the latest Nutanix Cluster Check (NCC) tests.

a. Check the installed NCC version and update it if a later version is available (see the "Software and
Firmware Upgrades" section).

b. Run NCC if you downloaded a newer version or did not run it as part of the install.
Running NCC must be done from a command line. Open a command window, log on to any
Controller VM in the cluster with SSH, and then run the following command:
nutanix@cvm$ ncc health_checks run_all

If the check reports a status other than PASS, resolve the reported issues before proceeding. If you
are unable to resolve the issues, contact Nutanix support for assistance.

2. Specify the timezone of the cluster.


Specifying the timezone must be done from the Nutanix command line (nCLI). While logged in to the
Controller VM (see previous step), run the following commands:
nutanix@cvm$ ncli
ncli> cluster set-timezone timezone=cluster_timezone

Replace cluster_timezone with the timezone of the cluster (for example, America/Los_Angeles, Europe/
London, or Asia/Tokyo). Restart all Controller VMs in the cluster after changing the timezone. Because a
cluster can tolerate only a single Controller VM unavailable at any one time, restart the Controller VMs
in a series, waiting until one has finished starting before proceeding to the next. See the Command
Reference for more information about using the nCLI.
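
Before restarting the next Controller VM, you can confirm that the previous one has rejoined the cluster and
that all services are up. A minimal check from any Controller VM (cluster status is a standard Controller VM
command that lists the service state on every node):
nutanix@cvm$ cluster status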

3. Specify an outgoing SMTP server (see the "Configuring an SMTP Server" section).



4. If the site security policy allows Nutanix customer support to access the cluster, enable the remote
support tunnel (see the "Controlling Remote Connections" section).
Caution: Failing to enable remote support prevents Nutanix support from directly addressing
cluster issues. Nutanix recommends that all customers allow email alerts at minimum because
it allows proactive support of customer issues.

5. If the site security policy allows Nutanix support to collect cluster status information, enable the Pulse
feature (see the "Configuring Pulse" section).
This information is used by Nutanix support to diagnose potential problems and provide more informed
and proactive help.

6. Add a list of alert email recipients, or if the security policy does not allow it, disable alert emails (see the
"Configuring Email Alerts" section).
You also have the option to specify email recipients for specific alerts (see the "Configuring Alert
Policies" section).

7. If the site security policy allows automatic downloads to update AOS and other upgradeable cluster
elements, enable that feature (see the "Software and Firmware Upgrades" section).

Note: Allow access to the following through your firewall to ensure that automatic download of
updates can function:
*.compute-*.amazonaws.com:80
release-api.nutanix.com:80
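
One way to spot-check that this access works from the cluster (a sketch only; any HTTP status code in the
output means the connection itself succeeded) is to request the release API endpoint from a Controller VM:
nutanix@cvm$ curl -s -o /dev/null -w "%{http_code}\n" http://release-api.nutanix.com/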

8. License the cluster (see the "License Management" section).

9. For ESXi and Hyper-V clusters, add the hosts to the appropriate management interface.
vCenter: See the Nutanix vSphere Administration Guide.
SCVMM: See the Nutanix Hyper-V Administration Guide.



3
Imaging Bare Metal Nodes
This procedure describes how to install a selected hypervisor and the Nutanix Controller VM on bare metal
nodes and optionally configure the nodes into one or more clusters. "Bare metal" nodes are those that are
not factory prepared or cannot be detected through discovery.
Before you begin:
Note: If you are installing on nodes that can be discovered, see Creating a Cluster on page 11.
Imaging bare metal nodes is restricted to Nutanix sales engineers, support engineers, and
partners.

Physically install the Nutanix cluster at your site. See the Physical Installation Guide for your model type
for installation instructions.
Set up the installation environment (see Preparing Installation Environment on page 25).

Note: If you changed the boot device order in the BIOS to boot from a USB flash drive, you will
get a Foundation timeout error if you do not change the boot order back to virtual CD-ROM in
the BIOS.

Note: If STP (spanning tree protocol) is enabled, it can cause Foundation to time out during the
imaging process. Therefore, disable STP before starting Foundation.

Note: Avoid connecting any device (that is, plugging it into a USB port on a node) that presents
virtual media, such as a CD-ROM. This could conflict with the Foundation installation when it tries
to mount the virtual CD-ROM hosting the install ISO.
Have ready the appropriate global, node, and cluster parameter values needed for installation. The use
of a DHCP server is not supported for Controller VMs, so make sure to assign static IP addresses to
Controller VMs.

Note: If the Foundation VM IP address set previously was configured in one (typically public)
network environment and you are imaging the cluster on a different (typically private) network
in which the current address is no longer correct, repeat step 14 in Preparing a Workstation on
page 25 to configure a new static IP address for the Foundation VM.
To image the nodes and create a cluster(s), do the following:

Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may
not reflect the latest features described in this section.)

1. Prepare the installation environment:

a. Download necessary files and prepare a workstation (see Preparing a Workstation on page 25).

b. Connect the workstation and nodes to be imaged to the network (Setting Up the Network on
page 29).

2. Start the Foundation VM and configure global parameters (see Configuring Global Parameters on
page 31).



3. Configure the nodes to image (see Configuring Node Parameters on page 34).

4. Select the images to use (see Configuring Image Parameters on page 38).

5. [optional] Configure one or more clusters to create and assign nodes to the clusters (see Configuring
Cluster Parameters on page 39).

6. Start the imaging process and monitor progress (see Monitoring Progress on page 41).

7. If a problem occurs during configuration or imaging, evaluate and resolve the problem (see
Troubleshooting on page 63).

8. [optional] Clean up the Foundation environment after completing the installation (see Cleaning Up After
Installation on page 44).

Preparing Installation Environment


Standalone (bare metal) imaging is performed from a workstation with access to the IPMI interfaces of the
nodes in the cluster. Imaging a cluster in the field requires first installing certain tools on the workstation
and then setting the environment to run those tools. This requires two preparation tasks:
Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may
not reflect the latest features described in this section.)

1. Prepare the workstation. Preparing the workstation can be done on or off site at any time prior to
installation. This includes downloading ISO images, installing Oracle VM VirtualBox, and using
VirtualBox to configure various parameters on the Foundation VM (see Preparing a Workstation on
page 25).
2. Set up the network. The nodes and workstation must have network access to each other through a
switch at the site (see Setting Up the Network on page 29).

Preparing a Workstation
A workstation is needed to host the Foundation VM during imaging. To prepare the workstation, do the
following:

Note: You can perform these steps either before going to the installation site (if you use a portable
laptop) or at the site (if you can connect to the web).

1. Get a workstation (laptop or desktop computer) that you can use for the installation.
The workstation must have at least 3 GB of memory (Foundation VM size plus 1 GB), 25 GB of disk
space (preferably SSD), and a physical (wired) network adapter.

2. Go to the Foundation and AOS download pages in the Nutanix support portal (see Downloading
Installation Files on page 45) and download the following files to a temporary directory on the
workstation.
Foundation_VM_OVF-version#.tar. This tar file includes the following files:

Foundation_VM-version#.ovf. This is the Foundation VM OVF configuration file for the version#
release, for example Foundation_VM-2.1.ovf.
Foundation_VM-version#-disk1.vmdk. This is the Foundation VM VMDK file for the version#
release, for example Foundation_VM-2.1-disk1.vmdk.



VirtualBox-version#-[OSX|Win].[dmg|exe]. This is the Oracle VM VirtualBox installer for Mac
OS (VirtualBox-version#-OSX.dmg) or Windows (VirtualBox-version#-Win.exe). Oracle VM
VirtualBox is a free open source tool used to create a virtualized environment on the workstation.

Note: Links to the VirtualBox files may not appear on the download page for every
Foundation version. (The Foundation 2.0 download page has links to the VirtualBox files.)
nutanix_installer_package-version#.tar.gz. This is the tar file used for imaging the desired AOS
release. Go to the AOS (NOS) download page on the support portal to download this file. (You can
download all the other files from the Foundation download page.)

3. Go to the download location and extract Foundation_VM_OVF-version#.tar by entering the following
command:
$ tar -xf Foundation_VM_OVF-version#.tar

Note: This assumes the tar command is available. If it is not, use the corresponding tar utility
for your environment.

4. Open the Oracle VM VirtualBox installer and install Oracle VM VirtualBox using the default options.
See the Oracle VM VirtualBox User Manual for installation and start up instructions (https://
www.virtualbox.org/wiki/Documentation).
Note: This section describes how to use Oracle VM VirtualBox to create a virtual environment.
Optionally, you can use an alternate tool such as VMware vSphere in place of Oracle VM
VirtualBox.

5. Create a new folder called VirtualBox VMs in your home directory.


On a Windows system this is typically C:\Users\user_name\VirtualBox VMs.

6. Copy the Foundation_VM-version#.ovf and Foundation_VM-version#-disk1.vmdk files to the VirtualBox
VMs folder that you created in step 5.

7. Start Oracle VM VirtualBox.

Figure: VirtualBox Welcome Screen

8. Click the File option of the main menu and then select Import Appliance from the pull-down list.

9. Find and select the Foundation_VM-version#.ovf file, and then click Next.



10. Click the Import button.

11. In the left column of the main screen, select Foundation_VM-version# and click Start.
The Foundation VM console launches and the VM operating system boots.

12. At the login screen, log in as the nutanix user with the password nutanix/4u.
The Foundation VM desktop appears (after it loads).

13. If you want to enable file drag-and-drop functionality between your workstation and the Foundation VM,
install the VirtualBox Guest Additions as follows:

a. On the VirtualBox window for the Foundation VM, select Devices > Insert Guest Additions CD
Image... from the menu.
A VBOXADDITIONS CD entry appears on the Foundation VM desktop.

b. Click OK when prompted to Open Autorun Prompt and then click Run.

c. Enter the root password (nutanix/4u) and then click Authenticate.

d. After the installation is complete, press the return key to close the VirtualBox Guest Additions
installation window.

e. Right-click the VBOXADDITIONS CD entry on the desktop and select Eject.

f. Reboot the Foundation VM by selecting System > Shutdown... > Restart from the Linux GUI.

Note: A reboot is necessary for the changes to take effect.

g. After the Foundation VM reboots, select Devices > Drag 'n' Drop > Bidirectional from the menu on
the VirtualBox window for the Foundation VM.

14. Open a terminal session and run the ifconfig command to determine if the Foundation VM was able to
get an IP address from the DHCP server.
If the Foundation VM has a valid IP address, skip to the next step. Otherwise, configure a static IP as
follows:

Note: Normally, the Foundation VM needs to be on a public network in order to copy selected
ISO files to the Foundation VM in the next two steps. This might require setting a static IP
address now and setting it again when the workstation is on a different (typically private)
network for the installation (see Imaging Bare Metal Nodes on page 24).

a. Double click the set_foundation_ip_address icon on the Foundation VM desktop.



Figure: Foundation VM: Desktop

b. In the pop-up window, click the Run in Terminal button.

Figure: Foundation VM: Terminal Window

c. In the Select Action box in the terminal window, select Device Configuration.

Note: Selections in the terminal window can be made using the indicated keys only. (Mouse
clicks do not work.)

Figure: Foundation VM: Action Box

d. In the Select a Device box, select eth0.



Figure: Foundation VM: Device Configuration Box

e. In the Network Configuration box, remove the asterisk in the Use DHCP field (which is set by
default), enter appropriate addresses in the Static IP, Netmask, and Default gateway IP fields, and
then click the OK button.

Figure: Foundation VM: Network Configuration Box

f. Click the Save button in the Select a Device box and the Save & Quit button in the Select Action
box.
This saves the configuration and closes the terminal window.
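
To confirm that the new address took effect, you can check the interface from a terminal session (eth0 is
the device configured above):
$ ifconfig eth0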

15. Copy nutanix_installer_package-version#.tar.gz (downloaded in step 2) to the
/home/nutanix/foundation/nos folder.

16. If you intend to install ESXi or Hyper-V as the hypervisor, download the hypervisor ISO image into the
appropriate folder for that hypervisor.
ESXi ISO image: /home/nutanix/foundation/isos/hypervisor/esx
Hyper-V ISO image: /home/nutanix/foundation/isos/hypervisor/hyperv

Note: You must provide a supported ESXi or Hyper-V ISO image (see Hypervisor ISO Images
on page 49). You do not have to provide an AHV image because Foundation automatically
puts an AHV ISO into /home/nutanix/foundation/isos/hypervisor/kvm. However, if you want
to install a different version of AHV, download the ISO file from the Nutanix support portal (see
Downloading Installation Files on page 45).
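For example, assuming SSH access to the Foundation VM is available, you downloaded the files to a workstation on the same network, and the Foundation VM address is 10.1.1.2 (both the address and the ESXi ISO file name below are placeholders), you could copy the files and then confirm that they landed in the expected folders:
scp nutanix_installer_package-version#.tar.gz nutanix@10.1.1.2:/home/nutanix/foundation/nos/
scp VMware-VMvisor-Installer-6.x.x.iso nutanix@10.1.1.2:/home/nutanix/foundation/isos/hypervisor/esx/
ls /home/nutanix/foundation/nos /home/nutanix/foundation/isos/hypervisor/esx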

Setting Up the Network


The network must be set up properly on site before imaging nodes through the Foundation tool. To set up
the network connections, do the following:



Note: You can connect to either a managed switch (routing tables) or a flat switch (no routing
tables). A flat switch is often recommended to protect against configuration errors that could
affect the production environment. Foundation includes a multi-homing feature that allows you
to image the nodes using production IP addresses despite being connected to a flat switch (see
Imaging Bare Metal Nodes on page 24). See Network Requirements on page 51 for general
information about the network topology and port access required for a cluster.

1. Connect the first 1 GbE network interface of each node to a 1 GbE Ethernet switch. The IPMI LAN
interfaces of the nodes must be in failover mode (factory default setting).
The exact location of the port depends on the model type. See the hardware manual for your model to
determine the port location.
(Nutanix NX Series) The following figure illustrates the location of the network ports on the back of
an NX-3050 (middle RJ-45 interface).

Figure: Port Locations (NX-3050)


(Lenovo Converged HX Series) Unlike Nutanix NX-series systems, which only require that you
connect the 1 GbE port, Lenovo HX-series systems require that you connect both the system
management (IMM) port and one of the 1 GbE or 10 GbE ports. The following figure illustrates the
location of the network ports on the back of the HX3500 and HX5500.



Figure: Port Locations (HX System)
(Dell XC series) Unlike Nutanix NX-series systems, which only require that you connect the 1 GbE
port, Dell XC-series systems require that you connect both the iDRAC port and one of the 1 GbE
ports.

Figure: Port Locations (XC System)

2. Connect the installation workstation (see Preparing a Workstation on page 25) to the same 1 GbE
switch as the nodes.

Configuring Global Parameters


Before you begin: Complete Imaging Bare Metal Nodes on page 24.
Video: Click here to see a video (MP4 format) demonstration of this procedure, or click here to
see a video demonstration of the complete cluster imaging procedure. (The videos may not reflect
the latest features described in this section.)

1. Click the Nutanix Foundation icon on the Foundation VM desktop to start the Foundation GUI.
Note: See Preparing Installation Environment on page 25 if Oracle VM VirtualBox is not
started or the Foundation VM is not running currently. You can also start the Foundation GUI by
opening a web browser and entering http://localhost:8000/gui/index.html.
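If the GUI does not load, a quick way to confirm that the Foundation service is listening (assuming curl is available in the Foundation VM) is to request the same URL from a terminal:
curl -I http://localhost:8000/gui/index.html
An HTTP 200 response indicates that the service is up; otherwise, verify that the Foundation VM has finished booting.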



Figure: Foundation VM Desktop

The Global Configuration screen appears. Use this screen to configure network addresses.

Note: You can access help from the gear icon pull-down menu (top right), but this
requires Internet access. If necessary, copy the help URL to a browser with Internet access.

Figure: Global Configuration Screen

2. In the top section of the screen, enter appropriate values for the IPMI, hypervisor, and Controller VM in
the indicated fields:

Note: The parameters in this section are global and will apply to all the imaged nodes.



Figure: Global Configuration Screen: IPMI, Hypervisor, and CVM Parameters

a. IPMI Netmask: Enter the IPMI netmask value.

b. IPMI Gateway: Enter an IP address for the gateway.

c. IPMI Username: Enter the IPMI user name. The default user name is ADMIN.

d. IPMI Password: Enter the IPMI password. The default password is ADMIN.
Check the show password box to display the password as you type it.

e. Hypervisor Netmask: Enter the hypervisor netmask value.

f. Hypervisor Gateway: Enter an IP address for the gateway.

g. DNS Server IP: Enter the IP address of the DNS server.

h. CVM Netmask: Enter the Controller VM netmask value.

i. CVM Gateway: Enter an IP address for the gateway.

j. CVM Memory: Select a memory size for the Controller VM from the pull-down list. For more
information about Controller VM memory configurations, see Controller VM Memory and vCPU
Configurations on page 54.
This field is set initially to default. (The default amount varies according to the node model type.)
The other options allow you to specify a memory size of 16 GB, 24 GB, 32 GB, or 64 GB. The
default setting represents the recommended amount for the model type. Assigning more memory
than the default might be appropriate in certain situations. See Controller VM Memory and vCPU
Configurations on page 54 for memory sizing recommendations.

3. If you are using a flat switch (no routing tables) for installation and require access to multiple subnets,
check the Multi-Homing box in the bottom section of the screen.
When the box is checked, a line appears to enter Foundation VM virtual IP addresses. The purpose
of the multi-homing feature is to allow the Foundation VM to configure production IP addresses when
using a flat switch. Multi-homing assigns the Foundation VM virtual IP addresses on different subnets
so that you can use customer-specified IP addresses regardless of their subnet.
Enter unique IPMI, hypervisor, and Controller VM IP addresses. Make sure that the addresses
match the subnets specified for the nodes to be imaged (see Configuring Node Parameters on
page 34).
If this box is not checked, Foundation requires that either all IP addresses are on the same subnet or
that the configured IPMI, hypervisor, and Controller VM IP addresses are routable.



Figure: Global Configuration Screen: Multi-Homing

4. Click the Next button at the bottom of the screen to configure the nodes to be imaged (see Configuring
Node Parameters on page 34).

Configuring Node Parameters


Before you begin: Complete Configuring Global Parameters on page 31.
Video: Click here to see a video (MP4 format) demonstration of this procedure, or click here to
see a video demonstration of the complete cluster imaging procedure. (The videos may not reflect
the latest features described in this section.)
The Block & Node Config screen appears. This screen allows you to configure discovered nodes and
add other (bare metal) nodes to be imaged. Upon opening this screen, Foundation searches the network
for unconfigured Nutanix nodes (that is, factory prepared nodes that are not part of a cluster) and then
displays information about the discovered blocks and nodes. The discovery process can take several
minutes if there are many nodes on the network. Wait for the discovery process to complete before
proceeding. The message "Searching for nodes. This may take a while" appears during discovery.

Note: Foundation discovers nodes on the same subnet as the Foundation VM only. Any nodes
to be imaged that reside on a different subnet must be added explicitly (see step 2). In addition,
Foundation discovers unconfigured Nutanix nodes only. If you are running Foundation on a
preconfigured block with an existing cluster and you want Foundation to image those nodes, you
must first destroy the existing cluster in order for Foundation to discover those nodes.



Figure: Node Configuration Screen

1. Review the list of discovered nodes.


A table appears with a section for each discovered block that includes information about each node in
the block.
You can exclude a block by clicking the X on the far right of that block. The block disappears from
the display, and the nodes in that block will not be imaged. Clicking the X on the top line removes all
the displayed blocks.
To repeat the discovery process (search for unconfigured nodes again), click the Retry Discovery
button. You can reset all the global and node entries to the default state by selecting Reset
Configuration from the gear icon pull-down menu.

2. To image additional (bare metal) nodes, click the Add Blocks button.
A window appears to add a new block. Do the following in the indicated fields:

Figure: Add Bare Metal Blocks Window



a. Number of Blocks: Enter the number of blocks to add.

b. Nodes per Block: Enter the number of nodes to add in each block.
All added blocks get the same number of nodes. To add multiple blocks with differing nodes per
block, add the blocks as separate actions.

c. Click the Create button.

The window closes and the new blocks appear at the end of the discovered blocks table.

3. Configure the fields for each node as follows:

a. Block ID: Do nothing in this field because it is a unique identifier for the block that is assigned
automatically.

b. Position: Uncheck the boxes for any nodes you do not want to be imaged.
The value (A, B, and so on) indicates the node placement in the block such as A, B, C, D for a four-
node block. You can exclude the node in that block position from being imaged by unchecking the
appropriate box. You can check (or uncheck) all boxes by clicking Select All or (Unselect All) above
the table on the right.

c. IPMI MAC Address: For any nodes you added in step 2, enter the MAC address of the system
management interface in this field.
Foundation requires that you provide the MAC address for nodes it has not discovered. (This field is
read-only for discovered nodes and displays a value of "N/A" for those nodes.)

Caution: Any existing data on the node will be destroyed during imaging. If you are using
the add node option to re-image a previously used node, do not proceed until you have
saved all the data on the node that you want to keep.
For Nutanix NX-series nodes, the MAC address of the IPMI interface normally appears on a
label on the back of each node. (Make sure you enter the MAC address from the label that starts
with "IPMI:", not the one that starts with "LAN:".) The MAC address appears in the standard
form of six two-digit hexadecimal numbers separated by colons, for example 00:25:90:D9:01:98.

Figure: (NX Series) IPMI MAC Address Label

For Lenovo Converged HX Series nodes, the MAC address of the IMM interface appears under IMM
LLA2 on a label that is attached to the power supply. Separate the last six pairs of characters with a colon (for example, EF:FF:FE:03:66:CD).


Figure: (Lenovo Converged HX Series) IMM MAC Address Label



d. IPMI IP: Do one of the following in this field:

Note: If you are using a flat switch, the IP addresses must be on the same subnet as the
Foundation VM unless you configure multi-homing (see Configuring Global Parameters on
page 31).
To specify the IPMI addresses manually, go to the line for each node and enter (or update) the IP
address in that field.
To specify the IPMI addresses automatically, enter a starting IP address in the top line ("Start
IP address" field) of the IPMI IP column. The entered address is assigned to the IPMI port of
the first node, and consecutive IP addresses (starting from the entered address) are assigned
automatically to the remaining nodes. Discovered nodes are sorted first by block ID and then by
position, so IP assignments are sequential. If you do not want all addresses to be consecutive,
you can change the IP address for specific nodes by updating the address in the appropriate
fields for those nodes.

Note: Automatic assignment is not used for addresses ending in 0, 1, 254, or 255
because such addresses are commonly reserved by network administrators.

e. Hypervisor IP: Repeat the previous step for this field.


This sets the hypervisor IP addresses for all the nodes.

f. CVM IP: Repeat the previous step for this field.


This sets the Controller VM IP addresses for all the nodes.

Caution: The Nutanix high availability features require that both hypervisor and Controller
VM be in the same subnet. Putting them in different subnets reduces the failure protection
provided by Nutanix and can lead to other problems. Therefore, it is strongly recommended
that you keep both hypervisor and Controller VM in the same subnet.

g. Hypervisor Hostname: Do one of the following in this field:


A host name is automatically generated for each host (NTNX-unique_identifier). If these names
are acceptable, do nothing in this field.
Caution: Windows computer names (used in Hyper-V) have a 15 character limit. The
automatically generated names might be longer than 15 characters, which would result
in the same truncated name for multiple hosts in a Windows environment. Therefore, do
not use automatically generated names longer than 15 characters when the hypervisor is
Hyper-V.
To specify the host names manually, go to the line for each node and enter the desired name in
that field.
To specify the host names automatically, enter a base name in the top line of the Hypervisor
Hostname column. The base name with a suffix of "-1" is assigned as the host name of the first
node, and the base name with "-2", "-3" and so on are assigned automatically as the host names
of the remaining nodes. You can specify different names for selected nodes by updating the entry
in the appropriate field for those nodes.

h. NX-6035C : Check this box for any node that is a model NX-6035C.
Model NX-6035C nodes are used for "cold" storage and run nothing but a Controller VM; user VMs
are not allowed. NX-6035C nodes run AHV (and so will be imaged with AHV) regardless of what
hypervisor runs on the other nodes in a cluster (see Configuring Image Parameters on page 38).

4. To check which IP addresses are active and reachable, click Ping Scan (above the table on the right).



This does a ping test to each IP address in the IPMI, hypervisor, and CVM IP fields. An icon indicating the ping test result (returned response or no response) appears next to that field for each
node. This feature is most useful when imaging a previously unconfigured set of nodes. None of
the selected IPs should be pingable. Successful pings usually indicate a conflict with the existing
infrastructure.

Note: When re-imaging a configured set of nodes using the same network configuration, failure
to ping indicates a networking issue.
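If you prefer to spot-check a single address manually, the same test can be run from a terminal on the Foundation VM (the address below is a placeholder for one of your configured IPs):
ping -c 3 10.1.1.20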

5. Click the Next button at the bottom of the screen to select the images to use (see Configuring Image
Parameters on page 38).

Configuring Image Parameters


Before you begin: Complete Configuring Node Parameters on page 34.
Video: Click here to see a video (MP4 format) demonstration of this procedure, or click here to
see a video demonstration of the complete cluster imaging procedure. (The videos may not reflect
the latest features described in this section.)

The Node Imaging configuration screen appears. This screen is for selecting the AOS package and
hypervisor image to use when imaging the nodes.

Figure: Node Imaging Screen

1. Select the hypervisor to install from the pull-down list on the left.
The following choices are available:
ESX. Selecting ESX as the hypervisor displays the NOS Package and Hypervisor ISO Image fields
directly below.
Hyper-V. Selecting Hyper-V as the hypervisor displays the NOS Package, Hypervisor ISO Image,
and SKU fields.

Caution: Nodes must have a 64 GB DOM to install Hyper-V. Attempts to install Hyper-
V on nodes with less DOM capacity will fail. See Hyper-V Installation Requirements on
page 57 for additional considerations when installing a Hyper-V cluster.
AHV. Selecting AHV as the hypervisor displays the NOS Package and Hypervisor ISO Image fields.

2. In the NOS Package field, select the NOS package to use from the pull-down list.



Note: Click the Refresh NOS package link to display the current list of available images in
the ~/foundation/nos folder. If the desired AOS package does not appear in the list, you must
download it to the workstation (see Preparing Installation Environment on page 25).

3. In the Hypervisor ISO Image field, select the hypervisor ISO image to use from the pull-down list.
Note: Click the Refresh hypervisor image link to display the current list of available images
in the ~/foundation/isos/hypervisor/[esx|hyperv] folder. If the desired hypervisor ISO
image does not appear in the list, you must download it to the workstation (see Preparing a
Workstation on page 25). Foundation automatically provides an ISO for AHV imaging in the
~/foundation/isos/hypervisor/kvm folder.

4. [Hyper-V only] In the SKU field, select the Hyper-V version to use from the pull-down list.
Three Hyper-V versions are supported: Free, Standard, Datacenter. This column appears only when
you select Hyper-V.
Note: See Hyper-V Installation Requirements on page 57 for additional considerations
when installing a Hyper-V cluster.

5. When all the settings are correct, do one of the following:


To create a new cluster, click the Next button at the bottom of the screen (see Configuring Cluster
Parameters on page 39).
To start imaging immediately (bypassing cluster configuration), click the Run Installation button at
the top of the screen (see Monitoring Progress on page 41).

Configuring Cluster Parameters


Before you begin: Complete Configuring Image Parameters on page 38.

Video: Click here to see a video (MP4 format) demonstration of this procedure, or click here to
see a video demonstration of the complete cluster imaging procedure. (The videos may not reflect
the latest features described in this section.)
The Clusters configuration screen appears. This screen allows you to create one or more clusters and
assign nodes to those clusters. It also allows you to enable diagnostic and health tests after creating the
cluster(s).



Figure: Cluster Configuration Screen

1. To add a new cluster that will be created after imaging the nodes, click Create New Cluster in the
Cluster Creation section at the top of the screen.
This section includes a table that is empty initially. A blank line appears in the table for the new cluster.
Enter the following information in the indicated fields:

a. Cluster Name: Enter a cluster name.

b. External IP: Enter an external (virtual) IP address for the cluster.


This field sets a logical IP address that always points to an active Controller VM (provided the cluster
is up), which removes the need to enter the address of a specific Controller VM. This parameter is
required for Hyper-V clusters and is optional for ESXi and AHV clusters. (This applies to NOS 4.0 or
later; it is ignored when imaging an earlier NOS release.)

c. CVM DNS Servers: Enter the Controller VM DNS server IP address or URL.
Enter a comma separated list to specify multiple server addresses in this field (and the next two
fields).

d. CVM NTP Servers: Enter the Controller VM NTP server IP address or URL.
You must enter an NTP server that the Controller VMs can reach. If the NTP server is not reachable
or if the time on the Controller VMs is ahead of the current time, cluster services may fail to start.
Note: For Hyper-V clusters, the CVM NTP Servers parameter must be set to the Active
Directory domain controller.

e. Hypervisor NTP Servers: Enter the hypervisor NTP server IP address or URL.



f. Max Redundancy Factor: Select a redundancy factor (2 or 3) for the cluster from the pull-down list.
This parameter specifies the number of times each piece of data is replicated in the cluster (either 2
or 3 copies). It sets how many simultaneous node failures the cluster can tolerate and the minimum
number of nodes required to support that protection.
Setting this to 2 means there will be two copies of data, and the cluster can tolerate the failure of
any single node or drive.
Setting this to 3 means there will be three copies of data, and the cluster can tolerate the failure
of any two nodes or drives in different blocks. A redundancy factor of 3 requires that the cluster
have at least five nodes, and it can be enabled only when the cluster is created. It is an option on
NOS release 4.0 or later. (In addition, containers must have replication factor 3 for guest VM data
to withstand the failure of two nodes.)

2. To run cluster diagnostic and/or health checks after creating a cluster, check the appropriate boxes in
the Post Image Testing section.
Check the Diagnostics box to run a diagnostic utility on the cluster. The diagnostic utility analyzes
several performance metrics on each node in the cluster. These metrics indicate whether the cluster
is performing properly. The results are stored in the ~/foundation/logs/diagnostics directory.
Check the NCC Testing box to run the Nutanix Cluster Check (NCC) test suite. This is a suite of
tests that check a variety of health metrics in the cluster. The results are stored in the ~/foundation/
logs/ncc directory. (This test is available on NOS 4.0 or later. Checking the box does nothing on an
earlier NOS release.)

3. To assign nodes to a new cluster (from step 1), check the boxes for each node in the Block and Nodes
field to be included in that cluster.
A section for each new cluster appears in the bottom of the screen. Each section includes all the nodes
to be imaged. You can assign a node to any of the clusters (or leave it unassigned), but a node cannot
be assigned to more than one cluster.
Note: This assignment is to a new cluster only. Uncheck the boxes for any nodes you want to
add to an existing cluster, which can be done through the web console or nCLI at a later time.

4. When all settings are correct, click the Run Installation button at the top of the screen to start the
installation process (see Monitoring Progress on page 41).

Monitoring Progress
Before you begin: Complete Configuring Cluster Parameters on page 39 (or Configuring Image
Parameters on page 38 if you are not creating a cluster).

Video: Click here to see a video (MP4 format) demonstration of this procedure, or click here to
see a video demonstration of the complete cluster imaging procedure. (The videos may not reflect
the latest features described in this section.)
When all the global, node, and cluster settings are correct, do the following:

1. Click the Run Installation button at the top of the screen.

Figure: Run Installation Button



This starts the installation process. First, the IPMI port addresses are configured. The IPMI port
configuration processing can take several minutes depending on the size of the cluster.

Figure: IPMI Configuration Status

Note: If the IPMI port configuration fails for one or more nodes in the cluster, the installation
process stops before imaging any of the nodes. To correct a port configuration problem, see
Fixing IPMI Configuration Problems on page 63.

2. Monitor the imaging and cluster creation progress.


If IPMI port addressing is successful, Foundation moves to node imaging and displays a progress
screen. The progress screen includes the following sections:
Progress bar at the top (blue during normal processing or red when there is a problem).
Cluster Creation Status section with a line for each cluster being created (status indicator, cluster
name, progress message, and log link).
Node Status section with a line for each node being imaged (status indicator, IPMI IP address,
progress message, and log link).

Figure: Foundation Progress Screen: Ongoing Installation

The status message for each node (in the Node Status section) displays the imaging percentage
complete and current step. Nodes are imaged in parallel, and the imaging process takes about 45
minutes. You can monitor overall progress by clicking the Log link at the top, which displays the
service.log contents in a separate tab or window. Click on the Log link for a node to display the log file
for that node in a separate tab or window.
Note: Simultaneous processing is limited to a maximum of 20 nodes. If the cluster contains
more than 20 nodes, the total processing time is about 45 minutes for each group of 20 nodes.
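For example, a 60-node installation is processed as three groups of 20 nodes, so plan on roughly 3 x 45 = 135 minutes for the imaging phase alone.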



When installation moves to cluster creation, the status message for each cluster (in the Cluster
Creation Status section) displays the percentage complete and current step. Cluster creation
happens quickly, but this step could take some time if you selected the diagnostic and NCC post-
creation tests. Click on the Log link for a cluster to display the log file for that cluster in a separate
tab or window.
When processing completes successfully, an "Installation Complete" message appears, along with
a green check mark in the Status field for each node and cluster. This means IPMI configuration
and imaging (both hypervisor and Nutanix Controller VM) across all the nodes in the cluster was
successful, and cluster creation was successful (if enabled).

Figure: Foundation Progress Screen: Successful Installation

3. If the progress bar turns red with a "There were errors in the installation" message and one or more
node or cluster entries have a red X in the status column, the installation failed at the node imaging or
cluster creation step. To correct such problems, see Fixing Imaging Problems on page 64. Clicking
the Back to config button returns you to the configuration screens to correct any entries. The default
per-node installation timeout is 30 minutes for ESXi or 60 minutes for Hyper-V and AHV, so you can
expect all the nodes (in each run of up to 20 nodes) to finish successfully or encounter a problem in that
amount of time.



Figure: Foundation Progress Screen: Failed Installation

Cleaning Up After Installation


Some information persists after imaging a cluster using Foundation. If you want to use the same
Foundation VM to image another cluster, the persistent information must be removed before attempting
another installation.

To remove the persistent information after an installation, go to a configuration screen and then click the
Reset Configuration option from the gear icon pull-down list in the upper right of the screen.
Selecting this option reinitializes the progress monitor, destroys the persisted configuration data, and
returns the Foundation environment to a fresh state.

Figure: Reset Configuration



4: Downloading Installation Files
Nutanix maintains a support portal where you can download the Foundation and AOS (or Phoenix) files
required to do a field installation. To download the required files, do the following:

1. Open a web browser and log in to the Nutanix Support portal: http://portal.nutanix.com.

2. Click Downloads from the main menu (at the top) and then select the desired page: AOS (NOS) to
download AOS files, Foundation to download Foundation files, or Phoenix to download Phoenix files.

Figure: Nutanix Support Portal Main Screen

3. To download a Foundation installation bundle (see Foundation Files on page 46), go to the
Foundation page and do one (or more) of the following:
To download the Java applet used in discovery (see Creating a Cluster on page 11), click link to
jnlp from online bundle. This downloads nutanix_foundation_applet.jnlp and allows you to start
discovery immediately.
To download an offline bundle containing the Java applet, click offline bundle. This downloads an
installation bundle that can be taken to environments which do not allow Internet access.
To download the standalone Foundation bundle (see Imaging Bare Metal Nodes on page 24), click
Foundation_VM-version#.ovf.tar. (The exact file name varies by release.) This downloads an
installation bundle that includes OVF and VMDK files.
To download an installation bundle used to upgrade standalone Foundation, click
foundation-version#.tar.gz (see Release Notes on page 6).
To download the current hypervisor ISO whitelist, click iso_whitelist.json.

Note: Use the filter option to display the files for a specific Foundation release.



Figure: Foundation Download Screen

4. To download an AOS release bundle, go to the AOS (NOS) page and click the button or link for the
desired release.
Clicking the Download version# button in the upper right of the screen downloads the latest
AOS release. You can download an earlier AOS release by clicking the appropriate Download
version# link under the ADDITIONAL RELEASES heading. The tar file to download is named
nutanix_installer_package-version#.tar.gz.

5. To download a Phoenix ISO image, go to the Phoenix page and click the file name link for the desired
Phoenix ISO image.
Note: Use the filter options to display the files for a specific Phoenix release and the desired
hypervisor type. Phoenix 2.1 or later includes support for all the hypervisors (AHV, ESXi, and
Hyper-V) in a single ISO while earlier versions have a separate ISO for each hypervisor type
(see Phoenix Files on page 47).

6. To download AHV, go to the Hypervisor Details page and click the file name link for the desired AHV
ISO file. AHV ISO files are named installer-el6.nutanix.version#.iso, where version# is the AHV
version.
Download this ISO file if you plan to image nodes manually by using the Phoenix procedure and the AHV ISO file included with the standalone Foundation VM is not the version that you want.
For information about manually imaging nodes, see Appendix: Imaging a Node (Phoenix) on
page 69.

Foundation Files
The following table describes the files required to install Foundation. Use the latest Foundation version
available unless instructed by Nutanix customer support to use an earlier version.



File Name Description

nutanix_foundation_applet.jnlp This is the Foundation Java applet. This is the file needed for doing a Controller VM-based installation (see Creating a Cluster on page 11) supported in Foundation 3.0 and later releases.
FoundationApplet-offline.zip This is an installation bundle that includes the
Foundation Java applet. Download and extract this
bundle for environments where Internet access is
not allowed.
Foundation_VM-version#.ovf This is the Foundation VM OVF configuration file
where version# is the Foundation version number.
Foundation_VM-version#-disk1.vmdk This is the Foundation VM VMDK file.
Foundation_VM-version#-disk1.qcow2 This is the Foundation VM data disk in qcow2
format.
Foundation_VM-version#.ovf.tar This is a Foundation tar file that contains
the Foundation_VM-version#.ovf and
Foundation_VM-version#-disk1.vmdk files.
Foundation 2.1 and later releases package the OVF
and VMDK files into this TAR file.
Foundation-version#.tar.gz This is a tar file used for upgrading when
Foundation is already installed (see Release Notes
on page 6).
nutanix_installer_package-version#.tar.gz This is the tar file used for imaging the desired
AOS release where version# is a version and build
number. Go to the NOS Releases download page
on the support portal to download this file. (You can
download all the other files from the Foundation
download page.)
iso_whitelist.json This file contains a list of supported ISO images.
Foundation uses the whitelist to validate an ISO
file before imaging (see Selecting the Images on
page 17).
VirtualBox-version#-OSX.dmg This is the Oracle VM VirtualBox installer for Mac
OS where version# is a version and build number.
VirtualBox-version#-Win.exe This is the Oracle VM VirtualBox installer for
Windows.

Phoenix Files
The following table describes the Phoenix ISO files.

Note: Starting with release 2.1, Foundation no longer uses a Phoenix ISO file for imaging.
Phoenix ISO files are now used only for single node imaging (see Appendix: Imaging a Node
(Phoenix) on page 69) and are generated by the user from Foundation and AOS tar files. The
Phoenix ISOs available on the support portal are only for those who are using an older version of
Foundation (pre 2.1).



File Name Description

phoenix-x.x_NOS-y.y.y.iso This is the Phoenix ISO image for a selected AOS version where x.x is the Phoenix version number
and y.y.y is the AOS version number. This version
applies to any hypervisor (AHV, ESXi, and Hyper-
V), and there is a separate file for each supported
AOS version. Version 2.1 and later (unlike earlier
versions) support a single Phoenix ISO that applies
across multiple hypervisors.
phoenix-x.x_ESX_NOS-y.y.y.iso This is the Phoenix ISO image (in version 2.0 or
earlier) for a selected AOS version on the ESXi
hypervisor where x.x is the Phoenix version number
and y.y.y is the AOS version number. There is a
separate file for each supported AOS version.
phoenix-x.x_HYPERV_NOS-y.y.y.iso This is the Phoenix ISO image (in version 2.0 or
earlier) for a selected AOS version on the Hyper-
V hypervisor. There is a separate file for each
supported AOS version.
phoenix-x.x_KVM_NOS-y.y.y.iso This is the Phoenix ISO image (in version 2.0
or earlier) for a selected AOS version on the
KVM hypervisor. There is a separate file for each
supported AOS version.



5: Hypervisor ISO Images
An AHV ISO image is included as part of Foundation. However, customers must provide an ESXi or Hyper-
V ISO image for those hypervisors. Check with your VMware or Microsoft representative, or download an
ISO image from an appropriate VMware or Microsoft support site:
VMware Support: http://www.vmware.com/support.html
Note: For the Lenovo Converged HX Series platform, use the custom ISOs that are available
on the VMware website (www.vmware.com) at Downloads > Product Downloads >
vSphere > Custom ISOs.
Microsoft Technet: http://technet.microsoft.com/en-us/evalcenter/dn205299.aspx
Microsoft EA portal: http://www.microsoft.com/licensing/licensing-options/enterprise.aspx
MSDN: http://msdn.microsoft.com/subscriptions/downloads/#FileId=57052
The list of supported ISO images appears in an iso_whitelist.json file used by Foundation to validate ISO
images. ISO files are identified in the whitelist by their MD5 value (not file name), so verify that the MD5
value of the ISO you want to use matches the corresponding one in the whitelist. You can download the
current whitelist from the Foundation page on the Nutanix support portal: https://portal.nutanix.com/#/page/
foundation/list
Note: The ISO images in the whitelist are the ones supported in Foundation, but some might no
longer be available from the download sites.
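For example, from a terminal on the Foundation VM you could compute the MD5 value of a downloaded ISO and look up the matching entry in your copy of the whitelist (both file names below are placeholders):
md5sum VMware-VMvisor-Installer-6.x.x.iso
grep -A 8 "$(md5sum VMware-VMvisor-Installer-6.x.x.iso | cut -d' ' -f1)" iso_whitelist.json
If the grep returns nothing, the ISO is not in the whitelist and Foundation will not accept it for imaging.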

The following table describes the fields that appear in the iso_whitelist.json file for each ISO image.

iso_whitelist.json Fields

Name Description

(n/a) Displays the MD5 value for that ISO image.


min_foundation Displays the earliest Foundation version that
supports this ISO image. For example, "2.1"
indicates you can install this ISO image using
Foundation version 2.1 or later (but not an earlier
version).
hypervisor Displays the hypervisor type (esx, hyperv, or
kvm). The "kvm" designation means the Acropolis
hypervisor (AHV). Entries with a "linux" hypervisor
are not available; they are for Nutanix internal use
only.
min_nos Displays the earliest AOS version compatible with
this hypervisor ISO. A null value indicates there are
no restrictions.



Name Description
friendly_name Displays a descriptive name for the hypervisor
version, for example "ESX 6.0" or "Windows
2012r2".
version Displays the hypervisor version, for example "6.0"
or "2012r2".
unsupported_hardware Lists the Nutanix models on which this ISO cannot
be used. A blank list indicates there are no model
restrictions. However, conditional restrictions such
as the limitation that Haswell-based models support
only ESXi version 5.5 U2a or later may not be
reflected in this field.
skus (Hyper-V only) Lists which Hyper-V types (datacenter, standard,
and free) are supported with this ISO image. In
most cases, only datacenter and standard are
supported.
compatible_versions Reflects through regular expressions the hypervisor
versions that can co-exist with the ISO version in an
Acropolis cluster (primarily for internal use).

The following is a sample entry from the whitelist for an ESXi 6.0 image.
"iso_whitelist": {
"478e2c6f7a875dd3dacaaeb2b0b38228": {
"min_foundation": "2.1",
"hypervisor": "esx",
"min_nos": null,
"friendly_name": "ESX 6.0",
"version": "6.0",
"unsupported_hardware": [],
"compatible_versions": {
"esx": ["^6\\.0.*"]
}



6: Network Requirements
When configuring a Nutanix block, you will need to ask for the IP addresses of components that should
already exist in the customer network, as well as IP addresses that can be assigned to the Nutanix cluster.
You will also need to make sure to open the software ports that are used to manage cluster components
and to enable communication between components such as the Controller VM, Web console, Prism
Central, hypervisor, and the Nutanix hardware.

Existing Customer Network

You will need the following information during the cluster configuration:
Default gateway
Network mask
DNS server
NTP server
You should also check whether a proxy server is in place in the network. If so, you will need the IP address
and port number of that server when enabling Nutanix support on the cluster.

New IP Addresses

Each node in a Nutanix cluster requires three IP addresses, one for each of the following components:
IPMI interface
Hypervisor host
Nutanix Controller VM
All Controller VMs and hypervisor hosts must be on the same subnet. No systems other than the Controller
VMs and hypervisor hosts can be on this network, which must be isolated and protected.

Software Ports Required for Management and Communication

The following Nutanix network port diagrams show the ports that must be open for supported hypervisors.
The diagrams also show ports that must be opened for infrastructure services.



Figure: Nutanix Network Port Diagram for VMware ESXi

Figure: Nutanix Network Port Diagram for the Acropolis Hypervisor



Figure: Nutanix Network Port Diagram for Microsoft Hyper-V



7: Controller VM Memory Configurations

Controller VM Memory and vCPU Configurations


This topic lists the recommended Controller VM memory allocations for models and features.

Controller VM Memory Configurations for Base Models

Platform Default

Platform                                                          Recommended Memory (GB)   Default Memory (GB)   vCPUs
Default configuration for all platforms unless otherwise noted    16                        16                    8

The following tables show the minimum amount of memory and vCPU requirements and recommendations
for the Controller VM on each node for platforms that do not follow the default.

Nutanix Platforms

Platform      Recommended Memory (GB)   Default Memory (GB)   vCPUs
NX-1020       12                        12                    4
NX-6035C      24                        24                    8
NX-6035-G4    24                        16                    8
NX-8150       32                        32                    8
NX-8150-G4    32                        32                    8
NX-9040       32                        16                    8
NX-9060-G4    32                        16                    8

Dell Platforms

Platform      Recommended Memory (GB)   Default Memory (GB)   vCPUs
XC730xd-24    32                        16                    8
XC6320-6AF    32                        16                    8
XC630-10AF    32                        16                    8

Lenovo Platforms

Platform   Default Memory (GB)   vCPUs
HX-5500    24                    8
HX-7500    24                    8

Controller VM Memory Configurations for Features

The following table lists the minimum amount of memory required when enabling features. The memory
size requirements are in addition to the default or recommended memory available for your platform
(Nutanix, Dell, Lenovo) as described in Controller VM Memory Configurations for Base Models. The additional memory required for enabled features never exceeds 16 GB in total.

Note: Default or recommended platform memory + memory required for each enabled feature =
total Controller VM Memory required

Feature(s)                                                              Memory (GB)
Capacity Tier Deduplication (includes Performance Tier Deduplication)   16
Redundancy Factor 3                                                     8
Performance Tier Deduplication                                          8
Cold Tier nodes (6035-C) + Capacity Tier Deduplication                  4
Performance Tier Deduplication + Redundancy Factor 3                    16
Capacity Tier Deduplication + Redundancy Factor 3                       16
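For example, using the tables above, an NX-8150 (32 GB recommended) with Redundancy Factor 3 enabled (8 GB) requires 32 + 8 = 40 GB of Controller VM memory.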

Controller VM Memory and vCPU Configurations (Broadwell/G5)


This topic lists the recommended Controller VM memory allocations for workload categories.

Controller VM Memory Configurations for Base Models

Platform Default

Platform                                                          Recommended Memory (GB)   Default Memory (GB)   vCPUs
Default configuration for all platforms unless otherwise noted    16                        16                    8



The following table shows the minimum amount of memory required for the Controller VM on each node for
platforms that do not follow the default. For the workload translation into models, see Platform Workload
Translation (Broadwell/G5) on page 56.
Note: To calculate the number of vCPUs for your model, use the number of physical cores per
socket in your model. The minimum number of vCPUs your Controller VM can have is eight and
the maximum number is 12.
If your CPU has less than eight logical cores, allocate a maximum of 75 percent of the cores of a
single CPU to the Controller VM. For example, if your CPU has 6 cores, allocate 4 vCPUs.

Nutanix Broadwell Models

The following table displays the categories for the platforms.


Platform                                     Default Memory (GB)
VDI, server virtualization                   16
Storage only                                 24
Light Compute                                24
Large server, high-performance, all-flash    32

Platform Workload Translation (Broadwell/G5)


The following table maps workload types to the corresponding Nutanix and Lenovo models.

Workload Features                 Nutanix NX Model   Lenovo HX Model
VDI                               NX-1065S-G5        HX3310
                                  SX-1065-G5         HX3310-F
                                  NX-1065-G5         HX2310-E
                                  NX-3060-G5         HX3510-G
                                  NX-3155G-G5        HX3710
                                  NX-3175-G5         HX3710-F
                                  -                  HX2710-E
Storage Heavy                     NX-6155-G5         HX5510
                                  NX-8035-G5         -
                                  NX-6035-G5         -
Light Compute nodes               NX-6035C-G5        HX5510-C
High Performance and All-Flash    NX-8150-G5         HX7510
                                  NX-9060-G5         -



8: Hyper-V Installation Requirements
Ensure that the following requirements are met before installing Hyper-V.

Windows Active Directory Domain Controller

Requirements:
The primary domain controller version must be at least 2008 R2.
Note: If you have Volume Shadow Copy Service (VSS) based backup tool (for example
Veeam), functional level of Active Directory must be 2008 or higher.
Active Directory Web Services (ADWS) must be installed and running. By default, connections are
made over TCP port 9389, and firewall policies must enable an exception on this port for ADWS.
To test that ADWS is installed and running on a domain controller, log on by using a domain
administrator account in a Windows host other than the domain controller host that is joined to the
same domain and has the RSAT-AD-Powershell feature installed, and run the following PowerShell
command. If the command prints the primary name of the domain controller, then ADWS is installed and
the port is open.
> (Get-ADDomainController).Name

If the free version of Hyper-V is installed, the primary domain controller server must not block
PowerShell remoting.
To test this scenario, log in by using a domain administrator account in a Windows host and run the
following PowerShell command.
> Invoke-Command -ComputerName (Get-ADDomainController).Name -ScriptBlock {hostname}

If the command prints the name of the Active Directory server hostname, then PowerShell remoting to
the Active Directory server is not blocked.

The domain controller must run a DNS server.


Note: If any of the above requirements are not met, you need to manually create an Active
Directory computer object for the Nutanix storage in the Active Directory, and add a DNS entry
for the name.
Ensure that the Active Directory domain is configured correctly for consistent time synchronization.
Accounts and Privileges:
An Active Directory account with permission to create new Active Directory computer objects for either a
container or Organizational Unit (OU) where Nutanix nodes are placed. The credentials of this account
are not stored anywhere.
An account that has sufficient privileges to join a Windows host to a domain. The credentials of this
account are not stored anywhere. These credentials are only used to join the hosts to the domain.
Additional Information Required:
The IP address of the primary domain controller.



Note: The primary domain controller IP address is set as the primary DNS server on all
the Nutanix hosts. It is also set as the NTP server in the Nutanix storage cluster to keep the
Controller VM, host, and Active Directory time synchronized.
The fully qualified domain name to which the Nutanix hosts and the storage cluster is going to be joined.

SCVMM

Note: Relevant only if you have SCVMM in your environment.

Requirements:
The SCVMM version must be at least 2012 R2, and it must be installed on Windows Server 2012 or a
newer version.
The SCVMM server must allow PowerShell remoting.
To test this scenario, log in by using the SCVMM administrator account in a Windows host and run the
following PowerShell command on a Windows host that is different from the SCVMM host (for example,
run the command from the domain controller). If the command prints the name of the SCVMM server, then
PowerShell remoting to the SCVMM server is not blocked.
> Invoke-Command -ComputerName scvmm_server -ScriptBlock {hostname} -Credential MYDOMAIN
\username

Replace scvmm_server with the SCVMM host name and MYDOMAIN with Active Directory domain
name.
Note: If the SCVMM server does not allow PowerShell remoting, you can perform the SCVMM
setup manually by using the SCVMM user interface.

The ipconfig command must run in a PowerShell window on the SCVMM server. To verify run the
following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {ipconfig} -Credential
MYDOMAIN\username

Replace scvmm_server_name with the SCVMM host name and MYDOMAIN with Active Directory
domain name.

The SMB client configuration in the SCVMM server should have RequireSecuritySignature set to
False. To verify, run the following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {Get-SMBClientConfiguration |
FL RequireSecuritySignature}

Replace scvmm_server_name with the SCVMM host name.


This can be set to True by a domain policy. In this case, the domain policy should be modified to set it
to False. Also, if it is True, you can configure it back to False, but the change might not persist if a
policy reverts it back to True. To change it, log in to the SCVMM host as a domain administrator and run
the following PowerShell command.
Set-SMBClientConfiguration -RequireSecuritySignature $False -Force

If you are changing it from True to False, it is important to confirm that the policies that are on the
SCVMM host have the correct value. On the SCVMM host, run rsop.msc to review the resultant set
of policy details, and verify the value by navigating to Servername > Computer Configuration >
Windows Settings > Security Settings > Local Policies > Security Options: Policy Microsoft
network client: Digitally sign communications (always). The value displayed in RSOP must
be Disabled or Not Defined for the change to persist. Also, the group policies that have been
configured in the domain to apply to the SCVMM server should be updated to change this to



Disabled, if the RSOP shows it as Enabled. Otherwise, the RequireSecuritySignature changes
back to True at a later time. After setting the policy in Active Directory and propagating to the domain
controllers, refresh the SCVMM server policy by running the command gpupdate /force. Confirm in
RSOP that the value is Disabled.

Note: If security signing is mandatory, then you need to enable Kerberos in the Nutanix cluster.
In this case, it is important to ensure that the time remains synchronized between the Active
Directory server, the Nutanix hosts, and the Nutanix Controller VMs. The Nutanix hosts and the
Controller VMs set their NTP server as the Active Directory server, so it should be sufficient to
ensure that Active Directory domain is configured correctly for consistent time synchronization.

Accounts and Privileges:


When adding a host or a cluster to the SCVMM, the run-as account you are specifying for managing the
host or cluster must be different from the service account that was used to install SCVMM.
Run-as account must be a domain account and must have local administrator privileges on the Nutanix
hosts. This can be a domain administrator account. When the Nutanix hosts are joined to the domain,
the domain administrator accounts automatically take administrator privileges on the host. If the
domain account used as the run-as account in SCVMM is not a domain administrator account, you
need to manually add it to the list of local administrators on each host by running sconfig.
SCVMM domain account with administrator privileges on SCVMM and PowerShell remote execution
privileges.
If you want to install SCVMM server, a service account with local administrator privileges on the
SCVMM server.

IP Addresses

One IP address for each Nutanix host.


One IP address for each Nutanix Controller VM.
One IP address for each Nutanix host IPMI interface.
One IP address for the Nutanix storage cluster.
One IP address for the Hyper-V failover cluster.
Note: For N nodes, (3*N + 2) IP addresses are required. All IP addresses must be in the same
subnet.
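For example, a four-node cluster requires 3*4 + 2 = 14 IP addresses: 12 for the hosts, Controller VMs, and IPMI interfaces, plus one for the Nutanix storage cluster and one for the Hyper-V failover cluster.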

DNS Requirements

Each Nutanix host must be assigned a name of 15 characters or less, which gets automatically added
to the DNS server during domain joining.
The Nutanix storage cluster needs to be assigned a name of 15 characters or less, which must be
added to the DNS server when the storage cluster is joined to the domain.
The Hyper-V failover cluster must be assigned a name of 15 characters or less, which gets
automatically added to the DNS server when the failover cluster is created.
After the Hyper-V configuration, all names must resolve to an IP address on the Nutanix hosts, the
SCVMM server (if applicable), and any other host that needs access to the Nutanix storage, for example,
a host running the Hyper-V Manager.

Storage Access Requirements

Any host that needs to access Nutanix storage must run at least Windows 8 or later version if it is a
desktop client, and Windows 2012 or later version if it is running Windows Server. This requirement is
because SMB 3.0 support is required for accessing Nutanix storage.
The IP address of the host must be whitelisted in the Nutanix storage cluster.



Note: The SCVMM host IP address is automatically included in the whitelist during the setup.
For other IP addresses, for example, if a backup server also needs a direct SMB access, you
can add those source addresses to the whitelist after the setup configuration is completed by
using the Web Console or the nCLI cluster add-to-nfs-whitelist command (see the example after this list).
For accessing a Nutanix SMB share from Windows 10 or Windows Server 2016, you must enable
Kerberos on the Nutanix cluster.
If Kerberos is not enabled in the Nutanix storage cluster (the default configuration), then the SMB client
in the host must not have RequireSecuritySignature set to True. For more information about checking
the policy, see SCVMM requirements. You can verify this by running Get-SmbClientConfiguration in the
host. If the SMB client is running in a Windows desktop instead of Windows Server, the account used to
log on into the desktop should not be linked to an external Microsoft account.
If Kerberos is enabled in the Nutanix storage cluster, you can access the storage only by using the DNS
name of the Nutanix storage cluster, and not by using the external IP address of the cluster.
Virtual Machine and virtual disk paths must always refer to the Nutanix storage cluster by name, not the
external IP address. If you use the IP address, it directs all the I/O to a single node in the cluster and
thereby compromises performance and scalability.
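As mentioned in the whitelist note above, additional addresses (such as a backup server) can be added to the whitelist from the nCLI after setup. The following is a sketch only; the address is a placeholder and the exact parameter syntax can vary by AOS release, so verify it against the nCLI reference for your version:
ncli> cluster add-to-nfs-whitelist ip-subnet-masks=10.10.10.50/255.255.255.255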

Host Maintenance Requirements

When applying Windows updates to the Nutanix hosts, the hosts should be restarted one at a time,
ensuring that Nutanix services come up fully in the Controller VM of the restarted host before updating
the next host. This can be accomplished by using Cluster Aware Updating and using a Nutanix-provided
script, which can be plugged into the Cluster Aware Update Manager as a pre-update script. This pre-
update script ensures that the Nutanix services go down on only one host at a time ensuring availability
of storage throughout the update procedure. For more information about Cluster Aware Updating, see
the Nutanix Hyper-V Administration guide.

Note: Ensure that automatic Windows updates are not enabled for the Nutanix hosts in the
domain policies.



9: Setting IPMI Static IP Address
You can assign a static IP address for an IPMI port by resetting the BIOS configuration.
To configure a static IP address for the IPMI port on a node, do the following:

1. Connect a VGA monitor and USB keyboard to the node.

2. Power on the node.

3. Press the Delete key during boot up when prompted to enter the BIOS setup mode.
The BIOS Setup Utility screen appears.

4. Click the IPMI tab to display the IPMI screen.

5. Select BMC Network Configuration and press the Enter key.

6. Select Update IPMI LAN Configuration, press Enter, and then select Yes in the pop-up window.

7. Select Configuration Address Source, press Enter, and then select Static in the pop-up window.



8. Select Station IP Address, press Enter, and then enter the IP address for the IPMI port on that node in
the pop-up window.

9. Select Subnet Mask, press Enter, and then enter the corresponding submask value in the pop-up
window.

10. Select Gateway IP Address, press Enter, and then enter the IP address for the node's network
gateway in the pop-up window.

11. When all the field entries are correct, press the F4 key to save the settings and exit the BIOS setup
mode.



10: Troubleshooting
This section provides guidance for fixing problems that might occur during a Foundation installation.
For help with IPMI configuration problems in a bare metal workflow, see Fixing IPMI Configuration
Problems on page 63.
For help with imaging problems, see Fixing Imaging Problems on page 64.
For answers to other common questions, see Frequently Asked Questions (FAQ) on page 65.

Fixing IPMI Configuration Problems


In a bare metal workflow, when the IPMI port configuration fails for one or more nodes in the cluster, or
it works but type detection fails and complains that it cannot reach an IPMI IP address, the installation
process stops before imaging any of the nodes. (Foundation will not go to the imaging step after an IPMI
port configuration failure, but it will try to configure the port address on all nodes before stopping.) Possible
reasons for a failure include the following:
• One or more IPMI MAC addresses are invalid or there are conflicting IP addresses. Go to the Block &
  Node Config screen and correct the IPMI MAC and IP addresses as needed (see Configuring Node
  Parameters on page 34).
• There is a user name/password mismatch. Go to the Global Configuration screen and correct the IPMI
  username and password fields as needed (see Configuring Global Parameters on page 31).
• One or more nodes are connected to the switch through the wrong network interface. Go to the back of
  the nodes and verify that the first 1GbE network interface of each node is connected to the switch (see
  Setting Up the Network on page 29).
• The Foundation VM is not in the same broadcast domain as the Controller VMs for discovered nodes
  or the IPMI interface for added (bare metal or undiscovered) nodes. This problem typically occurs
  because (a) you are not using a flat switch, (b) some node IP addresses are not in the same subnet as
  the Foundation VM, and (c) multi-homing was not configured.
  • If all the nodes are in the Foundation VM subnet, go to the Block & Node Config screen and correct
    the IP addresses as needed (see Configuring Node Parameters on page 34).
  • If the nodes are in multiple subnets, go to the Global Configuration screen and configure multi-
    homing (see Configuring Global Parameters on page 31).
• The IPMI interface is not set to failover. You can check for this through the BIOS (see Setting IPMI
  Static IP Address on page 61 to access the BIOS setup utility).
To identify and resolve IPMI port configuration problems, do the following:

1. Go to the Block & Node Config screen and review the problem IP address for the failed nodes (nodes
with a red X next to the IPMI address field).
Hovering the cursor over the address displays a pop-up message with troubleshooting information. This
can help you diagnose the problem. See the service.log file (in /home/nutanix/foundation/log) and
the individual node log files for more detailed information.
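For example, to follow the Foundation service log and the log of the first node while you retry the
configuration (a minimal sketch; substitute the node_N.log files for your failed nodes):
$ tail -f /home/nutanix/foundation/log/service.log /home/nutanix/foundation/log/node_0.log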



Figure: Foundation: IPMI Configuration Error

2. When you have corrected all the problems and are ready to try again, click the Configure IPMI button
at the top of the screen.

Figure: Configure IPMI Button

3. Repeat the preceding steps as necessary to fix all the IPMI configuration errors.

4. When all nodes have green check marks in the IPMI address column, click the Image Nodes button at
the top of the screen to begin the imaging step.
If you cannot fix the IPMI configuration problem for one or more of the nodes, you can bypass those
nodes and continue to the imaging step for the other nodes by clicking the Proceed button. In this case
you must configure the IPMI port address manually for each bypassed node (see Setting IPMI Static IP
Address on page 61).

Fixing Imaging Problems


When imaging fails for one or more nodes in the cluster, the progress bar turns red and a red check
appears next to the hypervisor address field for any node that was not imaged successfully. Possible
reasons for a failure include the following:
• Type detection failed. Check connectivity to the IPMI interface (bare metal workflow).
• There were network connectivity issues such as the following:
  • The connection is dropping intermittently. If intermittent failures persist, look for conflicting IP
    addresses.
  • [Hyper-V only] SAMBA is not up. If Hyper-V complains that it failed to mount the install share, restart
    SAMBA with the command "sudo service smb restart".
• Foundation ran out of disk space during the hypervisor or Phoenix preparation phase. Free up some
  space by deleting extraneous ISO images. In addition, a Foundation crash could leave a /tmp/tmp*
  directory that contains a copy of an ISO image, which you can unmount (if necessary) and delete.
  Foundation needs about 9 GB of free space for Hyper-V and about 3 GB for ESXi or AHV.
• The host boots but complains that it cannot reach the Foundation VM. The message varies per
  hypervisor. For example, on ESXi you might see a "ks.cfg:line 12: "/.pre" script returned with an error"
  error message. Make sure you have assigned the host an IP address on the same subnet as the
  Foundation VM or have configured multi-homing (see Configuring Global Parameters on page 31). Also
  check for IP address conflicts.
To identify and resolve imaging problems, do the following:

1. See the individual log file for any failed nodes for information about the problem.
• Controller VM location for Foundation logs: ~/data/logs/foundation and ~/data/logs/foundation.out[.timestamp]
• Bare metal location for Foundation logs: /home/nutanix/foundation/log



2. When you have corrected the problems and are ready to try again, click the Image Nodes (bare metal
workflow) button.

Figure: Image Nodes Button (bare metal)

3. Repeat the preceding steps as necessary to fix all the imaging errors.
If you cannot fix the imaging problem for one or more of the nodes, you can image those nodes one at a
time (see Appendix: Imaging a Node (Phoenix) on page 69).

Frequently Asked Questions (FAQ)


This section provides answers to some common Foundation questions.

Installation Issues

What steps should I take when I encounter a problem?


Click the appropriate log link in the progress screen (see Creating the Cluster on page 20 or Monitoring
Progress on page 41) to view the relevant log file. In most cases the log file should provide some
information about the problem near the end of the file. If that information (plus the information in this
troubleshooting section) is sufficient to identify and solve the problem, fix the issue and then restart the
imaging process.
If you were unable to fix the problem, open a Nutanix support case. You can do this from the Nutanix
support portal (https://portal.nutanix.com/#/page/cases/form?targetAction=new). Upload relevant log
files as requested. The log files are in the following locations:
• Bare metal location for Foundation logs: /home/nutanix/foundation/log in your Foundation VM.
  This directory contains a service.log file for Foundation-related log messages, a log file for each
  node being imaged (named node_0.log, node_1.log, and so on), a log file for each cluster being
  created (named cluster_0.log, cluster_1.log, and so on), and http.access and http.error
  files for server-related log messages. Logs from past installations are stored in /home/nutanix/
  foundation/log/archive. In addition, the state of the current install process is stored in /home/
  nutanix/foundation/persisted_config.json. You can download the entire log archive from the
  following URL: http://foundation_ip:8000/foundation/log_archive.tar (see the example after this list).
• Controller VM location for Foundation logs: ~/data/logs/foundation (see preceding content
  description) and ~/data/logs/foundation.out[.timestamp], which corresponds to the service.log
  file.
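For example, to pull the complete bare metal log archive to your workstation before opening a case (a
sketch; replace foundation_ip with the IP address of your Foundation VM):
$ wget http://foundation_ip:8000/foundation/log_archive.tar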

I want to troubleshoot the operating system installation during cluster creation.


Point a VNC console to the hypervisor host IP address of the target node at port 5901.
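For example, with a TigerVNC-style client on your workstation (a sketch; replace host_ip with the
hypervisor host IP address of the target node):
$ vncviewer host_ip:5901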

I need to restart Foundation on the Controller VM.


To restart Foundation, log on to the Controller VM with SSH and then run the following command:
nutanix@cvm$ pkill foundation && genesis restart

My installation hangs, and the service log complains about type detection.
Verify that all of your IPMI IPs are reachable through Foundation. (On rare occasion the IPMI IP
assignment will take some time.) If you get a complaint about authentication, double-check your
password. If the problem persists, try resetting the BMC.
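If you do need to reset the BMC, ipmitool can typically do it over the network (a sketch; replace the
address and credentials with your own, and allow the BMC a minute or two to come back online):
$ ipmitool -I lanplus -H ipmi_ip -U ADMIN -P ADMIN mc reset cold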

Installation fails with an error where Foundation cannot ping the configured IPMI IP addresses.



Verify that the LAN interface is set to failover mode in the IPMI settings for each node. You can find this
setting by logging into IPMI and going to Configuration > Network > Lan Interface. Verify that the
setting is Failover (not Dedicate).
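To quickly confirm which of the configured IPMI addresses respond from the Foundation VM, a simple
loop works (a bash sketch; substitute your own addresses):
$ for ip in 192.168.10.21 192.168.10.22 192.168.10.23; do ping -c 2 $ip > /dev/null && echo "$ip reachable" || echo "$ip NOT reachable"; done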

The diagnostic box was checked to run after installation, but that test (diagnostics.py) does not
complete (hangs, fails, times out).
Running this test can result in timeouts or low IOPS if you are using 1G cables. Such cables might not
provide the performance necessary to run this test at a reasonable speed.

Foundation seems to be preparing the ISOs properly, but the nodes boot into <previous hypervisor>
and the install hangs.
The boot order for one or more nodes might be set incorrectly to select the USB over SATA DOM as the
first boot device instead of the CDROM. To fix this, boot the nodes into BIOS mode and either select
"restore optimized defaults" (F3 as of BIOS version 3.0.2) or give the CDROM boot priority. Reboot the
nodes and retry the installation.

I have misconfigured the IP addresses in the Foundation configuration page. How long is the timeout for
the call back function, and is there a way I can avoid the wait?
The call back timeout is 60 minutes. To stop the Foundation process and restart it, open up the terminal
in the Foundation VM and enter the following commands:
$ sudo /etc/init.d/foundation_service stop
$ cd ~/foundation/
$ mv persisted_config.json persisted_config.json.bak
$ sudo /etc/init.d/foundation_service start

Refresh the Foundation web page. If the nodes are still stuck, reboot them.

I need to reset a block to the default state.


Using the bare metal imaging workflow, download the desired Phoenix ISO image for AHV from the
support portal (see https://portal.nutanix.com/#/page/phoenix/list). Boot each node in the block to that
ISO and follow the prompts until the re-imaging process is complete. You should then be able to use
Foundation as usual.

The cluster create step is not working.


If you are installing NOS 3.5 or later, check the service.log file for messages about the problem. Next,
check the relevant cluster log (cluster_X.log) for cluster-specific messages. The cluster create step
in Foundation is not supported for earlier releases and will fail if you are using Foundation to image a
pre-3.5 NOS release. You must create the cluster manually (after imaging) for earlier NOS releases.
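For those earlier NOS releases, the cluster is typically created by hand from one of the Controller VMs
after imaging completes; a minimal sketch with example Controller VM IP addresses:
nutanix@cvm$ cluster -s 192.168.10.31,192.168.10.32,192.168.10.33 create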

I want to re-image nodes that are part of an existing cluster.


Do a cluster destroy prior to discovery. (Nodes in an existing cluster are ignored during discovery.)
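The destroy is run from any Controller VM in the existing cluster; a sketch (this permanently removes the
cluster configuration, so confirm that no data needs to be preserved):
nutanix@cvm$ cluster stop
nutanix@cvm$ cluster destroy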
My Foundation VM is complaining that it is out of disk space. What can I delete to make room?
Unmount any temporarily-mounted file systems using the following commands:
$ sudo fusermount -u /home/nutanix/foundation/tmp/fuse
$ sudo umount /tmp/tmp*
$ sudo rm -rf /tmp/tmp*

If more space is needed, delete some of the Phoenix ISO images from the Foundation VM.
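If you are not sure what is consuming the space, the standard utilities give a quick picture (a sketch;
adjust the paths if your images are stored elsewhere):
$ df -h /home/nutanix
$ du -sh /home/nutanix/foundation/isos/hypervisor/*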

I keep seeing the message "tar: Exiting with failure status due to previous errors'tar rf /home/
nutanix/foundation/log/archive/log-archive-20140604-131859.tar -C /home/nutanix/foundation ./
persisted_config.json' failed; error ignored."



This is a benign message. Foundation archives your persisted configuration file (persisted_config.json)
alongside the logs. Occasionally, there is no configuration file to back up. This is expected, and you may
ignore this message with no ill consequences.

Imaging fails after changing the language pack.


Do not change the language pack. Only the default English language pack is supported. Changing the
language pack can cause some scripts to fail during Foundation imaging. Even after imaging, character
set changes can cause problems for NOS.

[Hyper-V] I cannot reach the CVM console via ssh. How do I get to its console?
See KB article 1701 (https://portal.nutanix.com/#/page/kbs/details?targetId=kA0600000008fJhCAI).

[ESXi] Foundation is booting into pre-install Phoenix, but not the ESXi installer.
Check the BIOS version and verify it is supported. If it is not a supported version, upgrade it. See KB
article 1467 ( https://portal.nutanix.com/#/page/kbs/details?targetId=kA0600000008dDxCAI).

Network and Workstation Issues

I am having trouble installing VirtualBox on my Mac.


Turning off the WiFi can sometimes resolve this problem. For help with VirtualBox issues, see
https://www.virtualbox.org/wiki/End-user_documentation.
There can be a problem when the USB Ethernet adapter is listed as a 10/100 interface instead of a 1G
interface. To support a 1G interface, it is recommended that MacBook Air users connect to the network
with a Thunderbolt network adapter rather than a USB network adapter.

I get "This Kernel requires an x86-64 CPU, but only detected an i686 CPU" when trying to boot the VM
on VirtualBox.
The VM needs to be configured to expose a 64-bit CPU. For more information, see
https://forums.virtualbox.org/viewtopic.php?f=8&t=58767.

I am running the network setup script, but I do not see eth0 when I run ifconfig.
This can happen when you make changes to your VirtualBox network adapters. VirtualBox typically
creates a new interface (eth1, then eth2, and so on) to accommodate your new settings. To fix this, run
the following commands:
$ sudo rm /etc/udev/rules.d/70-persistent-net.rules
$ sudo shutdown -r now

This should reboot your machine and reset your adapter to eth0.

I have plugged in the Ethernet cables according to the directions and I can reach the IPMI interface, but
discovery is not finding the nodes to image.
Your Foundation VM must be in the same broadcast domain as the Controller VMs to receive their IPv6
link-local traffic. If you are installing on a flat 1G switch, ensure that the 10G cables are not plugged in.
(If they are, the Controller VMs might choose to direct their traffic over that interface and never reach
your Foundation VM.) If you are installing on a 10G switch, ensure that only the IPMI 10/100 port and
the 10G ports are connected.
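To confirm that IPv6 link-local traffic is actually reaching the Foundation VM, you can ping the all-nodes
multicast address on its interface (a sketch, assuming the Foundation VM uses eth0); reachable
Controller VMs answer with their link-local addresses:
$ ping6 -c 3 -I eth0 ff02::1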

The switch is dropping my IPMI connections in the middle of imaging.


If your network connection seems to be dropping out in the middle of imaging, try using an unmanaged
switch with spanning tree protocol disabled.

Foundation is stalled on the ping home phase.



The ping test will wait up to two minutes per NIC to receive a response, so a long delay in the ping
phase indicates a network connection issue. Check that your 10G cables are unplugged and your 1G
connection can reach Foundation.

How do I install on a 10/100 switch?


A 10/100 switch is not recommended, but it can be used for a few nodes. However, you may see
timeouts. It is highly recommended that you use a 1G or 10G switch if one is available to you.



11: Appendix: Imaging a Node (Phoenix)
This procedure describes how to install a selected hypervisor and the Nutanix Controller VM on a new or
replacement node from a hypervisor ISO image and a Phoenix (Controller VM installation tool) ISO image.
The steps differ depending on the hardware manufacturer (see Summary: Imaging Nutanix NX Series
Nodes on page 69 or Summary: Imaging Lenovo Converged HX Series Nodes on page 70).
Note: When adding a node to an existing cluster running Acropolis 4.5 or later, add that node
through the Prism web console, which allows you to install the hypervisor as part of the add
process. See the "Expanding a Cluster" section in the Web Console Guide for this procedure.
You can also use Foundation to image a single node (see Creating a Cluster on page 11 or
Imaging Bare Metal Nodes on page 24), so the procedure described in this appendix is typically
not necessary or recommended. However, you can use it to image a single node when using Prism
or Foundation is not a viable option, such as in hardware replacement scenarios.

Summary: Imaging Nutanix NX Series Nodes


Before you begin: If you are adding a new node, physically install that node at your site. See the Physical
Installation Guide for your model type for installation instructions. Imaging a new or replacement node can
be done either through the system management interface, which requires a network connection, or through
a direct attached USB. These instructions assume you are installing through the system management
interface.
Note: Imaging a node using Phoenix is restricted to Nutanix sales engineers, support engineers,
and partners. Contact Nutanix customer support or your partner for help with this procedure.

1. Set up the installation environment (see Preparing the Controller VM ISO Image on page 70).

2. Attach the hypervisor ISO image (see Installing a Hypervisor (Nutanix NX Series Platforms) on
page 71).

3. Install the desired hypervisor.


• Installing ESXi (Nutanix NX Series Platforms) on page 73
• Installing Hyper-V (Nutanix NX Series Platforms) on page 74
• Installing AHV (Nutanix NX Series Platforms) on page 77

4. Attach the Controller VM ISO image (see Attaching the Controller VM Image (Nutanix NX Series
Platforms) on page 77).

5. Install the Nutanix Controller VM and provision the hypervisor (see Installing the Controller VM on
page 83).



Summary: Imaging Lenovo Converged HX Series Nodes
Before you begin: Physically prepare the node to be imaged (as needed). See the Lenovo Converged HX
Series Hardware Replacement Guide (https://support.lenovo.com/us/en/docs/um104436) for information
on how to replace the boot drive.

Note: Imaging a node using Phoenix is restricted to Lenovo sales engineers, support engineers,
and partners.

1. Create the Controller VM ISO image (see Preparing the Controller VM ISO Image on page 70).

2. Attach the hypervisor ISO image (/home/nutanix/foundation/isos/hypervisor/kvm for AHV or
wherever you downloaded the ESXi ISO image; see Attaching an ISO Image (Lenovo Converged HX
Series Platforms) on page 80).

3. Install the desired hypervisor.


• Installing ESXi (Lenovo Converged HX Series Platforms) on page 82
• Installing AHV (Lenovo Converged HX Series Platforms) on page 83

4. Attach the Controller VM ISO image (phoenix-version#_NOS-version#.iso; see Attaching an ISO
Image (Lenovo Converged HX Series Platforms) on page 80).

5. Install the Controller VM and provision the hypervisor (see Installing the Controller VM on page 83).

Preparing the Controller VM ISO Image


Before you can image a new or replacement node, you need to create an installation image. You need to
run the installation image creation utility from a standalone Foundation VM.
To create the installation image for a node in the field, do the following:

1. Prepare the installation environment as described in Preparing Installation Environment. (You do not
have to perform the steps described in Setting Up the Network, because the Foundation VM is needed
only to create the installation image.)

2. Kill the Foundation service running in the Foundation VM.


$ sudo pkill -9 foundation

3. Navigate to the /home/nutanix/foundation/nos directory and unpack the compressed AOS tar archive.
$ gunzip nutanix_installer_package-version#.tar.gz

Note: If either the tar or gunzip command is not available, use the corresponding tar or
gunzip utility for your environment.

4. Create the Phoenix ISO.


$ cd /home/nutanix/foundation/bin
$ ./foundation --generate_phoenix --nos_package=aos_tar_archive

Replace aos_tar_archive with the full path to the AOS tar archive,
nutanix_installer_package-version#.tar. The command creates a Phoenix ISO image in the current
directory. The Phoenix ISO is named phoenix-version#_NOS-version#.iso, and is the file to use when
Installing the Controller VM on page 83.
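
To confirm that the image was generated, list it in the current directory (the exact file name depends on
the AOS and Phoenix versions in the bundle):
$ ls -lh phoenix-*_NOS-*.iso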



What to do next: Install the hypervisor.

Nutanix NX Series Platforms

Installing a Hypervisor (Nutanix NX Series Platforms)


This procedure describes how to install a hypervisor on a single node in a cluster in the field.
Before you begin: Prepare for installation by following Preparing the Controller VM ISO Image on
page 70.

Caution: The node must have a 64 GB DOM to install Hyper-V. Attempts to install Hyper-V on a
node with less DOM capacity will fail.
To install a hypervisor on a new or replacement node in the field, do the following:

Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may
not reflect the latest features described in this section.)

1. Verify you have access to the IPMI interface for the node.

a. Connect the IPMI port on that node to the network if it is not already connected.
A 1 or 10 GbE port connection is not required for imaging the node.

b. Assign an IP address (static or DHCP) to the IPMI interface on the node if it is not already assigned.
To assign a static address, see Setting IPMI Static IP Address on page 61.

2. Open a Web browser to the IPMI IP address of the node to be imaged.

3. Enter the IPMI login credentials in the login screen.


The default value for both user name and password is ADMIN (upper case).

Figure: IPMI Console Login Screen

The IPMI console main screen appears.


Note: The following steps might vary depending on the IPMI version on the node.



Figure: IPMI Console Screen

4. Select Console Redirection from the Remote Console drop-down list of the main menu, and then
click the Launch Console button.

Figure: IPMI Console: Remote Control Menu

5. Select Virtual Storage from the Virtual Media drop-down list of the remote console main menu.

Figure: IPMI Remote Console: Virtual Media Menu

6. Click the CDROM&ISO tab in the Virtual Storage window, select ISO File from the Logical Drive Type
field drop-down list, and click the Open Image button.

Figure: IPMI Virtual Storage Window

7. In the browse window, go to where the hypervisor ISO image is located (/home/nutanix/foundation/
isos/hypervisor/kvm for AHV or wherever you downloaded the ESXi or Hyper-V ISO image), select
that file, and then click the Open button.

8. Click the Plug In button and then the OK button to close the Virtual Storage window.



9. In the remote console main menu, select Set Power Reset in the Power Control drop-down list.
This causes the system to reboot using the selected hypervisor image.

Figure: IPMI Remote Console: Power Control Menu

What to do next: Complete installation by following the steps for the hypervisor:
• Installing ESXi (Nutanix NX Series Platforms) on page 73
• Installing Hyper-V (Nutanix NX Series Platforms) on page 74
• Installing AHV (Nutanix NX Series Platforms) on page 77

Installing ESXi (Nutanix NX Series Platforms)


Before you begin: Complete Installing a Hypervisor (Nutanix NX Series Platforms) on page 71.

1. Click Continue at the installation screen and then accept the end user license agreement on the next
screen.

Figure: ESXi Installation Screen

2. On the Select a Disk screen, select the SATADOM as the storage device, click Continue, and then click
OK in the confirmation window.

Figure: ESXi Device Selection Screen



3. In the keyboard layout screen, select a layout (such as US Default) and then click Continue.

4. In the root password screen, enter nutanix/4u as the root password.

Note: The root password must be nutanix/4u or the installation will fail.

5. Review the information on the Install Confirm screen and then click Install.

Figure: ESXi Installation Confirmation Screen

The installation begins and a dynamic progress bar appears.

6. When the Installation Complete screen appears, go back to the Virtual Storage screen, click the Plug
Out button, and then return to the Installation Complete screen and click Reboot.

What to do next: After the system reboots, you can install the Nutanix Controller VM and provision the
hypervisor (see Installing the Controller VM on page 83).

Note: Do not assign host IP addresses before installing the Nutanix Controller VM. Doing so can
cause the Controller VM install to fail.

Installing Hyper-V (Nutanix NX Series Platforms)


Before you begin: Complete Installing a Hypervisor (Nutanix NX Series Platforms) on page 71.

1. Start the installation.

2. At the Press any key to boot from CD or DVD prompt, press SHIFT+F10.
A command prompt appears.

3. Partition and format the DOM.

a. Start the disk partitioning utility.


> diskpart

b. List the disks to determine which one is the 60 GB SATA DOM.


list disk

c. Find the disk in the displayed list that is about 60 GB (only one disk will be that size). Select that disk
and then run the clean command:
select disk number
clean

d. Create and format a primary partition (size 1024 and file system fat32).
create partition primary size=1024
select partition 1
format fs=fat32 quick



e. Create and format a second primary partition (default size and file system ntfs).
create partition primary
select partition 2
format fs=ntfs quick

f. Assign the drive letter "C" to the DOM install partition volume.
list volume
list partition

This displays a table of logical volumes and their associated drive letter, size, and file system type.
Locate the volume with an NTFS file system and size of approximately 50 GB. If this volume (which
is the DOM install partition) is drive letter "C", go to the next step.
Otherwise, do one of the following:
If drive letter "C" is assigned currently to another volume, enter the following commands to
remove the current "C" drive volume and reassign "C" to the DOM install partition volume:
select volume cdrive_volume_id#
remove
select volume dom_install_volume_id#
assign letter=c

If drive letter "C" is not assigned currently, enter the following commands to assign "C" to the
DOM install partition volume:
select volume dom_install_volume_id#
assign letter=c

g. Exit the diskpart utility.


exit

4. Continue installation of the hypervisor.

a. Start the server setup utility.


> setup.exe

b. In the language selection screen that reappears, click the Next button.

c. In the install screen that reappears, click the Install now button.

d. In the operating system screen, select Windows Server 2012 Datacenter (Server Core
Installation) and then click the Next button.

Figure: Hyper-V Operating System Screen



e. In the license terms screen, check the I accept the license terms box and then click the Next
button.

f. In the type of installation screen, select Custom: Install Windows only (advanced).

Figure: Hyper-V Install Type Screen

g. In the where to install screen, select Partition 2 (the NTFS partition) of the DOM disk you just
formatted and then click the Next button.
Ignore the warning about free space. The installation location is Drive 6 Partition 2 in the example.

Figure: Hyper-V Install Disk Screen

The installation begins and a dynamic progress screen appears.

Figure: Hyper-V Progress Screen

h. After the installation is complete, manually boot the host.

5. After Windows boots up, press Ctrl-Alt-Delete and then log in as Administrator when prompted.

6. When prompted, change your password to nutanix/4u.

7. Install the Nutanix Controller VM and provision the hypervisor (see Installing the Controller VM on
page 83).



8. Open a command prompt and enter the following two commands:
> schtasks /create /sc onstart /ru Administrator /rp "nutanix/4u" /tn `
firstboot /tr D:\firstboot.bat
> shutdown /r /t 0

This causes a reboot and the firstboot script to run, after which the host will reboot two more times.
This process can take substantial time (possibly 15 minutes) without any progress indicators. To monitor
progress, log into the VM after the initial reboot and enter the command notepad "C:\Program Files
\Nutanix\Logs\first_boot.log". This displays a (static) snapshot of the log file. Repeat this command
as desired to see an updated version of the log file.

Note: A d:\firstboot_fail file appears when this process fails. If that file is not present, the
process is continuing (if slowly).

Installing AHV (Nutanix NX Series Platforms)


Before you begin: Complete Installing a Hypervisor (Nutanix NX Series Platforms) on page 71.

1. Monitor the installation process.


Installation starts automatically after the AHV ISO is mounted and the node reboots. A welcome screen
appears and then status messages appear as the installation progresses. When installation completes,
the node shuts down.

Figure: Welcome (Install Options) Screen

2. Power on the node.

3. After the system reboots, log back into the IPMI console, go to the CDROM&ISO tab in the Virtual
Storage window, select the AHV ISO file, and click the Plug Out button (and then the OK button) to
unmount the ISO (see Installing a Hypervisor (Nutanix NX Series Platforms) on page 71).

What to do next: Install the Nutanix Controller VM and provision the hypervisor (see Attaching the
Controller VM Image (Nutanix NX Series Platforms) on page 77).

Attaching the Controller VM Image (Nutanix NX Series Platforms)


This procedure describes how to install the Nutanix Controller VM and provision the hypervisor on a single
node in a cluster in the field.
Before you begin: Install a hypervisor on the node (see Installing a Hypervisor (Nutanix NX Series
Platforms) on page 71).
To install the Controller VM (and provision the hypervisor) on a new or replacement node, do the following:



Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may
not reflect the latest features described in this section.)

1. Verify you have access to the IPMI interface for the node.

a. Connect the IPMI port on that node to the network if it is not already connected.
A 1 or 10 GbE port connection is not required for imaging the node.

b. Assign an IP address (static or DHCP) to the IPMI interface on the node if it is not already assigned.
To assign a static address, see Setting IPMI Static IP Address on page 61.

2. Open a Web browser to the IPMI IP address of the node to be imaged.

3. Enter the IPMI login credentials in the login screen.


The default value for both user name and password is ADMIN (upper case).

Figure: IPMI Console Login Screen

The IPMI console main screen appears.

Note: The following steps might vary depending on the IPMI version on the node.

Figure: IPMI Console Screen

4. Select Console Redirection from the Remote Console drop-down list of the main menu, and then
click the Launch Console button.



Figure: IPMI Console Menu

5. Select Virtual Storage from the Virtual Media drop-down list of the remote console main menu.

Figure: IPMI Remote Console Menu (Virtual Media)

6. Click the CDROM&ISO tab in the Virtual Storage window, select ISO File from the Logical Drive Type
field drop-down list, and click the Open Image button.

Figure: IPMI Virtual Storage Window

7. In the browse window, go to where the phoenix-version#_NOS-version#.iso file is located (see step 3 in
Installing a Hypervisor (Nutanix NX Series Platforms) on page 71), select that file, and then click the
Open button.

8. Click the Plug In button and then the OK button to close the Virtual Storage window.

9. In the remote console main menu, select Set Power Reset in the Power Control drop-down list.
This causes the system to reboot using the selected Phoenix image. The Nutanix Installer screen
appears after rebooting.

Figure: IPMI Remote Console Menu (Power Control)

What to do next: Install the Controller VM by following Installing the Controller VM on page 83.



Lenovo Converged HX Series Platforms

Attaching an ISO Image (Lenovo Converged HX Series Platforms)


This procedure describes how to attach an ISO image.
Before you begin: Prepare for installation by following Preparing the Controller VM ISO Image on
page 70.
To install a hypervisor on a new or replacement node in the field, do the following:

1. Verify you have access to the IMM interface for the node.

a. Connect the IMM port on that node to the network if it is not already connected.
A data network connection is not required for imaging the node.

b. Assign an IP address (static or DHCP) to the IMM interface on the node if it is not already assigned.
Refer to the Lenovo Converged HX series documentation for instructions on setting the IMM IP
address.

2. Open a Web browser to the IMM IP address of the node to be imaged.

3. Enter the IMM login credentials in the login screen.


The default values for User name/Password are USERID/PASSW0RD (upper case).

Figure: IMM Console Login Screen

The IMM console main screen appears.



Figure: IMM Console Screen

4. Select Remote Control, and then click the Start remote control in single-user mode button.

Figure: IMM Console: Remote Control Screen

Click Continue in the Security Warning dialog box, and then click Run in the Java permission dialog box
that appears.



5. In the virtual console, select Virtual Media > Activate.

6. Select Virtual Media > Select Devices to Mount.


The Select Devices to Mount window appears.

7. Click the Add Image button, go to where the ISO image is located, select that file, and then click Open.

8. Check Mapped next to the ISO and click Mount Selected.

9. Select Tools > Power > Reboot (if the system is running) or On (if the system is turned off).
The system starts using the selected image.

Installing ESXi (Lenovo Converged HX Series Platforms)


Before you begin: Attach the hypervisor installation media (see Attaching an ISO Image (Lenovo
Converged HX Series Platforms) on page 80).

1. On the boot menu, select the installer and wait for it to load.

2. Press Enter to continue then F11 to accept the end user license agreement.

3. On the Select a Disk screen, select the approximately 100 GiB ServerRAID as the storage device, click
Continue, and then click OK in the confirmation window.

Figure: ESXi Device Selection Screen

4. In the keyboard layout screen, select a layout (such as US Default) and then click Continue.

5. In the root password screen, enter nutanix/4u as the root password.

Note: The root password must be nutanix/4u or the installation will fail.

6. Review the information on the Install Confirm screen and then click Install.
The installation begins and a dynamic progress bar appears.

7. When the Installation Complete screen appears, in the Video Viewer window click Virtual Media >
Unmount All and accept the warning message.

8. In the Installation Complete screen, select Reboot.


The system restarts.

What to do next: After the system reboots, you can install the Nutanix Controller VM and provision the
hypervisor (see Installing the Controller VM on page 83).
Note: Do not assign host IP addresses before installing the Nutanix Controller VM. Doing so can
cause the Controller VM install to fail.



Installing AHV (Lenovo Converged HX Series Platforms)
Before you begin: Attach the hypervisor installation media (see Attaching an ISO Image (Lenovo
Converged HX Series Platforms) on page 80).

1. Monitor the installation process.


Installation starts automatically after the AHV ISO is mounted and the node reboots. A welcome screen
appears and then status messages appear as the installation progresses. When installation completes,
the node shuts down.

Figure: Welcome (Install Options) Screen

2. Power on the node.

3. After the system reboots, log back into the IMM console and unmount the AHV ISO. In the Video Viewer
window click Virtual Media > Unmount All and accept the warning message.

What to do next: Attach the Controller VM ISO image, install the Nutanix Controller VM, and configure
the hypervisor (see Attaching an ISO Image (Lenovo Converged HX Series Platforms) on page 80 and
Installing the Controller VM on page 83).

Installing the Controller VM


This procedure describes how to install the Nutanix Controller VM and provision the hypervisor on a single
node in a cluster in the field.
Before you begin: Install a hypervisor on the node and attach the Controller VM ISO to the system.
Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may
not reflect the latest features described in this section.)

To install the Controller VM (and provision the hypervisor) on a new or replacement node, do the following:

1. Do the following in the Nutanix Installer configuration screen:

a. Review the values in the upper eight fields to verify they are correct, or update them if necessary.
Only the Block ID, Node Serial, Node Position, and Node Cluster ID fields can be edited in this
screen. Node Position must be A for all single-node blocks.

b. Do one of the following in the next three fields (check boxes):


If you are imaging a new node (fully populated with drives and other components), select both
Configure Hypervisor (to provision the hypervisor) and Clean CVM (to install the Controller
VM).



Note: You must select both to install the Controller VM; selecting Clean CVM by itself
will fail.
If you are imaging a replacement hypervisor boot drive or node (using existing drives), select
Configure Hypervisor only. This option provisions the hypervisor without installing a new
Controller VM.

Caution: Do not select Clean CVM if you are replacing a node or SATA DOM because
this option cleans the disks as part of the process, which means existing data will be lost.
If you are instructed to do so by Nutanix customer support, select Repair CVM. This option is for
repairing certain problem conditions. Ignore this option unless Nutanix customer support instructs
you to select it.

c. When all the fields are correct, click the Start button.

Figure: Nutanix Installer Screen

Installation begins and takes about 30 minutes.

2. After installation completes, unmount the ISO.


(Nutanix NX series) In the Virtual Storage window, click CDROM&ISO > Plug Out.
(Lenovo Converged HX series) In the Video Viewer window, click Virtual Media > Unmount All and
accept the warning message.

3. At the reboot prompt in the console, type Y to restart the node.



Figure: Installation Messages

On ESXi and AHV, the node restarts with the new image, additional configuration tasks run, and then
the host restarts again. Wait until this stage completes (typically 15-30 minutes depending on the
hypervisor) before accessing the node. No additional steps need to be performed.

Caution: Do not restart the host until the configuration is complete.

On Hyper-V, the node restarts, and a login prompt is displayed.

4. On Hyper-V, do the following:

a. At the login prompt, log in with administrator credentials.


A command prompt appears.

b. Schedule the firstboot script to run and restart the host.


> schtasks /create /sc onstart /ru Administrator /rp "nutanix/4u" /tn `
firstboot /tr D:\firstboot.bat
> shutdown /r /t 0

This causes a reboot and the firstboot script to run, after which the host will reboot two more times.
This process can take substantial time (possibly 15 minutes) without any progress indicators. To
monitor progress, log into the VM after the initial reboot and enter the command notepad "C:\Program
Files\Nutanix\Logs\first_boot.log". This displays a (static) snapshot of the log file. Repeat this
command as desired to see an updated version of the log file.
Note: A d:\firstboot_fail file appears if this process fails. If that file is not present, the
process is continuing (if slowly).

