Foundation 3.0
11-Aug-2016
Notice
Copyright
Copyright 2016 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.
License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.
Conventions
Convention            Description
user@host$ command    The commands are executed as a non-privileged user (such as nutanix) in the system shell.
root@host# command    The commands are executed as the root user in the vSphere or Acropolis host shell.
> command             The commands are executed in the Hyper-V host shell.
Version
Last modified: August 11, 2016 (2016-08-11 3:17:16 GMT-7)
2: Creating a Cluster................................................................................. 11
Discovering the Nodes.......................................................................................................................11
Defining the Cluster........................................................................................................................... 14
Setting Up the Nodes........................................................................................................................ 16
Selecting the Images......................................................................................................................... 17
Creating the Cluster...........................................................................................................................20
Configuring a New Cluster................................................................................................................ 22
6: Network Requirements......................................................................... 51
10: Troubleshooting...................................................................................63
Fixing IPMI Configuration Problems.................................................................................................. 63
Fixing Imaging Problems................................................................................................................... 64
Frequently Asked Questions (FAQ)...................................................................................................65
Release Notes
The Foundation release notes provide brief, high-level descriptions of changes, enhancements, notes,
and cautions as applicable to various releases of Foundation software and to the use of the software with
specific hardware platforms. Where applicable, the description includes a solution or workaround.
Foundation 3.0.6 is a patch release included with AOS 4.5.3. It resolves cluster creation issues
experienced when running Foundation 3.0.5 on a Controller VM. For the standalone imaging mode,
Foundation 3.1.1 remains the recommended version.
This release includes the following enhancements and changes:
Foundation supports NX Broadwell platforms.
On Broadwell and newer platforms, Controller VM vCPUs are assigned to a single NUMA node for
consistent performance.
The size of a NUMA node determines how many vCPUs are assigned to a Controller VM. [ENG-15638,
ENG-39813]
This release includes fixes for the following issues:
Controller VM-based Foundation fails when validating the network. The network validation progress
indicator remains at 0%. [ENG-52514]
Foundation fails during cluster creation if you specify a redundancy factor of 3. [ENG-50456]
Foundation fails if you attempt to install AHV and AOS 4.6 or later on a Nutanix cluster that is running a
version of AOS earlier than 4.6. The failure occurs when Foundation is passing control to the node that
it imaged first. [ENG-52129]
Foundation fails to create a cluster after imaging Dell XC series nodes with Hyper-V. [ENG-52593]
After the imaging process is complete, the cluster fails to start because the Foundation user interface
passes the node model as an imaging parameter. [ENG-50185]
The first-boot configuration script for ESXi does not set the active network interface card and port group
correctly if the nodes have both a 10GBase-T (Intel X540) card and a 10GBase SFP+ (Intel 82599)
card. Examples of hardware models that have both these cards are the NX-8150-G4 and NX-8150-
G5 platforms because they have an on-board 10GBase-T card and can accommodate an optional
10GBase SFP+ (dual-port or quad-port) as an add-on card.
You can work around the issue by doing one of the following:
Disconnect the onboard NIC that is connected to the local switch and then use Foundation.
Use Foundation 3.2.1.
[ENG-55973]
The following notes apply to the use of Foundation 3.1.x with specific hardware platforms:
NX-3175-G4
One or more of the following issues result from the use of an unsupported 1000BASE-T Copper
SFP transceiver module (SFP-to-RJ45 adapter) when imaging NX-3175-G4 nodes:
Foundation times out with the following message in node logs: INFO: Populating firmware
information for device bmc...
Foundation fails at random stages.
Foundation cannot communicate with the baseboard management controller (BMC).
To avoid encountering these issues, use a supported SFP-to-RJ45 adapter. For information about
supported adapters, see KB2422.
Note: Foundation supports imaging an ESXi, Hyper-V, or AHV hypervisor on nearly all Nutanix
hardware models with some restrictions. Click here (or log into the Nutanix support portal and
select Documentation > Compatibility Matrix from the main menu) for a list of supported
configurations. To check a particular configuration, go to the Filter By fields and select the
desired model, AOS version, and hypervisor in the first three fields and then set the last field to
Foundation. In addition, check the notes at the bottom of the table.
1. Download the required files, start the cluster creation GUI, and run discovery (see Discovering the
Nodes on page 11).
2. Define cluster parameters; specify Controller VM, hypervisor, and (optionally) IPMI global network
addresses; and (optionally) enable health tests after the cluster is created (see Defining the Cluster on
page 14).
3. Set up the node host names and IP addresses (see Setting Up the Nodes on page 16).
4. Select the AOS and hypervisor images to use (see Selecting the Images on page 17).
5. Start the process and monitor progress as the nodes are imaged and the cluster is created (see
Creating the Cluster on page 20).
6. After the cluster is created successfully, begin configuring the cluster (see Configuring a New Cluster on
page 22).
Note: If you have nodes running a pre-4.5 version of AOS, you cannot use this (Controller VM-
based) method to create a cluster. Contact Nutanix customer support for help in creating the
cluster using the standalone (bare metal) method.
Your workstation must be connected to the network on the same subnet as the nodes you want to
image. (Foundation does not require an IPMI connection or any special network port configuration to
image discovered nodes.) See Network Requirements on page 51 for general information about the
network topology and port access required for a cluster.
Determine the appropriate network (gateway and DNS server IP addresses), cluster (name, virtual IP
address), and node (Controller VM, hypervisor, and IPMI IP address ranges) parameter values needed
for installation.
1. Open a browser, go to the Nutanix support portal (see Downloading Installation Files on page 45),
and download the following files to your workstation.
FoundationApplet-offline.zip installation bundle from the Foundation download page.
Note: If you install from a workstation that has Internet access, you can forego downloading
this bundle and simply select the link to nutanix_foundation_applet.jnlp directly from the
support portal (see step 2). Otherwise, you must first download (and unpack) the installation
bundle.
nutanix_installer_package-version#.tar.gz from the AOS (NOS) download page. This is the
installation bundle used for imaging the desired AOS release.
hypervisor ISO if installing Hyper-V or ESXi. (An AHV ISO is included with Foundation.) You
must provide a supported Hyper-V or ESXi ISO (see Hypervisor ISO Images on page 49);
Hyper-V and ESXi ISOs are not available from the support portal.
2. Start the discovery applet by opening the nutanix_foundation_applet.jnlp file, either directly from the
support portal link or from the unpacked offline installation bundle.
Note: A security warning message may appear indicating this is from an unknown source.
Click the accept and run buttons to run the application.
3. Select (click the line for) a node to be imaged from the list and then click the Launch Foundation
button.
This launches the cluster creation GUI. The selected node is imaged first and is then used to
image the other nodes. Only nodes with a status field value of Free, meaning they are not currently
part of a cluster, can be selected.
Foundation searches the network subnet for unconfigured Nutanix nodes (factory prepared nodes that
are not part of a cluster) and then displays information about the discovered blocks and nodes in the
Discovered Nodes screen. (It does not display information about nodes that are powered off or in a
different subnet.) The discovery process normally takes just a few seconds.
Note: If you want Foundation to image nodes from an existing cluster, you must first either
remove the target nodes from the cluster or destroy the cluster.
To display just the available nodes, select the Show only new nodes option from the pull-down
menu on the right of the screen. (Blocks with unavailable nodes only do not appear, but a block with
both available and unavailable nodes does appear with the exclamation mark icon displayed for the
unavailable nodes in that block.)
To deselect nodes you do not want to image, uncheck the boxes for those nodes. Alternately, click
the Deselect All button to uncheck all the nodes and then select those you want to image. (The
Select All button checks all the nodes.)
Note: You can get help or reset the configuration at any time from the gear icon pull-down
menu (top right). Internet access is required to display the help pages, which are located in the
Nutanix support portal.
b. Click (check) the RF 3 button and then click the Save Changes button.
The window disappears and the RF button changes to Change RF (3), indicating the redundancy
factor is now set to 3.
6. Click the Next button at the bottom of the screen to configure cluster parameters (see Defining the
Cluster on page 14).
2. (optional) Click the Enable IPMI slider button to specify an IPMI address.
When this button is enabled, fields for IPMI global network parameters appear below. Foundation does
not require an IPMI connection, so this information is not required. However, you can use this option to
configure IPMI for your use.
c. CVM Memory: Select a memory size for the Controller VM from the pull-down list. For more
information about Controller VM memory configurations, see Controller VM Memory and vCPU
Configurations on page 54.
This field is set initially to default. (The default amount varies according to the node model type.)
The other options allow you to specify a memory size of 16 GB, 24 GB, 32 GB, or 64 GB. The
default setting represents the recommended amount for the model type. Assigning more memory
than the default might be appropriate in certain situations. See Controller VM Memory and vCPU
Configurations on page 54 for memory sizing recommendations.
g. Note: The following fields appear only if the IPMI button was enabled in the previous step.
i. IPMI Username: Enter the IPMI user name. The default user name is ADMIN.
j. IPMI Password: Enter the IPMI password. The default password is ADMIN.
Check the show password box to display the password.
4. (optional) Click the Enable Testing slider button to run the Nutanix Cluster Check (NCC) after the
cluster is created.
The NCC is a test suite that checks a variety of health metrics in the cluster. The results are stored in
the ~/foundation/logs/ncc directory.
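For example, after the tests run you can list the result files on the Controller VM that ran Foundation (a
minimal check; the directory is created only after NCC has run):
nutanix@cvm$ ls ~/foundation/logs/ncc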
5. Click the Next button at the bottom of the screen to configure the cluster nodes (see Setting Up the
Nodes on page 16).
1. In the Hostname and IP Range section, do the following in the indicated fields:
a. Hypervisor Hostname: Enter a base host name for the set of nodes. Host names should contain
only digits, letters, and hyphens.
b. CVM IP: Enter the starting IP address for the set of Controller VMs across the nodes.
Enter a starting IP address in the FROM/TO line of the CVM IP column. The entered address is
assigned to the Controller VM of the first node, and consecutive IP addresses (sequentially from
the entered address) are assigned automatically to the remaining nodes. Discovered nodes are
sorted first by block ID and then by position, so IP assignments are sequential. If you do not want
all addresses to be consecutive, you can change the IP address for specific nodes by updating the
address in the appropriate fields for those nodes.
Caution: The Nutanix high availability features require that both hypervisor and Controller
VM be in the same subnet. Putting them in different subnets reduces the failure protection
provided by Nutanix and can lead to other problems. Therefore, it is strongly recommended
that you keep both hypervisor and Controller VM in the same subnet.
d. IPMI IP (when enabled): Repeat the previous step for this field.
This sets the IPMI port IP addresses for all the nodes. This column appears only when IPMI is
enabled on the previous cluster setup screen.
2. In the Manual Input section, review the assigned host names and IP addresses. If any of the names or
addresses are not correct, enter the desired name or IP address in the appropriate field.
There is a section for each block with a line for each node in the block. The letter designation (A, B, C,
and D) indicates the position of that node in the block.
3. When all the host names and IP addresses are correct, click the Validate Network button at the bottom
of the screen.
This does a ping test to each of the assigned IP addresses to check whether any of those addresses
are being used currently.
If there are no conflicts (none of the addresses return a ping), the process continues (see Selecting
the Images on page 17).
If there is a conflict (one or more addresses returned a ping), this screen reappears with the
conflicting addresses highlighted in red. Foundation will not continue until the conflict is resolved.
Note: An AOS image may already be present (as in the above example). If one is and it is the
desired version, skip to the next step. If both the desired AOS and hypervisor (step 2) versions
are present, skip to step 3.
a. In the AOS (left) column, click the Upload Acropolis base software Tarball button and then click
the Choose File button.
b. In the file search window, find and select the AOS tar file downloaded earlier (see Discovering the
Nodes on page 11) and then click the Upload button.
Uploading an image file (AOS or hypervisor) may take some time (possibly a few minutes).
b. In the file search window, find and select the hypervisor ISO image downloaded earlier and then
click the Upload button.
Only approved hypervisor ISO images are permitted; Foundation will not image nodes with an
unapproved ISO image. To verify your ISO image is on the approved list, click the See Whitelist link.
Nutanix updates the list as new versions are approved, and the current version of Foundation may
not have the latest list. If your ISO does not appear on the list, click the Update the whitelist link to
download the latest whitelist from the Nutanix support portal.
c. [Hyper-V only] In the SKU (right) column, click the radio button for the Hyper-V version to use.
Three Hyper-V versions are supported: Free, Standard, Datacenter. This column appears only
when you select Hyper-V.
3. When both images are uploaded and ready, do one of the following:
To image the nodes and then create the new cluster, click the Create Cluster button at the bottom of
the screen.
To create the cluster without imaging the nodes, click the Skip Imaging button (in either case see
Creating the Cluster on page 20).
Note: The Skip Imaging option requires that all the nodes have the same hypervisor
and AOS version. This option is disabled if they are not all the same (with the exception of
any model NX-6035C "cold" storage nodes in the cluster that run AHV regardless of the
hypervisor running on the other nodes).
The status message for each node (in the Node Status section) displays the imaging percentage
complete and current step. The selected node (see Discovering the Nodes on page 11) is imaged
first. When that imaging is complete, the remaining nodes are imaged in parallel. The imaging process
takes about 30 minutes, so the total time is about an hour (30 minutes for the first node and another 30
minutes for the other nodes imaged in parallel). You can monitor overall progress by clicking the Log
link at the top, which displays the service.log contents in a separate tab or window. Click on the Log
link for a node to display the log file for that node in a separate tab or window.
Note: Simultaneous processing is limited to a maximum of 20 nodes. If the cluster contains
more than 20 nodes, add an extra 30 minutes of processing time for each additional group of 20 nodes.
When installation moves to cluster creation, the status message displays the percentage complete and
current step. Cluster creation happens quickly, but this step could take some time if you enabled the
post-creation tests. Click on the Log link for a cluster to display the log file for the cluster in a separate
tab or window.
2. When processing completes successfully, either open the Prism web console and begin configuring the
cluster (see Configuring a New Cluster on page 22) or exit from Foundation.
The Foundation service shuts down two hours after imaging. If you go to the cluster creation success
page after a long absence and the Export Logs link does not work (or your terminal went to sleep
and there is no response after refreshing it), you can point the browser to one of the Controller VM IP
addresses. If the Prism web console appears, installation completed successfully, and you can get the
logs from ~/data/logs/foundation on the node that was imaged first.
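For example, assuming 10.1.1.11 is the Controller VM IP address of the first imaged node (a placeholder
value), you could copy those logs to your workstation with a command such as:
user@host$ scp -r nutanix@10.1.1.11:~/data/logs/foundation ./foundation-logs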
Note: If nothing loads when you refresh the page (or it loads one of the configuration pages),
the web browser might have missed the hand-off between the node that starts imaging and the
first node imaged. This can happen because the web browser went to sleep, you closed the
browser, or you lost connectivity for some other reason. In this case, enter http://cvm_ip for
any Controller VM, which should open the Prism GUI if imaging has completed. If this does not
work, enter http://cvm_ip:8000/gui on each of the Controller VMs in the cluster until you see
the progress screen, from which you can continue monitoring progress.
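A quick way to test these URLs from a command line is to check the HTTP response with curl (assuming
curl is available on your workstation; 10.1.1.11 is a placeholder Controller VM address):
user@host$ curl -s -o /dev/null -w "%{http_code}\n" http://10.1.1.11
user@host$ curl -s -o /dev/null -w "%{http_code}\n" http://10.1.1.11:8000/gui
If a command returns an HTTP status code, the web server at that address is responding.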
3. If processing does not complete successfully, review and correct the problem(s), and then restart the
process.
If the progress bar turns red with a "There were errors in the installation" message and one or more
node or cluster entries have a red X in the status column, the installation failed at the node imaging or
cluster creation step. To correct such problems, see Fixing Imaging Problems on page 64. Clicking
the Back to config button returns you to the configuration screens to correct any entries. The default
per-node installation timeout is 30 minutes for ESXi or 60 minutes for Hyper-V and AHV, so you can
expect all the nodes (in each run of up to 20 nodes) to finish successfully or encounter a problem in that
amount of time.
Note: If an imaging problem occurs, it typically appears when imaging the first node. In that
case Foundation will not attempt to image the other nodes, so only the first node will be in an
unstable state. Once the problem is resolved, the first node can be re-imaged and then the
other nodes imaged normally.
1. Verify the cluster has passed the latest Nutanix Cluster Check (NCC) tests.
a. Check the installed NCC version and update it if a later version is available (see the "Software and
Firmware Upgrades" section).
b. Run NCC if you downloaded a newer version or did not run it as part of the install.
Running NCC must be done from a command line. Open a command window, log on to any
Controller VM in the cluster with SSH, and then run the following command:
nutanix@cvm$ ncc health_checks run_all
If the check reports a status other than PASS, resolve the reported issues before proceeding. If you
are unable to resolve the issues, contact Nutanix support for assistance.
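For reference, the SSH logon from a workstation shell might look like the following (10.1.1.11 is a
placeholder Controller VM IP address); after logging on, run the ncc command shown above:
user@host$ ssh nutanix@10.1.1.11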
Replace cluster_timezone with the timezone of the cluster (for example, America/Los_Angeles, Europe/
London, or Asia/Tokyo). Restart all Controller VMs in the cluster after changing the timezone. Because a
cluster can tolerate only a single Controller VM unavailable at any one time, restart the Controller VMs
in a series, waiting until one has finished starting before proceeding to the next. See the Command
Reference for more information about using the nCLI.
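As a sketch, the timezone is typically set with an nCLI command of the following form (this assumes the
standard cluster set-timezone syntax; confirm the exact syntax in the Command Reference for your AOS
version):
nutanix@cvm$ ncli cluster set-timezone timezone=America/Los_Angeles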
3. Specify an outgoing SMTP server (see the "Configuring an SMTP Server" section).
5. If the site security policy allows Nutanix support to collect cluster status information, enable the Pulse
feature (see the "Configuring Pulse" section).
This information is used by Nutanix support to diagnose potential problems and provide more informed
and proactive help.
6. Add a list of alert email recipients, or if the security policy does not allow it, disable alert emails (see the
"Configuring Email Alerts" section).
You also have the option to specify email recipients for specific alerts (see the "Configuring Alert
Policies" section).
7. If the site security policy allows automatic downloads to update AOS and other upgradeable cluster
elements, enable that feature (see the "Software and Firmware Upgrades" section).
Note: Allow access to the following through your firewall to ensure that automatic download of
updates can function:
*.compute-*.amazonaws.com:80
release-api.nutanix.com:80
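To confirm that the release server is reachable through the firewall, you can run a simple connectivity
check from a host on the management network (assuming curl is available); if the host is reachable, the
command returns an HTTP status line:
user@host$ curl -sI http://release-api.nutanix.com:80 | head -n 1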
9. For ESXi and Hyper-V clusters, add the hosts to the appropriate management interface.
vCenter: See the Nutanix vSphere Administration Guide.
SCVMM: See the Nutanix Hyper-V Administration Guide.
Physically install the Nutanix cluster at your site. See the Physical Installation Guide for your model type
for installation instructions.
Set up the installation environment (see Preparing Installation Environment on page 25).
Note: If you changed the boot device order in the BIOS to boot from a USB flash drive, you will
get a Foundation timeout error if you do not change the boot order back to virtual CD-ROM in
the BIOS.
Note: If STP (spanning tree protocol) is enabled, it can cause Foundation to timeout during the
imaging process. Therefore, disable STP before starting Foundation.
Note: Avoid connecting any device (that is, plugging it into a USB port on a node) that presents
virtual media, such as a CDROM. This could conflict with the Foundation installation when it tries
to mount the virtual CDROM hosting the install ISO.
Have ready the appropriate global, node, and cluster parameter values needed for installation. The use
of a DHCP server is not supported for Controller VMs, so make sure to assign static IP addresses to
Controller VMs.
Note: If the Foundation VM IP address set previously was configured in one (typically public)
network environment and you are imaging the cluster on a different (typically private) network
in which the current address is no longer correct, repeat step 13 in Preparing a Workstation on
page 25 to configure a new static IP address for the Foundation VM.
To image the nodes and create a cluster(s), do the following:
Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may
not reflect the latest features described in this section.)
a. Download necessary files and prepare a workstation (see Preparing a Workstation on page 25).
b. Connect the workstation and nodes to be imaged to the network (Setting Up the Network on
page 29).
2. Start the Foundation VM and configure global parameters (see Configuring Global Parameters on
page 31).
3. Configure the nodes to be imaged (see Configuring Node Parameters on page 34).
4. Select the images to use (see Configuring Image Parameters on page 38).
5. [optional] Configure one or more clusters to create and assign nodes to the clusters (see Configuring
Cluster Parameters on page 39).
6. Start the imaging process and monitor progress (see Monitoring Progress on page 41).
7. If a problem occurs during configuration or imaging, evaluate and resolve the problem (see
Troubleshooting on page 63).
8. [optional] Clean up the Foundation environment after completing the installation (see Cleaning Up After
Installation on page 44).
1. Prepare the workstation. Preparing the workstation can be done on or off site at any time prior to
installation. This includes downloading ISO images, installing Oracle VM VirtualBox, and using
VirtualBox to configure various parameters on the Foundation VM (see Preparing a Workstation on
page 25).
2. Set up the network. The nodes and workstation must have network access to each other through a
switch at the site (see Setting Up the Network on page 29).
Preparing a Workstation
A workstation is needed to host the Foundation VM during imaging. To prepare the workstation, do the
following:
Note: You can perform these steps either before going to the installation site (if you use a portable
laptop) or at the site (if you can connect to the web).
1. Get a workstation (laptop or desktop computer) that you can use for the installation.
The workstation must have at least 3 GB of memory (Foundation VM size plus 1 GB), 25 GB of disk
space (preferably SSD), and a physical (wired) network adapter.
2. Go to the Foundation and AOS download pages in the Nutanix support portal (see Downloading
Installation Files on page 45) and download the following files to a temporary directory on the
workstation.
Foundation_VM_OVF-version#.tar. This tar file includes the following files:
Foundation_VM-version#.ovf. This is the Foundation VM OVF configuration file for the version#
release, for example Foundation_VM-2.1.ovf.
Foundation_VM-version#-disk1.vmdk. This is the Foundation VM VMDK file for the version#
release, for example Foundation_VM-2.1-disk1.vmdk.
Note: Links to the VirtualBox files may not appear on the download page for every
Foundation version. (The Foundation 2.0 download page has links to the VirtualBox files.)
nutanix_installer_package-version#.tar.gz. This is the tar file used for imaging the desired AOS
release. Go to the AOS (NOS) download page on the support portal to download this file. (You can
download all the other files from the Foundation download page.)
3. Extract the Foundation_VM_OVF-version#.tar file into the temporary directory on the workstation.
Note: This assumes the tar command is available. If it is not, use the corresponding tar utility
for your environment.
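For example, on a Linux or Mac workstation the extraction looks like the following (substitute the actual
version number in the file name):
user@host$ tar -xf Foundation_VM_OVF-version#.tar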
4. Open the Oracle VM VirtualBox installer and install Oracle VM VirtualBox using the default options.
See the Oracle VM VirtualBox User Manual for installation and start up instructions (https://
www.virtualbox.org/wiki/Documentation).
Note: This section describes how to use Oracle VM VirtualBox to create a virtual environment.
Optionally, you can use an alternate tool such as VMware vSphere in place of Oracle VM
VirtualBox.
8. Click the File option of the main menu and then select Import Appliance from the pull-down list.
9. Find and select the Foundation_VM-version#.ovf file, and then click Next.
11. In the left column of the main screen, select Foundation_VM-version# and click Start.
The Foundation VM console launches and the VM operating system boots.
12. At the login screen, log in as the nutanix user with the password nutanix/4u.
The Foundation VM desktop appears (after it loads).
13. If you want to enable file drag-and-drop functionality between your workstation and the Foundation VM,
install the VirtualBox Guest Additions as follows:
a. On the VirtualBox window for the Foundation VM, select Devices > Insert Guest Additions CD
Image... from the menu.
A VBOXADDITIONS CD entry appears on the Foundation VM desktop.
b. Click OK when prompted to Open Autorun Prompt and then click Run.
d. After the installation is complete, press the return key to close the VirtualBox Guest Additions
installation window.
f. Reboot the Foundation VM by selecting System > Shutdown... > Restart from the Linux GUI.
g. After the Foundation VM reboots, select Devices > Drag 'n' Drop > Bidirectional from the menu on
the VirtualBox window for the Foundation VM.
14. Open a terminal session and run the ifconfig command to determine if the Foundation VM was able to
get an IP address from the DHCP server.
If the Foundation VM has a valid IP address, skip to the next step. Otherwise, configure a static IP as
follows:
Note: Normally, the Foundation VM needs to be on a public network in order to copy selected
ISO files to the Foundation VM in the next two steps. This might require setting a static IP
address now and setting it again when the workstation is on a different (typically private)
network for the installation (see Imaging Bare Metal Nodes on page 24).
c. In the Select Action box in the terminal window, select Device Configuration.
Note: Selections in the terminal window can be made using the indicated keys only. (Mouse
clicks do not work.)
e. In the Network Configuration box, remove the asterisk in the Use DHCP field (which is set by
default), enter appropriate addresses in the Static IP, Netmask, and Default gateway IP fields, and
then click the OK button.
f. Click the Save button in the Select a Device box and the Save & Quit button in the Select Action
box.
This saves the configuration and closes the terminal window.
16. If you intend to install ESXi or Hyper-V as the hypervisor, download the hypervisor ISO image into the
appropriate folder for that hypervisor.
ESXi ISO image: /home/nutanix/foundation/isos/hypervisor/esx
Hyper-V ISO image: /home/nutanix/foundation/isos/hypervisor/hyperv
Note: You must provide a supported ESXi or Hyper-V ISO image (see Hypervisor ISO Images
on page 49). You do not have to provide an AHV image because Foundation automatically
puts an AHV ISO into /home/nutanix/foundation/isos/hypervisor/kvm. However, if you want
to install a different version of AHV, download the ISO file from the Nutanix support portal (see
Downloading Installation Files on page 45).
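For example, assuming you downloaded an ESXi ISO named VMware-VMvisor-Installer-version#.iso (a
placeholder file name) to the Foundation VM home directory, you could move it into place with:
user@host$ mv ~/VMware-VMvisor-Installer-version#.iso /home/nutanix/foundation/isos/hypervisor/esx/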
1. Connect the first 1 GbE network interface of each node to a 1GbE Ethernet switch. The IPMI LAN
interfaces of the nodes must be in failover mode (factory default setting).
The exact location of the port depends on the model type. See the hardware manual for your model to
determine the port location.
(Nutanix NX Series) The following figure illustrates the location of the network ports on the back of
an NX-3050 (middle RJ-45 interface).
2. Connect the installation workstation (see Preparing a Workstation on page 25) to the same 1 GbE
switch as the nodes.
1. Click the Nutanix Foundation icon on the Foundation VM desktop to start the Foundation GUI.
Note: See Preparing Installation Environment on page 25 if Oracle VM VirtualBox is not
started or the Foundation VM is not running currently. You can also start the Foundation GUI by
opening a web browser and entering http://localhost:8000/gui/index.html.
The Global Configuration screen appears. Use this screen to configure network addresses.
Note: You can access help from the gear icon pull-down menu (top right), but this
requires Internet access. If necessary, copy the help URL to a browser with Internet access.
2. In the top section of the screen, enter appropriate values for the IPMI, hypervisor, and Controller VM in
the indicated fields:
Note: The parameters in this section are global and will apply to all the imaged nodes.
c. IPMI Username: Enter the IPMI user name. The default user name is ADMIN.
d. IPMI Password: Enter the IPMI password. The default password is ADMIN.
Check the show password box to display the password as you type it.
j. CVM Memory: Select a memory size for the Controller VM from the pull-down list. For more
information about Controller VM memory configurations, see Controller VM Memory and vCPU
Configurations on page 54.
This field is set initially to default. (The default amount varies according to the node model type.)
The other options allow you to specify a memory size of 16 GB, 24 GB, 32 GB, or 64 GB. The
default setting represents the recommended amount for the model type. Assigning more memory
than the default might be appropriate in certain situations. See Controller VM Memory and vCPU
Configurations on page 54 for memory sizing recommendations.
3. If you are using a flat switch (no routing tables) for installation and require access to multiple subnets,
check the Multi-Homing box in the bottom section of the screen.
When the box is checked, a line appears to enter Foundation VM virtual IP addresses. The purpose
of the multi-homing feature is to allow the Foundation VM to configure production IP addresses when
using a flat switch. Multi-homing assigns the Foundation VM virtual IP addresses on different subnets
so that you can use customer-specified IP addresses regardless of their subnet.
Enter unique IPMI, hypervisor, and Controller VM IP addresses. Make sure that the addresses
match the subnets specified for the nodes to be imaged (see Configuring Node Parameters on
page 34).
If this box is not checked, Foundation requires that either all IP addresses are on the same subnet or
that the configured IPMI, hypervisor, and Controller VM IP addresses are routable.
4. Click the Next button at the bottom of the screen to configure the nodes to be imaged (see Configuring
Node Parameters on page 34).
Note: Foundation discovers nodes on the same subnet as the Foundation VM only. Any nodes
to be imaged that reside on a different subnet must be added explicitly (see step 2). In addition,
Foundation discovers unconfigured Nutanix nodes only. If you are running Foundation on a
preconfigured block with an existing cluster and you want Foundation to image those nodes, you
must first destroy the existing cluster in order for Foundation to discover those nodes.
2. To image additional (bare metal) nodes, click the Add Blocks button.
A window appears to add a new block. Do the following in the indicated fields:
b. Nodes per Block: Enter the number of nodes to add in each block.
All added blocks get the same number of nodes. To add multiple blocks with differing nodes per
block, add the blocks as separate actions.
The window closes and the new blocks appear at the end of the discovered blocks table.
a. Block ID: Do nothing in this field because it is a unique identifier for the block that is assigned
automatically.
b. Position: Uncheck the boxes for any nodes you do not want to be imaged.
The value (A, B, and so on) indicates the node placement in the block such as A, B, C, D for a four-
node block. You can exclude the node in that block position from being imaged by unchecking the
appropriate box. You can check (or uncheck) all boxes by clicking Select All (or Unselect All) above
the table on the right.
c. IPMI MAC Address: For any nodes you added in step 2, enter the MAC address of the system
management interface in this field.
Foundation requires that you provide the MAC address for nodes it has not discovered. (This field is
read-only for discovered nodes and displays a value of "N/A" for those nodes.)
Caution: Any existing data on the node will be destroyed during imaging. If you are using
the add node option to re-image a previously used node, do not proceed until you have
saved all the data on the node that you want to keep.
For Nutanix NX-series nodes, the MAC address of the IPMI interface normally appears on a
label on the back of each node. (Make sure you enter the MAC address from the label that starts
with "IPMI:", not the one that starts with "LAN:".) The MAC address appears in the standard
form of six two-digit hexadecimal numbers separated by colons, for example 00:25:90:D9:01:98.
For Lenovo Converged HX Series nodes, the MAC address of the IMM interface appears under IMM
LLA2 on a label that is attached to the power supply. Separate the last six pairs of characters with a
colon between each pair to form the MAC address.
Note: If you are using a flat switch, the IP addresses must be on the same subnet as the
Foundation VM unless you configure multi-homing (see Configuring Global Parameters on
page 31).
To specify the IPMI addresses manually, go to the line for each node and enter (or update) the IP
address in that field.
To specify the IPMI addresses automatically, enter a starting IP address in the top line ("Start
IP address" field) of the IPMI IP column. The entered address is assigned to the IPMI port of
the first node, and consecutive IP addresses (starting from the entered address) are assigned
automatically to the remaining nodes. Discovered nodes are sorted first by block ID and then by
position, so IP assignments are sequential. If you do not want all addresses to be consecutive,
you can change the IP address for specific nodes by updating the address in the appropriate
fields for those nodes.
Note: Automatic assignment is not used for addresses ending in 0, 1, 254, or 255
because such addresses are commonly reserved by network administrators.
Caution: The Nutanix high availability features require that both hypervisor and Controller
VM be in the same subnet. Putting them in different subnets reduces the failure protection
provided by Nutanix and can lead to other problems. Therefore, it is strongly recommended
that you keep both hypervisor and Controller VM in the same subnet.
h. NX-6035C : Check this box for any node that is a model NX-6035C.
Model NX-6035C nodes are used for "cold" storage and run nothing but a Controller VM; user VMs
are not allowed. NX-6035C nodes run AHV (and so will be imaged with AHV) regardless of what
hypervisor runs on the other nodes in a cluster (see Configuring Image Parameters on page 38).
4. To check which IP addresses are active and reachable, click Ping Scan (above the table on the right).
Note: When re-imaging a configured set of nodes using the same network configuration, failure
to ping indicates a networking issue.
5. Click the Next button at the bottom of the screen to select the images to use (see Configuring Image
Parameters on page 38).
The Node Imaging configuration screen appears. This screen is for selecting the AOS package and
hypervisor image to use when imaging the nodes.
1. Select the hypervisor to install from the pull-down list on the left.
The following choices are available:
ESX. Selecting ESX as the hypervisor displays the NOS Package and Hypervisor ISO Image fields
directly below.
Hyper-V. Selecting Hyper-V as the hypervisor displays the NOS Package, Hypervisor ISO Image,
and SKU fields.
Caution: Nodes must have a 64 GB DOM to install Hyper-V. Attempts to install Hyper-
V on nodes with less DOM capacity will fail. See Hyper-V Installation Requirements on
page 57 for additional considerations when installing a Hyper-V cluster.
AHV. Selecting AHV as the hypervisor displays the NOS Package and Hypervisor ISO Image fields.
2. In the NOS Package field, select the NOS package to use from the pull-down list.
3. In the Hypervisor ISO Image field, select the hypervisor ISO image to use from the pull-down list.
Note: Click the Refresh hypervisor image link to display the current list of available images
in the ~/foundation/isos/hypervisor/[esx|hyperv] folder. If the desired hypervisor ISO
image does not appear in the list, you must download it to the workstation (see Preparing a
Workstation on page 25). Foundation automatically provides an ISO for AHV imaging in the
~/foundation/isos/hypervisor/kvm folder.
4. [Hyper-V only] In the SKU field, select the Hyper-V version to use from the pull-down list.
Three Hyper-V versions are supported: Free, Standard, Datacenter. This column appears only when
you select Hyper-V.
Note: See Hyper-V Installation Requirements on page 57 for additional considerations
when installing a Hyper-V cluster.
Video: Click here to see a video (MP4 format) demonstration of this procedure, or click here to
see a video demonstration of the complete cluster imaging procedure. (The videos may not reflect
the latest features described in this section.)
The Clusters configuration screen appears. This screen allows you to create one or more clusters and
assign nodes to those clusters. It also allows you to enable diagnostic and health tests after creating the
cluster(s).
1. To add a new cluster that will be created after imaging the nodes, click Create New Cluster in the
Cluster Creation section at the top of the screen.
This section includes a table that is empty initially. A blank line appears in the table for the new cluster.
Enter the following information in the indicated fields:
c. CVM DNS Servers: Enter the Controller VM DNS server IP address or URL.
Enter a comma separated list to specify multiple server addresses in this field (and the next two
fields).
d. CVM NTP Servers: Enter the Controller VM NTP server IP address or URL.
You must enter an NTP server that the Controller VMs can reach. If the NTP server is not reachable
or if the time on the Controller VMs is ahead of the current time, cluster services may fail to start.
Note: For Hyper-V clusters, the CVM NTP Servers parameter must be set to the Active
Directory domain controller.
e. Hypervisor NTP Servers: Enter the hypervisor NTP server IP address or URL.
2. To run cluster diagnostic and/or health checks after creating a cluster, check the appropriate boxes in
the Post Image Testing section.
Check the Diagnostics box to run a diagnostic utility on the cluster. The diagnostic utility analyzes
several performance metrics on each node in the cluster. These metrics indicate whether the cluster
is performing properly. The results are stored in the ~/foundation/logs/diagnostics directory.
Check the NCC Testing box to run the Nutanix Cluster Check (NCC) test suite. This is a suite of
tests that check a variety of health metrics in the cluster. The results are stored in the ~/foundation/
logs/ncc directory. (This test is available on NOS 4.0 or later. Checking the box does nothing on an
earlier NOS release.)
3. To assign nodes to a new cluster (from step 1), check the boxes for each node in the Block and Nodes
field to be included in that cluster.
A section for each new cluster appears in the bottom of the screen. Each section includes all the nodes
to be imaged. You can assign a node to any of the clusters (or leave it unassigned), but a node cannot
be assigned to more than one cluster.
Note: This assignment is to a new cluster only. Uncheck the boxes for any nodes you want to
add to an existing cluster, which can be done through the web console or nCLI at a later time.
4. When all settings are correct, click the Run Installation button at the top of the screen to start the
installation process (see Monitoring Progress on page 41).
Monitoring Progress
Before you begin: Complete Configuring Cluster Parameters on page 39 (or Configuring Image
Parameters on page 38 if you are not creating a cluster).
Video: Click here to see a video (MP4 format) demonstration of this procedure, or click here to
see a video demonstration of the complete cluster imaging procedure. (The videos may not reflect
the latest features described in this section.)
When all the global, node, and cluster settings are correct, do the following:
Note: If the IPMI port configuration fails for one or more nodes in the cluster, the installation
process stops before imaging any of the nodes. To correct a port configuration problem, see
Fixing IPMI Configuration Problems on page 63.
The status message for each node (in the Node Status section) displays the imaging percentage
complete and current step. Nodes are imaged in parallel, and the imaging process takes about 45
minutes. You can monitor overall progress by clicking the Log link at the top, which displays the
service.log contents in a separate tab or window. Click on the Log link for a node to display the log file
for that node in a separate tab or window.
Note: Simultaneous processing is limited to a maximum of 20 nodes. If the cluster contains
more than 20 nodes, the total processing time is about 45 minutes for each group of 20 nodes.
3. If the progress bar turns red with a "There were errors in the installation" message and one or more
node or cluster entries have a red X in the status column, the installation failed at the node imaging or
cluster creation step. To correct such problems, see Fixing Imaging Problems on page 64. Clicking
the Back to config button returns you to the configuration screens to correct any entries. The default
per-node installation timeout is 30 minutes for ESXi or 60 minutes for Hyper-V and AHV, so you can
expect all the nodes (in each run of up to 20 nodes) to finish successfully or encounter a problem in that
amount of time.
To remove the persistent information after an installation, go to a configuration screen and then click the
Reset Configuration option from the gear icon pull-down list in the upper right of the screen.
Clicking this button reinitializes the progress monitor, destroys the persisted configuration data, and
returns the Foundation environment to a fresh state.
1. Open a web browser and log in to the Nutanix Support portal: http://portal.nutanix.com.
2. Click Downloads from the main menu (at the top) and then select the desired page: AOS (NOS) to
download AOS files, Foundation to download Foundation files, or Phoenix to download Phoenix files.
3. To download a Foundation installation bundle (see Foundation Files on page 46), go to the
Foundation page and do one (or more) of the following:
To download the Java applet used in discovery (see Creating a Cluster on page 11), click link to
jnlp from online bundle. This downloads nutanix_foundation_applet.jnlp and allows you to start
discovery immediately.
To download an offline bundle containing the Java applet, click offline bundle. This downloads an
installation bundle that can be taken to environments which do not allow Internet access.
To download the standalone Foundation bundle (see Imaging Bare Metal Nodes on page 24), click
Foundation_VM-version#.ovf.tar. (The exact file name varies by release.) This downloads an
installation bundle that includes OVF and VMDK files.
To download an installation bundle used to upgrade standalone Foundation, click
foundation-version#.tar.gz (see Release Notes on page 6).
To download the current hypervisor ISO whitelist, click iso_whitelist.json.
Note: Use the filter option to display the files for a specific Foundation release.
4. To download an AOS release bundle, go to the AOS (NOS) page and click the button or link for the
desired release.
Clicking the Download version# button in the upper right of the screen downloads the latest
AOS release. You can download an earlier AOS release by clicking the appropriate Download
version# link under the ADDITIONAL RELEASES heading. The tar file to download is named
nutanix_installer_package-version#.tar.gz.
5. To download a Phoenix ISO image, go to the Phoenix page and click the file name link for the desired
Phoenix ISO image.
Note: Use the filter options to display the files for a specific Phoenix release and the desired
hypervisor type. Phoenix 2.1 or later includes support for all the hypervisors (AHV, ESXi, and
Hyper-V) in a single ISO while earlier versions have a separate ISO for each hypervisor type
(see Phoenix Files on page 47).
6. To download AHV, go to the Hypervisor Details page and click the file name link for the desired AHV
ISO file. AHV ISO files are named installer-el6.nutanix.version#.iso, where version# is the AHV
version.
Download the ISO file if you want to manually image each node by using the Phoenix procedure and
the AHV ISO file that is included with the standalone Foundation VM is not the version that you want.
For information about manually imaging nodes, see Appendix: Imaging a Node (Phoenix) on
page 69.
Foundation Files
The following table describes the files required to install Foundation. Use the latest Foundation version
available unless instructed by Nutanix customer support to use an earlier version.
Phoenix Files
The following table describes the Phoenix ISO files.
Note: Starting with release 2.1, Foundation no longer uses a Phoenix ISO file for imaging.
Phoenix ISO files are now used only for single node imaging (see Appendix: Imaging a Node
(Phoenix) on page 69) and are generated by the user from Foundation and AOS tar files. The
Phoenix ISOs available on the support portal are only for those who are using an older version of
Foundation (pre 2.1).
The following table describes the fields that appear in the iso_whitelist.json file for each ISO image.
iso_whitelist.json Fields
The following is a sample entry from the whitelist for an ESXi 6.0 image.
"iso_whitelist": {
  "478e2c6f7a875dd3dacaaeb2b0b38228": {
    "min_foundation": "2.1",
    "hypervisor": "esx",
    "min_nos": null,
    "friendly_name": "ESX 6.0",
    "version": "6.0",
    "unsupported_hardware": [],
    "compatible_versions": {
      "esx": ["^6\\.0.*"]
    }
  }
}
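The key for each entry ("478e2c6f7a875dd3dacaaeb2b0b38228" in the sample above) is the checksum of
the ISO file, which appears to be an MD5 hash. To look up your own ISO in the whitelist, you can compute
its checksum and search for it (the ISO file name below is a placeholder):
user@host$ md5sum VMware-VMvisor-Installer-6.0.0.iso
user@host$ grep "$(md5sum VMware-VMvisor-Installer-6.0.0.iso | cut -d' ' -f1)" iso_whitelist.json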
You will need the following information during the cluster configuration:
Default gateway
Network mask
DNS server
NTP server
You should also check whether a proxy server is in place in the network. If so, you will need the IP address
and port number of that server when enabling Nutanix support on the cluster.
New IP Addresses
Each node in a Nutanix cluster requires three IP addresses, one for each of the following components:
IPMI interface
Hypervisor host
Nutanix Controller VM
All Controller VMs and hypervisor hosts must be on the same subnet. No systems other than the Controller
VMs and hypervisor hosts can be on this network, which must be isolated and protected.
The following Nutanix network port diagrams show the ports that must be open for supported hypervisors.
The diagrams also show the ports that must be opened for infrastructure services.
The following tables show the minimum amount of memory and vCPU requirements and recommendations
for the Controller VM on each node for platforms that do not follow the default.
Nutanix Platforms
Dell Platforms
Lenovo Platforms
The following table lists the minimum amount of memory required when enabling features. The memory
size requirements are in addition to the default or recommended memory available for your platform
(Nutanix, Dell, Lenovo) as described in Controller VM Memory Configurations for Base Models. The total
additional memory for enabled features cannot exceed 16 GB.
Note: Default or recommended platform memory + memory required for each enabled feature =
total Controller VM Memory required
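For example (hypothetical numbers): a platform with a 24 GB default Controller VM memory plus an
enabled feature that requires an additional 8 GB needs a total of 32 GB of Controller VM memory.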
Requirements:
The primary domain controller version must be at least Windows Server 2008 R2.
Note: If you use a Volume Shadow Copy Service (VSS) based backup tool (for example,
Veeam), the functional level of Active Directory must be 2008 or higher.
Active Directory Web Services (ADWS) must be installed and running. By default, connections are
made over TCP port 9389, and firewall policies must enable an exception on this port for ADWS.
To test that ADWS is installed and running on a domain controller, log on by using a domain
administrator account in a Windows host other than the domain controller host that is joined to the
same domain and has the RSAT-AD-Powershell feature installed, and run the following PowerShell
command. If the command prints the primary name of the domain controller, then ADWS is installed and
the port is open.
> (Get-ADDomainController).Name
If the free version of Hyper-V is installed, the primary domain controller server must not block
PowerShell remoting.
To test this scenario, log in by using a domain administrator account in a Windows host and run the
following PowerShell command.
> Invoke-Command -ComputerName (Get-ADDomainController).Name -ScriptBlock {hostname}
If the command prints the name of the Active Directory server hostname, then PowerShell remoting to
the Active Directory server is not blocked.
SCVMM
Requirements:
The SCVMM version must be at least 2012 R2, and SCVMM must be installed on Windows Server 2012 or a
newer version.
The SCVMM server must allow PowerShell remoting.
To test this scenario, log in by using the SCVMM administrator account in a Windows host and run the
following PowerShell command on a Windows host other than the SCVMM host (for example,
run the command from the domain controller). If it prints the name of the SCVMM server, then
PowerShell remoting to the SCVMM server is not blocked.
> Invoke-Command -ComputerName scvmm_server -ScriptBlock {hostname} -Credential MYDOMAIN
\username
Replace scvmm_server with the SCVMM host name and MYDOMAIN with Active Directory domain
name.
Note: If the SCVMM server does not allow PowerShell remoting, you can perform the SCVMM
setup manually by using the SCVMM user interface.
The ipconfig command must run in a PowerShell window on the SCVMM server. To verify, run the
following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {ipconfig} -Credential
MYDOMAIN\username
Replace scvmm_server_name with the SCVMM host name and MYDOMAIN with Active Directory
domain name.
The SMB client configuration in the SCVMM server should have RequireSecuritySignature set to
False. To verify, run the following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {Get-SMBClientConfiguration |
FL RequireSecuritySignature}
If you are changing it from True to False, confirm that the policies on the SCVMM host have the
correct value. On the SCVMM host, run rsop.msc to review the resultant set of policy details, and verify
the value by navigating to Servername > Computer Configuration > Windows Settings > Security
Settings > Local Policies > Security Options > Policy Microsoft network client: Digitally sign
communications (always). The value displayed in RSOP must be Disabled or Not Defined for the
change to persist. Also, any group policies configured in the domain that apply to the SCVMM server
should be updated to make the same change.
Note: If security signing is mandatory, then you need to enable Kerberos in the Nutanix cluster.
In this case, it is important to ensure that the time remains synchronized between the Active
Directory server, the Nutanix hosts, and the Nutanix Controller VMs. The Nutanix hosts and the
Controller VMs set their NTP server as the Active Directory server, so it should be sufficient to
ensure that Active Directory domain is configured correctly for consistent time synchronization.
IP Addresses
DNS Requirements
Each Nutanix host must be assigned a name of 15 characters or less, which gets automatically added
to the DNS server during domain joining.
The Nutanix storage cluster needs to be assigned a name of 15 characters or less, which must be
added to the DNS server when the storage cluster is joined to the domain.
The Hyper-V failover cluster must be assigned a name of 15 characters or less, which gets
automatically added to the DNS server when the failover cluster is created.
After the Hyper-V configuration, all names must resolve to an IP address in the Nutanix hosts, the
SCVMM server (if applicable), or any other host that needs access to the Nutanix storage, for example,
a host running the Hyper-V Manager.
Any host that needs to access Nutanix storage must run at least Windows 8 or later version if it is a
desktop client, and Windows 2012 or later version if it is running Windows Server. This requirement is
because SMB 3.0 support is required for accessing Nutanix storage.
The IP address of the host must be whitelisted in the Nutanix storage cluster.
When applying Windows updates to the Nutanix hosts, the hosts should be restarted one at a time,
ensuring that the Nutanix services come up fully in the Controller VM of the restarted host before updating
the next host. This can be accomplished by using Cluster Aware Updating and using a Nutanix-provided
script, which can be plugged into the Cluster Aware Update Manager as a pre-update script. This pre-
update script ensures that the Nutanix services go down on only one host at a time ensuring availability
of storage throughout the update procedure. For more information about Cluster Aware Updating, see
the Nutanix Hyper-V Administration guide.
Note: Ensure that automatic Windows updates are not enabled for the Nutanix hosts in the
domain policies.
3. Press the Delete key during boot up when prompted to enter the BIOS setup mode.
The BIOS Setup Utility screen appears.
6. Select Update IPMI LAN Configuration, press Enter, and then select Yes in the pop-up window.
7. Select Configuration Address Source, press Enter, and then select Static in the pop-up window.
9. Select Subnet Mask, press Enter, and then enter the corresponding subnet mask value in the pop-up
window.
10. Select Gateway IP Address, press Enter, and then enter the IP address for the node's network
gateway in the pop-up window.
11. When all the field entries are correct, press the F4 key to save the settings and exit the BIOS setup
mode.
1. Go to the Block & Node Config screen and review the problem IP address for the failed nodes (nodes
with a red X next to the IPMI address field).
Hovering the cursor over the address displays a pop-up message with troubleshooting information. This
can help you diagnose the problem. See the service.log file (in /home/nutanix/foundation/log) and
the individual node log files for more detailed information.
2. When you have corrected all the problems and are ready to try again, click the Configure IPMI button
at the top of the screen.
3. Repeat the preceding steps as necessary to fix all the IPMI configuration errors.
4. When all nodes have green check marks in the IPMI address column, click the Image Nodes button at
the top of the screen to begin the imaging step.
If you cannot fix the IPMI configuration problem for one or more of the nodes, you can bypass those
nodes and continue to the imaging step for the other nodes by clicking the Proceed button. In this case
you must configure the IPMI port address manually for each bypassed node (see Setting IPMI Static IP
Address on page 61).
1. See the individual log file for any failed nodes for information about the problem.
Controller VM location for Foundation logs: ~/data/logs/foundation and ~/data/logs/foundation.out[.timestamp]
Bare metal location for Foundation logs: /home/nutanix/foundation/log
3. Repeat the preceding steps as necessary to fix all the imaging errors.
If you cannot fix the imaging problem for one or more of the nodes, you can image those nodes one at a
time (see Appendix: Imaging a Node (Phoenix) on page 69).
Installation Issues
My installation hangs, and the service log complains about type detection.
Verify that all of your IPMI IPs are reachable from the Foundation VM. (On rare occasions the IPMI IP
assignment can take some time.) If you get a complaint about authentication, double-check your
password. If the problem persists, try resetting the BMC.
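For example, from the Foundation VM you can confirm reachability and watch the service log (the
IPMI address below is a placeholder; repeat the ping for each configured IPMI IP):
$ ping -c 3 192.0.2.10
$ tail -n 50 /home/nutanix/foundation/log/service.log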
Installation fails with an error where Foundation cannot ping the configured IPMI IP addresses.
The diagnostics check box was selected to run after installation, but that test (diagnostics.py) does
not complete (it hangs, fails, or times out).
Running this test can result in timeouts or low IOPS if you are using 1G cables. Such cables might not
provide the performance necessary to run this test at a reasonable speed.
Foundation seems to be preparing the ISOs properly, but the nodes boot into <previous hypervisor>
and the install hangs.
The boot order for one or more nodes might be set incorrectly to select the USB over SATA DOM as the
first boot device instead of the CDROM. To fix this, boot the nodes into BIOS mode and either select
"restore optimized defaults" (F3 as of BIOS version 3.0.2) or give the CDROM boot priority. Reboot the
nodes and retry the installation.
I have misconfigured the IP addresses in the Foundation configuration page. How long is the timeout for
the call back function, and is there a way I can avoid the wait?
The call back timeout is 60 minutes. To stop the Foundation process and restart it, open up the terminal
in the Foundation VM and enter the following commands:
$ sudo /etc/init.d/foundation_service stop
$ cd ~/foundation/
$ mv persisted_config.json persisted_config.json.bak
$ sudo /etc/init.d/foundation_service start
Refresh the Foundation web page. If the nodes are still stuck, reboot them.
If more space is needed, delete some of the Phoenix ISO images from the Foundation VM.
I keep seeing the message "tar: Exiting with failure status due to previous errors'tar rf /home/
nutanix/foundation/log/archive/log-archive-20140604-131859.tar -C /home/nutanix/foundation ./
persisted_config.json' failed; error ignored."
[Hyper-V] I cannot reach the CVM console via ssh. How do I get to its console?
See KB article 1701 (https://portal.nutanix.com/#/page/kbs/details?targetId=kA0600000008fJhCAI).
[ESXi] Foundation is booting into pre-install Phoenix, but not the ESXi installer.
Check the BIOS version and verify it is supported. If it is not a supported version, upgrade it. See KB
article 1467 ( https://portal.nutanix.com/#/page/kbs/details?targetId=kA0600000008dDxCAI).
I get "This Kernel requires an x86-64 CPU, but only detected an i686 CPU" when trying to boot the VM
on VirtualBox.
The VM needs to be configured to expose a 64-bit CPU. For more information, see https://
forums.virtualbox.org/viewtopic.php?f=8&t=58767.
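If the VirtualBox settings UI does not let you select a 64-bit OS type, one possible fix (a sketch
only; the VM name is a placeholder, the VM must be powered off, and hardware virtualization must
also be enabled in the host BIOS) is to enable long mode from the host command line:
$ VBoxManage modifyvm "Foundation_VM" --longmode on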
I am running the network setup script, but I do not see eth0 when I run ifconfig.
This can happen when you make changes to your VirtualBox network adapters. VirtualBox typically
creates a new interface (eth1, then eth2, and so on) to accommodate your new settings. To fix this, run
the following commands:
$ sudo rm /etc/udev/rules.d/70-persistent-net.rules
$ sudo shutdown -r now
This should reboot your machine and reset your adapter to eth0.
I have plugged in the Ethernet cables according to the directions and I can reach the IPMI interface, but
discovery is not finding the nodes to image.
Your Foundation VM must be in the same broadcast domain as the Controller VMs to receive their IPv6
link-local traffic. If you are installing on a flat 1G switch, ensure that the 10G cables are not plugged in.
(If they are, the Controller VMs might choose to direct their traffic over that interface and never reach
your Foundation VM.) If you are installing on a 10G switch, ensure that only the IPMI 10/100 port and
the 10G ports are connected.
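As a quick check from the Foundation VM (the interface name is an assumption; adjust it for your
VM), ping the IPv6 all-nodes multicast address to list the link-local neighbors that share the
broadcast domain:
$ ping6 -c 3 -I eth0 ff02::1
Responses should include the link-local addresses of the Controller VMs; if none appear, discovery
traffic cannot reach the Foundation VM.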
1. Set up the installation environment (see Preparing the Controller VM ISO Image on page 70).
2. Attach the hypervisor ISO image (see Installing a Hypervisor (Nutanix NX Series Platforms) on
page 71).
4. Attach the Controller VM ISO image (see Attaching the Controller VM Image (Nutanix NX Series
Platforms) on page 77).
5. Install the Nutanix Controller VM and provision the hypervisor (see Installing the Controller VM on
page 83).
Note: Imaging a node using Phoenix is restricted to Lenovo sales engineers, support engineers,
and partners.
1. Create the Controller VM ISO image (see Preparing the Controller VM ISO Image on page 70).
5. Install the Controller VM and provision the hypervisor (see Installing the Controller VM on page 83).
1. Prepare the installation environment as described in Preparing Installation Environment. (You do not
have to perform the steps described in Setting Up the Network, because you need the Foundation VM
only to create the images you need.)
3. Navigate to the /home/nutanix/foundation/nos directory and unpack the compressed AOS tar archive.
$ gunzip nutanix_installer_package-version#.tar.gz
Note: If either the tar or gunzip command is not available, use the corresponding tar or
gunzip utility for your environment.
Replace aos_tar_archive with the full path to the AOS tar archive,
nutanix_installer_package-version#.tar. The command creates a Phoenix ISO image in the current
directory. The Phoenix ISO is named phoenix-version#_NOS-version#.iso, and is the file to use when
Installing the Controller VM on page 83.
Caution: The node must have a 64 GB DOM to install Hyper-V. Attempts to install Hyper-V on a
node with less DOM capacity will fail.
To install a hypervisor on a new or replacement node in the field, do the following:
Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may
not reflect the latest features described in this section.)
1. Verify you have access to the IPMI interface for the node.
a. Connect the IPMI port on that node to the network if it is not already connected.
A 1 or 10 GbE port connection is not required for imaging the node.
b. Assign an IP address (static or DHCP) to the IPMI interface on the node if it is not already assigned.
To assign a static address, see Setting IPMI Static IP Address on page 61.
4. Select Console Redirection from the Remote Console drop-down list of the main menu, and then
click the Launch Console button.
5. Select Virtual Storage from the Virtual Media drop-down list of the remote console main menu.
6. Click the CDROM&ISO tab in the Virtual Storage window, select ISO File from the Logical Drive Type
field drop-down list, and click the Open Image button.
7. In the browse window, go to where the hypervisor ISO image is located (/home/nutanix/foundation/
isos/hypervisor/kvm for AHV or wherever you downloaded the ESXi or Hyper-V ISO image), select
that file, and then click the Open button.
8. Click the Plug In button and then the OK button to close the Virtual Storage window.
What to do next: Complete installation by following the steps for the hypervisor:
Installing ESXi (Nutanix NX Series Platforms) on page 73
Installing Hyper-V (Nutanix NX Series Platforms) on page 74
Installing AHV (Nutanix NX Series Platforms) on page 77
1. Click Continue at the installation screen and then accept the end user license agreement on the next
screen.
2. On the Select a Disk screen, select the SATADOM as the storage device, click Continue, and then click
OK in the confirmation window.
Note: The root password must be nutanix/4u or the installation will fail.
5. Review the information on the Install Confirm screen and then click Install.
6. When the Installation Complete screen appears, go back to the Virtual Storage screen, click the Plug
Out button, and then return to the Installation Complete screen and click Reboot.
What to do next: After the system reboots, you can install the Nutanix Controller VM and provision the
hypervisor (see Installing the Controller VM on page 83).
Note: Do not assign host IP addresses before installing the Nutanix Controller VM. Doing so can
cause the Controller VM install to fail.
2. At the Press any key to boot from CD or DVD prompt, press SHIFT+F10.
A command prompt appears.
c. Find the disk in the displayed list that is about 60 GB (only one disk will be that size). Select that disk
and then run the clean command:
select disk number
clean
d. Create and format a primary partition (size 1024 MB with a FAT32 file system).
create partition primary size=1024
select partition 1
format fs=fat32 quick
f. Assign the drive letter "C" to the DOM install partition volume.
list volume
list partition
This displays a table of logical volumes and their associated drive letter, size, and file system type.
Locate the volume with an NTFS file system and size of approximately 50 GB. If this volume (which
is the DOM install partition) is drive letter "C", go to the next step.
Otherwise, do one of the following:
If drive letter "C" is assigned currently to another volume, enter the following commands to
remove the current "C" drive volume and reassign "C" to the DOM install partition volume:
select volume cdrive_volume_id#
remove
select volume dom_install_volume_id#
assign letter=c
If drive letter "C" is not assigned currently, enter the following commands to assign "C" to the
DOM install partition volume:
select volume dom_install_volume_id#
assign letter=c
b. In the language selection screen that reappears, again just click the Next button.
c. In the install screen that reappears click the Install now button.
d. In the operating system screen, select Windows Server 2012 Datacenter (Server Core
Installation) and then click the Next button.
f. In the type of installation screen, select Custom: Install Windows only (advanced).
g. In the where to install screen, select Partition 2 (the NTFS partition) of the DOM disk you just
formatted and then click the Next button.
Ignore the warning about free space. The installation location is Drive 6 Partition 2 in the example.
5. After Windows boots up, press Ctrl-Alt-Delete and then log in as Administrator when prompted.
7. Install the Nutanix Controller VM and provision the hypervisor (see Installing the Controller VM on
page 83).
This causes a reboot and the firstboot script to run, after which the host will reboot two more times.
This process can take substantial time (possibly 15 minutes) without any progress indicators. To monitor
progress, log into the VM after the initial reboot and enter the command notepad C:\Program Files
\Nutanix\Logs\first_boot.log. This displays a (static) snapshot of the log file. Repeat this command
as desired to see an updated version of the log file.
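Assuming PowerShell is available on the host, you can also follow the log continuously instead of
re-running notepad (same path as above):
> Get-Content "C:\Program Files\Nutanix\Logs\first_boot.log" -Tail 20 -Wait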
Note: A d:\firstboot_fail file appears when this process fails. If that file is not present, the
process is continuing (if slowly).
3. After the system reboots, log back into the IPMI console, go to the CDROM&ISO tab in the Virtual
Storage window, select the AHV ISO file, and click the Plug Out button (and then the OK button) to
unmount the ISO (see Installing a Hypervisor (Nutanix NX Series Platforms) on page 71).
What to do next: Install the Nutanix Controller VM and provision the hypervisor (see Attaching the
Controller VM Image (Nutanix NX Series Platforms) on page 77).
1. Verify you have access to the IPMI interface for the node.
a. Connect the IPMI port on that node to the network if it is not already connected.
A 1 or 10 GbE port connection is not required for imaging the node.
b. Assign an IP address (static or DHCP) to the IPMI interface on the node if it is not already assigned.
To assign a static address, see Setting IPMI Static IP Address on page 61.
Note: The following steps might vary depending on the IPMI version on the node.
4. Select Console Redirection from the Remote Console drop-down list of the main menu, and then
click the Launch Console button.
5. Select Virtual Storage from the Virtual Media drop-down list of the remote console main menu.
6. Click the CDROM&ISO tab in the Virtual Storage window, select ISO File from the Logical Drive Type
field drop-down list, and click the Open Image button.
7. In the browse window, go to where the phoenix-version#_NOS-version#.iso file is located (see step 3 in
Installing a Hypervisor (Nutanix NX Series Platforms) on page 71), select that file, and then click the
Open button.
8. Click the Plug In button and then the OK button to close the Virtual Storage window.
9. In the remote console main menu, select Set Power Reset in the Power Control drop-down list.
This causes the system to reboot using the selected Phoenix image. The Nutanix Installer screen
appears after rebooting.
What to do next: Install the Controller VM by following Installing the Controller VM on page 83.
1. Verify you have access to the IMM interface for the node.
a. Connect the IMM port on that node to the network if it is not already connected.
A data network connection is not required for imaging the node.
b. Assign an IP address (static or DHCP) to the IMM interface on the node if it is not already assigned.
Refer to the Lenovo Converged HX series documentation for instructions on setting the IMM IP
address.
4. Select Remote Control and then click the Start remote control in single-user mode button.
Click Continue in the Security Warning dialog box and then Run in the Java permission dialog box
that appears.
7. Click the Add Image button, go to where the ISO image is located, select that file, and then click Open.
9. Select Tools > Power > Reboot (if the system is running) or On (if the system is turned off).
The system starts using the selected image.
1. On the boot menu, select the installer and wait for it to load.
2. Press Enter to continue then F11 to accept the end user license agreement.
3. On the Select a Disk screen, select the approximately 100 GiB ServeRAID device as the storage device, click
Continue, and then click OK in the confirmation window.
4. In the keyboard layout screen, select a layout (such as US Default) and then click Continue.
Note: The root password must be nutanix/4u or the installation will fail.
6. Review the information on the Install Confirm screen and then click Install.
The installation begins and a dynamic progress bar appears.
7. When the Installation Complete screen appears, in the Video Viewer window click Virtual Media >
Unmount All and accept the warning message.
What to do next: After the system reboots, you can install the Nutanix Controller VM and provision the
hypervisor (see Installing the Controller VM on page 83).
Note: Do not assign host IP addresses before installing the Nutanix Controller VM. Doing so can
cause the Controller VM install to fail.
3. After the system reboots, log back into the IMM console and unmount the AHV ISO. In the Video Viewer
window click Virtual Media > Unmount All and accept the warning message.
What to do next: Attach the Controller VM ISO image, install the Nutanix Controller VM, and configure
the hypervisor (see Attaching an ISO Image (Lenovo Converged HX Series Platforms) on page 80 and
Installing the Controller VM on page 83).
To install the Controller VM (and provision the hypervisor) on a new or replacement node, do the following:
a. Review the values in the upper eight fields to verify they are correct, or update them if necessary.
Only the Block ID, Node Serial, Node Position, and Node Cluster ID fields can be edited in this
screen. Node Position must be A for all single-node blocks.
Caution: Do not select Clean CVM if you are replacing a node or SATA DOM because
this option cleans the disks as part of the process, which means existing data will be lost.
If you are instructed to do so by Nutanix customer support, select Repair CVM. This option is for
repairing certain problem conditions. Ignore this option unless Nutanix customer support instructs
you to select it.
c. When all the fields are correct, click the Start button.
On ESXi and AHV, the node restarts with the new image, additional configuration tasks run, and then
the host restarts again. Wait until this stage completes (typically 15-30 minutes depending on the
hypervisor) before accessing the node. No additional steps need to be performed.
This causes a reboot and the firstboot script to run, after which the host will reboot two more times.
This process can take substantial time (possibly 15 minutes) without any progress indicators. To
monitor progress, log into the VM after the initial reboot and enter the command notepad C:\Program
Files\Nutanix\Logs\first_boot.log. This displays a (static) snapshot of the log file. Repeat this
command as desired to see an updated version of the log file.
Note: A d:\firstboot_fail file appears if this process fails. If that file is not present, the
process is continuing (if slowly).