EMC VSPEX
Abstract
This document describes the EMC VSPEX Proven
Infrastructure solution for private cloud deployments with
EMC VPLEX/VE, VMware vSphere, and EMC
VNXe3200 Unified Storage System for up to 125 virtual
machines.
June 2014
Contents
1. EXECUTIVE SUMMARY
1.1 Audience
1.2 Business Needs
2. DOCUMENT PURPOSE
3. SOLUTION OVERVIEW
4. MOBILITY
5. AVAILABILITY
6. SOLUTION ARCHITECTURE
6.1 Overview
6.2 Solution configurations
7.
7.1
7.2 VPLEX/VE Configuration Manager Web Application
7.3
7.4
7.5
8. SYSTEM REQUIREMENTS
8.1
8.2
8.3 Network requirements
8.4 Storage requirements
9. PRE-DEPLOYMENT PLANNING
9.1
9.2
9.3
9.4 Networking requirements for deployment
9.5
9.6
9.7
9.8
10.
10.1 Overview
10.2 OVF Deployment
10.3
10.4 ESXi Hosts
10.5 Network Setup - Virtual Switches and Ports
10.6 Security Certificates
10.7
10.8
10.9
10.10
11.
11.1
11.2
11.3
11.4
11.5
11.6
12.
13. APPENDIX-B - REFERENCES
13.1 EMC documentation
13.2 Other documentation
1. Executive Summary
Modern business growth requires increased application availability for
continuous 24x7 operations. This requirement applies to large,
datacenter-scale workloads as well as to small, single-application
workloads. By using storage virtualization technologies, companies can
lower costs and improve application availability and mobility. For smaller
deployments,
where application availability is a requirement, EMC VPLEX Virtual Edition
Standard (VPLEX/VE) is a new storage virtualization product in the EMC
VPLEX family that provides increased availability and mobility to VMware
virtual machines and the applications they run.
This document provides up-front software and hardware material lists,
step-by-step guidance and worksheets, and verified deployment steps to
implement a VPLEX/VE solution with a VSPEX Private Cloud that is built on
the next-generation EMC VNXe3200 Unified Storage System and supports
up to 125 virtual machines.
EMC VPLEX/VE is a unique virtualized storage technology that federates
data located on heterogeneous storage systems, allowing the storage
resources in multiple data centers to be pooled together and accessed
anywhere. Using the virtualization infrastructure provided by VMware
vSphere, VPLEX/VE enables you to use the AccessAnywhere feature, which
provides cache-coherent active-active access to data across two VPLEX/VE
sites within a geographical region, from a single management view.
VPLEX/VE requires four ESXi servers at each location. The servers will
host the vDirectors and other VPLEX/VE components. A vDirector is a Linux
virtual machine running the virtualization storage software. These servers
may also be used to run a portion or all of your Private Cloud, thus
eliminating the need to purchase additional servers. Alternatively, you can
leverage an existing virtualization infrastructure and the resource
management capabilities provided by VMware by importing the VPLEX/VE
servers into the same vCenter that manages your data center. After you
deploy and configure VPLEX/VE, you can start to use its features from a
plug-in installed on the VMware vCenter Server. This plug-in is the single
gateway to all the functionalities of VPLEX/VE.
2. Document Purpose
This document provides an initial introduction to using VPLEX/VE with the
VSPEX proven architecture, explains how to modify the architecture for
specific engagements, and gives instructions on how to effectively deploy
and monitor the overall system.
This document applies to VSPEX deployed with EMC VPLEX/VE. The details
provided in this white paper are based on the following configurations:
- ESXi clusters are within 10 milliseconds (ms) of each other for VMware HA.
3. Solution Overview
The EMC VSPEX Private Cloud for VMware vSphere coupled with EMC
VPLEX/VE represents the next-generation architecture for data mobility
and information access. This architecture is based on EMC's more than 20
years of expertise in designing, implementing, and perfecting
enterprise-class intelligent cache and distributed data protection solutions.
The combined VSPEX VPLEX/VE solution provides a complete system
architecture capable of supporting up to 125 reference virtual machines
with a redundant server and network topology and highly available storage
within or across geographically dispersed datacenters. A reference virtual
machine is a standardized unit of compute and storage capacity that the
VSPEX program uses to size and validate the solution.
4. Mobility
EMC VPLEX/VE enables you to move data located on the storage arrays
non-disruptively. Using the VMware vMotion features, you can move the
application virtual machines that use the virtual volumes of VPLEX/VE
between two sites. VPLEX/VE simplifies your data center management and
eliminates outages when you migrate data or refresh technology. With
VPLEX/VE, you can:
Move your data and perform the technology refresh tasks using the twoway data exchange between locations.
10
Move data between sites, over distance, while the data remains online
and available during the move.
Move extents from a very busy storage volume shared by other busy
extents.
11
5. Availability
The AccessAnywhere feature of VPLEX/VE ensures cache-coherent
active-active access to data across VPLEX/VE sites. The features of
VPLEX/VE allow the highest possible resiliency in the event of a site
outage. The data
is protected in the event of disasters or failure of components in your data
centers. With VPLEX/VE, the applications can withstand failures of storage
arrays and site components. The VPLEX/VE components are not disrupted
by a sequential failure of up to two vDirectors in a site. The failure of a
VPLEX/VE site or an inter-site partition is tolerated to the extent that the
site configured with site bias continues to access the storage infrastructure.
This essentially means that if a storage array is unavailable, another
storage array in the system continues to serve the I/O.
6. Solution Architecture
6.1 Overview
This VSPEX solution for VMware vSphere Private Cloud with EMC VNXe3200
Unified Storage System and EMC VPLEX/VE validates one configuration
with up to 125 virtual machines. The defined configuration forms the basis
of creating this custom solution.
VPLEX/VE leverages the virtualization capabilities of VMware vSphere,
which enables you to federate your data storage in a virtualized
environment. VPLEX/VE provides storage federation for operating systems
and applications that support clustered file systems in the virtual server
environments with VMware ESXi.
A VPLEX/VE deployment spans across two sites. Each site contains a vApp
that is made up of four VPLEX/VE director virtual machines and a
management server virtual machine.
Each vDirector must be deployed on a unique ESXi host. The vManagement
Server must also be installed on an ESXi host. The vManagement Server
can share an ESXi host with a vDirector. All of the ESXi hosts at both sites
must be members of the same stretched ESXi cluster. Each vDirector of a
VPLEX/VE system runs as a virtual machine hosted on an ESXi host.
All the ESXi hosts of a VPLEX/VE system must be part of one ESXi cluster.
VPLEX/VE is a virtualized software application; the physical components to
run the system are provided by the underlying VMware infrastructure.
The communication between the components of a VPLEX/VE system
happens through a set of virtual ports. These ports are created out of the
virtual network adapters of vDirectors and vManagement Server. A
vDirector in a VPLEX/VE site has virtual ports as follows:
- Two ports for the back-end IP SAN interface, which facilitates communication with the arrays to which it is connected.
- Two ports for the front-end IP SAN interface, which facilitates communication with hosts that initiate the I/O.
- Two ports for the Intra-Site (Local COM) interface, which facilitates communication with other vDirectors in the same site.
- Two ports for the Inter-Site (WAN COM) interface, which facilitates communication with vDirectors in the remote site.
- One virtual port for communicating with the vManagement Server in the other site. This port is also used for configuring the system through a web browser.
Component                 Configuration
VMware vSphere servers    CPU
                          Memory
Network                   Minimum switching capacity:
                          - 2 physical switches
                          - 2 x 10 GbE ports per VMware vSphere server
                          - 1 x 1 GbE port per storage processor for management
                          - 2 ports per VMware vSphere server, for storage network
                          - 2 ports per SP, for storage data

Component                 Configuration
Storage                   2 x EMC VNXe3200
Shared infrastructure

Software                  Configuration
vCenter Server            Standard Edition
EMC VNXe                  VNXe OE 3.0
EMC VPLEX/VE              VPLEX/VE-2.1
This solution is built upon the Private Cloud solution highlighted in EMC
VSPEX Private Cloud: VMware vSphere 5.5 for up to 125 Virtual Machines.
Additional requirements for VPLEX/VE are listed below:
- VMware servers: VPLEX/VE requires 6 virtual CPU cores and 16 GB of virtual memory to support vDirector and vManagement Server operations on each server.
- Storage: Additional details can be found in the EMC VPLEX/VE for VMware vSphere Product Guide.
The VPLEX/VE vCenter Inventory module allows you to create and manage
distributed datastores, which are backed by VPLEX/VE distributed virtual
volumes. VPLEX/VE Actions are integrated into the vSphere Actions menu.
From here you can create new distributed datastores using a guided
wizard. You can manage the datastore capacity and migrate the backing
storage to a new array with no application downtime. From the vCenter
inventory, you see the distributed device backing for a distributed
datastore, as well as the two physical backing devices on the storage
arrays at each site.
The vCenter Server module includes automated health checks with event
and alarm management. Automated health checks ensure that the system
is configured and operating properly for high availability (HA). The
VPLEX/VE HA feature ensures that applications have optimal access to their
data through redundancy in management control paths and data I/O paths.
When errors are detected in the VPLEX/VE System or the surrounding
vSphere infrastructure, the system will generate vCenter events. Those
events and the associated alarms will appear in the vSphere client.
7.2 VPLEX/VE Configuration Manager Web Application
The VPLEX/VE Configuration Manager is a web application used to
configure your VPLEX/VE system. The Setup Wizard guides you through
the initial configuration settings for the system. You will connect
VPLEX/VE to the vCenter Server, the vSphere cluster, the virtual networks,
and the storage arrays.
After the initial deployment, you will run the Setup Wizard to configure the
system. The VPLEX/VE vDirectors will be distributed across 4 different ESXi
hosts using 4 different datastores at each site. Note that the vManagement
Server can share an ESXi host with a vDirector.
This configuration provides the second aspect of the high availability
architecture: a single server failure will not cause a system outage.
VPLEX/VE can continue to provide data access with one vDirector alive per
site, and one vManagement Server alive for both sites.
Figure 6 - VPLEX/VE Initial OVF Deployment onto Four Hosts per Site
VMware vMotion is used to move the vDirector virtual machines during the
configuration process. The process will create DRS affinity rules to keep
each vDirector on its assigned ESXi host.
- vDS1 and vDS2 are used for WAN COM traffic for Site-1 and Site-2. These switches should be connected to vNICs from all ESXi hosts that are running VPLEX/VE components at both sites.
- vDS3 and vDS4 are used for the Site-1 Front-end SAN (ESXi-consumer-facing), Back-end SAN (Array-facing), and Local COM (VPLEX/VE Intra-Site). These switches should be connected to vNICs from all the ESXi hosts that are part of the site.
- vDS6 and vDS7 (not shown) are used for Site-2, with the same setup as vDS3 and vDS4.
Each connection type requires two paths for redundancy and high
availability. In the illustration, you see two connections for communication
within the site (blue and green boxes) and two connections for inter-site or
WAN COM (yellow and purple). This redundancy provides the third
important aspect of VPLEX/VE high availability: a single communication
path failure will not cause a system outage.
Different subnets are required for Front-end, Back-end and WAN COM
traffic.
8. System Requirements
This section describes the prerequisites for configuring and using
VPLEX/VE.
- vSphere VMFS-5.
- Firefox 24 or later.
- Minimum requirement:
  - 1 iSCSI storage array per site.
  - 4 iSCSI connections per array; 2 from each storage processor.
The current release supports the EMC VNXe3200 Unified Storage System.
See the EMC Simple Support Matrix for VPLEX at support.emc.com for
more information on supported storage array types.
9. Pre-deployment Planning
This section explains planning and preparation required prior to the
deployment.
Refer to the EMC VPLEX/VE Configuration Worksheet as you work through
this section. The worksheet is provided in a Microsoft Word format and is
available on http://support.emc.com. You can save a copy and enter your
configuration settings directly into the file. You can also print a copy and
write the settings down on paper. In either case, the worksheet is provided
so that you can capture the details of your environment as you step
through the planning tasks.
As you complete the steps below, collect the details on the worksheet.
When you start the deployment and configuration tasks, refer back to the
worksheet to simplify the process.
- Deploy to the cluster; vSphere will select an ESXi host on which to place the vApp.
- Deploy to a specific ESXi host; note the ESXi host on the worksheet.
Note: Do not set the VMware HA admission control policy that configures
the ESXi hosts used for VPLEX/VE deployment as failover hosts. This
admission control policy restricts the ESXi hosts from participating in a
VPLEX/VE system.
Within a site, all 4 datastores must be visible to the 4 ESXi hosts that are
running vDirectors on that site. Resource pools are optional.
9.4 Networking requirements for deployment
The OVF deployment will use LAN and WAN resources to deploy the vApp.
The following choices are required:
The OVF deployment creates the vManagement Server for both sites and
creates the management IP connection on the LAN. For the vManagement
Server at each site:
- Assign a gateway.
- 2 vDS for WAN COM, connected to all ESXi hosts at both sites.
VLANs can be used to isolate network traffic over the virtual LAN:
- 2 VLAN IDs each for FE, BE, Local, and WAN traffic for Site-1.
- 2 VLAN IDs each for FE, BE, Local, and WAN traffic for Site-2.
Each COM type requires 2 connections on the IP network. For each site, for
4 vDirectors, assign 8 IP addresses:
Different subnets are required for Front-end, Back-end and WAN COM
traffic.
Site-1:
    Pair    Member    vDirector           Connection-1 IP    Connection-2 IP
    1       A         vDirector-1-1-A     ____               ____
    1       B         vDirector-1-1-B     ____               ____
    2       A         vDirector-1-2-A     ____               ____
    2       B         vDirector-1-2-B     ____               ____
Site-2:
    Pair    Member    vDirector           Connection-1 IP    Connection-2 IP
    1       A         vDirector-2-1-A     ____               ____
    1       B         vDirector-2-1-B     ____               ____
    2       A         vDirector-2-2-A     ____               ____
    2       B         vDirector-2-2-B     ____               ____
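For illustration only, a hypothetical Site-1 assignment (all addresses are
documentation placeholders, not recommendations) that keeps Connection 1
and Connection 2 on different subnets:

    vDirector          Connection-1     Connection-2
    vDirector-1-1-A    192.0.2.11       198.51.100.11
    vDirector-1-1-B    192.0.2.12       198.51.100.12
    vDirector-1-2-A    192.0.2.13       198.51.100.13
    vDirector-1-2-B    192.0.2.14       198.51.100.14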
Mirror logging volumes across two or more back-end arrays, as they are
critical to recovery after the link is restored.
The certificate authority for both sites can have a validity of 1 to 5
years.
The host certificate for a VPLEX/VE site can expire in 1-2 years.
EMC recommends that you use the same values for both sites.
Different subnets are required for Front-end, Back-end and WAN COM
(Inter-Site) networks.
VLANs are optional. If you use VLANs, ensure that all the ports for
Connection 1 are assigned to one VLAN and all ports for Connection 2 are
assigned to another VLAN.
10.1 Overview
10.2 OVF Deployment

vCenter Server login:
    IP Address: ____
    User: ____
    Password: ____
VPLEX/VE OVA file location: ____
Cluster Name: ____

Deploy Site-1:
    Field Name                                               Value
    vApp Name                                                ____
    ESXi Host (optional)                                     ____
    Deployment datastore (at least 240 GB free capacity)     ____
    Resource Pool (optional)                                 ____
    Network mapping: VPLEX/VE Local                          Customer LAN
    Custom deployment attributes for vManagement Server:
        IP Address                                           ____
        Subnet Mask                                          ____
        Gateway                                              ____

Deploy Site-2:
    Field Name                                               Value
    vApp Name                                                ____
    ESXi Host (optional)                                     ____
    Deployment datastore (at least 240 GB free capacity)     ____
    Resource Pool (optional)                                 ____
    Network mapping: VPLEX/VE Local                          Customer LAN
    Custom deployment attributes for vManagement Server:
        IP Address                                           ____
        Subnet Mask                                          ____
        Gateway                                              ____
10.3

vManagement Server login:
    Field Name    Value
    IP Address    ____
    User ID       service
    Password      Mi@Dim7T

vCenter Server credentials:
    IP Address    ____
    Username      ____
    Password      ____

Site-2 vManagement Server:
    IP Address    ____

10.4 ESXi Hosts

    Site-1:    ____    ____    ____    ____
    Site-2:    ____    ____    ____    ____

Select Datastores:
    Site-1:    ____    ____    ____    ____
    Site-2:    ____    ____    ____    ____
10.5 Network Setup - Virtual Switches and Ports

Use the table below if the switch names are the same for all ESXi hosts.

Site-1:
    Network                             Connection-1                       Connection-2
    Front-end IP SAN (ESXi-facing)      Switch Name ____  VLAN ID ____     Switch Name ____  VLAN ID ____
    Back-end IP SAN (Array-facing)      Switch Name ____  VLAN ID ____     Switch Name ____  VLAN ID ____

Site-2:
    Network                             Connection-1                       Connection-2
    Front-end IP SAN (ESXi-facing)      Switch Name ____  VLAN ID ____     Switch Name ____  VLAN ID ____
    Back-end IP SAN (Array-facing)      Switch Name ____  VLAN ID ____     Switch Name ____  VLAN ID ____

Front-end IP Ports (ESXi-facing). Use different subnets for Front-end,
Back-end and WAN (Inter-Site) traffic; Connection 1 and Connection 2 must
be on different subnets. Each entry is IP Address / Subnet Mask /
VLAN ID (optional).

Site-1:
    vDirector           Connection-1    Connection-2
    vDirector-1-1-A     ____            ____
    vDirector-1-1-B     ____            ____
    vDirector-1-2-A     ____            ____
    vDirector-1-2-B     ____            ____

Site-2:
    vDirector           Connection-1    Connection-2
    vDirector-2-1-A     ____            ____
    vDirector-2-1-B     ____            ____
    vDirector-2-2-A     ____            ____
    vDirector-2-2-B     ____            ____

10.6 Security Certificates

    Certificate authority:    Passphrase ____    Expiration ____
    Host certificate:         Passphrase ____    Expiration ____

10.7

    Passphrase ____    Expiration ____

    Site-1:    1 ____    2 ____    3 ____    4 ____
    Site-2:    1 ____    2 ____    3 ____    4 ____
10.8

Site-1:
    Meta-volume:            Array Name ____    Volume Name ____
    Meta-volume backups:    Array Name ____    Volume Names ____
    Logging volumes:        Array Name ____    Volume Names ____
Site-2:
    Meta-volume:            Array Name ____    Volume Name ____
    Meta-volume backups:    Array Name ____    Volume Names ____
    Logging volumes:        Array Name ____    Volume Names ____
10.9

Default values are listed in parentheses.

Site-1 WAN COM (Inter-Site):
    Multicast Address (224.100.100.100):    ____
    Port (10000):                           ____
    Listening Port (11000):                 ____

    Field                   Connection-1    Connection-2
    Subnet Prefix           ____            ____
    Subnet Mask             ____            ____
    Site Address            ____            ____
    MTU (1500)              ____            ____
    Gateway (optional)      ____            ____

    vDirector IP addresses (Connection-1 / Connection-2):
        vDirector-1-1-A    ____ / ____
        vDirector-1-1-B    ____ / ____
        vDirector-1-2-A    ____ / ____
        vDirector-1-2-B    ____ / ____

Site-2 WAN COM (Inter-Site):
    Multicast Address (224.100.100.100):    ____
    Port (10000):                           ____
    Listening Port (11000):                 ____

    Field                   Connection-1    Connection-2
    Subnet Prefix           ____            ____
    Subnet Mask             ____            ____
    Site Address            ____            ____
    MTU (1500)              ____            ____
    Gateway (optional)      ____            ____

    vDirector IP addresses (Connection-1 / Connection-2):
        vDirector-2-1-A    ____ / ____
        vDirector-2-1-B    ____ / ____
        vDirector-2-2-A    ____ / ____
        vDirector-2-2-B    ____ / ____
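As a filled-in illustration (hypothetical values only; the parenthesized
defaults above work in most cases for a bridged network), a Site-1 WAN COM
plan might look like:

    Multicast Address: 224.100.100.100    Port: 10000    Listening Port: 11000
    Connection-1: subnet 203.0.113.0, mask 255.255.255.0, MTU 1500
    Connection-2: subnet 198.51.100.0, mask 255.255.255.0, MTU 1500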
10.10

    Field Name                                                      Value
    Account and password to log into the ESXi server where the
    Cluster Witness Server VM is deployed (this password allows
    you to log into the Cluster Witness Server VM)                  ____
    Host certificate passphrase for the Cluster Witness
    certificate                                                     ____

Notes:
- Cluster Witness requires the management IP network to be separate from the inter-cluster network.
- Cluster Witness functionality requires these protocols to be enabled by the firewalls configured on the management network.
Deploy the OVF Template for both sites by running the OVF Deployment
Wizard in the vSphere Client, then run the Setup Wizard until configuration
is complete.
Note: For detailed installation instructions, see the EMC VPLEX/VE for
VMware vSphere Product Guide.
11.1
This section describes the tasks required to deploy the VPLEX/VE vApp into
a VMware ESXi cluster.
1. Perform preliminary tasks.
2. Ensure the server, storage, and network requirements are met.
3. Deploy VPLEX/VE vApps for Site-1 and Site-2: from the vCenter Server,
using the vSphere Web Client, select Deploy OVF Template.
In the Username field, type the user name for vCenter Server.
In the Password field, type the password for the vCenter Server.
Click Login.
To deploy the vApp on the cluster, from the left panel, right-click on
the cluster where you want to deploy VPLEX/VE, and select Deploy OVF
Template (Figure 10).
8. In the Accept EULAs screen, read the EULA and click Accept.
9. Click Next.
10. In the Name field, type a name for the vApp that you are deploying.
The name can contain up to 80 characters and it must be unique in the
inventory folder. The name of the vApp helps you identify the
deployment at a later stage. A sample name for a VPLEX/VE vApp is as
follows: VPLEXVE_Instance_Site1.
11. In the Select storage screen, select a datastore (one that is visible to
all the ESXi hosts in Site-1 and has a capacity of 240 GB) to store the
files of the vApp. From the Select virtual disk format drop-down, select
a format for the virtual disk. Select one of the following options (your
selection will not have any significant impact on the performance of
VPLEX/VE):
- Thick Provision Lazy Zeroed: allocates the space when the disk is created; blocks are zeroed on first write.
- Thick Provision Eager Zeroed: takes longer to create the disk, but results in the best performance, even on the first write to each block.
- Thin Provision: allocates space on demand as data is written.
13. In the Customize template screen, provide the details for the
vManagement Server as follows:
Site: Select the VPLEX/VE site where you want to deploy the OVA
template.
14. In the Ready to Complete screen, review the details that you have
provided for the deployment. To power on the virtual machine that
contains the VPLEX/VE vManagement Server, select the Power on after
deployment checkbox.
15. Click Finish to start the deployment process. The deployment dialog
box appears with the status of the vApp deployment. Depending on
your network, the deployment process can take a minimum of 10
minutes.
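If you prefer to script the deployment instead of using the wizard, VMware's
ovftool can perform the same steps. The sketch below is illustrative only:
the OVA file name, vCenter address, inventory path, datastore, and network
names are placeholders, and running ovftool against the OVA by itself first
lists the actual networks and configurable properties.

    # Probe the OVA to list its networks and configurable properties.
    ovftool VPLEX-VE.ova

    # Deploy the Site-1 vApp (placeholder names throughout).
    ovftool \
      --acceptAllEulas \
      --name=VPLEXVE_Instance_Site1 \
      --datastore=Site1_Deploy_DS \
      --net:"VPLEX/VE Local"="Customer LAN" \
      --powerOn \
      VPLEX-VE.ova \
      'vi://administrator@vcenter.example.com/Datacenter/host/StretchedCluster'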
16. Repeat these steps to deploy the VPLEX/VE vApp on VPLEX/VE Site-2,
using the Site-2 values from the configuration worksheet.
11.2
17. If the vManagement Server is powered off, power it on manually. All the
vDirectors must remain powered off during the VPLEX/VE configuration.
18. The VPLEX/VE Online Help provides detailed information about the
VPLEX/VE configuration using the Setup Wizard. Before starting the
Setup Wizard, ensure that you have filled in the Configuration
Worksheet. You can refer to the worksheet to fill in the details required by
the Setup Wizard.
21. The Setup Wizard has two phases. At the end of the first phase, you
must note the IQNs and pass them to your storage administrator. You must
do this before you start the second phase of the Setup Wizard.
22. To configure VPLEX/VE, you must connect the Setup Wizard to the
vCenter Server application on which you have configured your vSphere
cluster.
23. To configure the VPLEX/VE Site-2, you need the IP address of the
vManagementServer of Site-2.
24. To assign ESXi hosts to the VPLEX/VE sites, you need the list of the
ESXi hosts (four hosts in a VPLEX/VE site). These ESXi hosts require at
least 6 virtual CPU cores and 18 GB of virtual memory. You cannot
configure non-ESXi hosts to VPLEX/VE. Select the ESXi hosts, then
assign them to sites with the >> button.
25. To assign the vDirectors to the ESXi hosts, you need the list of hosts
that you have assigned to the VPLEX/VE sites already.
26. The ESXi hosts that host the director virtual machines in a VPLEX/VE
site must share four datastores. When you deployed VPLEX/VE, you
selected a datastore (the deployment datastore) to store the vApp files;
it must be visible to all the ESXi hosts. The Setup Wizard enables you to
select three more datastores for the ESXi hosts in a site. All the ESXi
hosts in a site must share these datastores.
27. The deployment datastore is already selected on the Setup Wizard. You
cannot modify this selection. Each datastore that you select here must
have at least 40 GB of space.
Use vDS and vSS that have the same names on all the ESXi hosts.
30. To set up the virtual interfaces, you require the following information:
- The name of the virtual switch and the VLAN ID (optional) for the VPLEX/VE Local COM (Intra-Site) network.
- The name of the virtual switch and the VLAN ID (optional) for the WAN COM (Inter-Site) network.
- The name of the virtual switch and the VLAN ID (optional) for the Front-end IP SAN (ESXi-facing) network.
- The name of the virtual switch and the VLAN ID (optional) for the Back-end IP SAN (Array-facing) network.
31. The first part of the Setup Wizard enables you to configure the
Front-end IP SAN (ESXi-facing) network ports of the vDirectors in
VPLEX/VE Site-1. The network ports of each vDirector must be configured
in different subnets. Each connection of the network interface must also be
configured in separate subnets. To configure the IP network ports, you
need the IP addresses and the subnet mask details of these interfaces.
VLAN IDs are optional.
Figure 36 - Back-end IP configuration at site 2
35. Using the Setup Wizard, you can create security certificates that ensure
authorized access to the resources in a VPLEX/VE system. These are
self-signed certificates. Passphrases require a minimum of 8
alphanumeric characters. The certificate authority for both the sites can
have a validity of 1 to 5 years. The host certificate for a VPLEX/VE site
can expire in 1-2 years. EMC recommends that you use the same values
for both sites.
36. Using the Setup Wizard, you can assign the storage arrays for the
operational storage for the system volumes in VPLEX/VE. Optionally,
you can add storage arrays that will be used for VPLEX/VE provisioning.
To add storage, you require:
38. Before you finish the first part of the Setup Wizard, review the settings
that are displayed on the screen. Take a screen capture before you start
running the configuration. Running the configuration commands can take
30 to 35 minutes. Do not close the Web browser until the
configuration is complete.
11.3
40. Before you can continue with the second part of the Setup Wizard, your
storage administrator must provision iSCSI storage from VNXe3200 to
the vDirector for the meta-volumes, backup meta-volumes and logging
volumes. Give the storage administrator the list of VPLEX/VE vDirector
back-end virtual port IQNs exported from Setup Wizard Part One above.
See the Storage requirements section for details on the system volume
requirements.
In addition, the storage administrator can provision iSCSI storage for
VPLEX/VE to use as backing devices for distributed datastores. This
storage is provisioned to the vDirector back-end port IQNs. Although
the task can be completed at a later time, you will not be able to
provision distributed datastores until this task is completed.
To learn about iSCSI storage provisioning for EMC storage arrays, go to
the EMC Online Support Site https://support.emc.com.
11.4
41. The second part of the VPLEX/VE Setup Wizard enables you to
configure the system volumes and the Inter-site network.
42. To launch Phase II of the VPLEX/VE Setup Wizard, refresh the browser
connection to the vManagement Server of VPLEX/VE Site-1.
43. The Setup Wizard enables you to review and modify the array
information that you have entered in the first part. You can also
rediscover the arrays that are not appearing on the screen after you
added them.
45. The storage for the meta-volume backup in a VPLEX/VE site must have
a minimum capacity of 20 GB. Select a minimum of two meta-volume
backups for a site. If you have multiple arrays, select the volumes from
two different arrays.
46. You can create a schedule for the meta-volume backup.
47. Logging volumes store the I/Os that occur during a site outage. The
data in the logging volumes is used to synchronize the sites after the
site recovers. To create a logging volume for a RAID-1 device in a site,
assign two volumes that have a capacity of 20 GB each. For a
single-extent device, assign one volume that has a capacity of 20 GB.
VLANs are optional. If you use VLANs, ensure that all the ports for
Connection 1 are assigned to one VLAN and all ports for Connection 2 are
assigned to another VLAN.
Different subnets are required for Front-end, Back-end and WAN COM
(Inter-Site) networks.
Both bridged (L2) and routed (L3) networks are supported. The default
values work in most cases for a bridged network.
52. Before you finish this part of the Setup Wizard, review the settings that
are displayed on the screen.
53. After the review, run the configuration. Do not close the Web browser until the
configuration is complete.
11.5
54. After completing the configuration, you must configure the ESXi hosts
to consume the VPLEX/VE storage.
55. To use the VPLEX/VE features, you must connect the ESXi hosts to the
storage that is presented by VPLEX/VE. Before doing this, ensure that:
- You have configured the product using the VPLEX/VE Setup Wizard successfully.
- The ESXi hosts that you want to connect to the VPLEX/VE storage belong to the vSphere cluster where the VPLEX/VE system is running.
- The ESXi hosts connect to the VPLEX/VE storage using only the built-in software iSCSI initiator. The initiator must be enabled on each ESXi host.
56. Connecting ESXi hosts to the VPLEX/VE storage involves the following
tasks:
63. On the left panel, under the vSphere cluster, navigate to the ESXi host.
64. Click the Manage tab on the top of the window.
65. Click Storage and then click Storage Adapters on the left.
66. In the list of storage adapters, under iSCSI Software Adapters, select
the software iSCSI initiator. Normally, the iSCSI initiator is named
vmhba32.
67. In the Storage Adapter Details section on the bottom, click Network
Port Binding.
68. Click + to add a new network port binding.
69. In the VMkernel network adapter screen, select the VMkernel adapter
for your VPLEX/VE Front-End Connection 1 network.
70. Repeat the steps to bind the software iSCSI initiator to the VMkernel
adapter for the VPLEX/VE Front-End Connection 2 network.
71. To enable the iSCSI initiator to find the VPLEX/VE storage, you must
add the iSCSI send targets on the VPLEX/VE storage to the software
iSCSI initiator on the ESXi host.
72. In the Storage Adapter Details section on the bottom, click Targets.
73. Click Dynamic Discovery.
75. In the Add Send Target Server dialog box, type the IP address of a
VPLEX/VE FE port. Keep the default port number 3260. Ensure that the
Inherit settings from parent checkbox is selected.
76. When you add a single port to the Dynamic Discovery, all the front-end
ports of the VPLEX/VE system are added to the static discovery
automatically.
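The same port binding and discovery steps can also be performed from the
ESXi shell. This is a minimal sketch: the adapter name vmhba32 follows the
example above, while the VMkernel interface names (vmk1, vmk2) and the
target address are placeholders for your Front-End Connection 1 and 2
networks.

    # Bind the software iSCSI adapter to the VMkernel ports for
    # VPLEX/VE Front-End Connection 1 and Connection 2.
    esxcli iscsi networkportal add --adapter=vmhba32 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba32 --nic=vmk2

    # Add one VPLEX/VE front-end port as a dynamic discovery (send) target;
    # the remaining FE ports are added to static discovery automatically.
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba32 --address=192.0.2.10:3260

    # Rescan the adapter so the VPLEX/VE devices appear.
    esxcli storage core adapter rescan --adapter=vmhba32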
11.6
The vSwitch that hosts the client VLANs is configured with sufficient
ports to accommodate the maximum number of virtual machines it
may host.
All required virtual machine port groups are configured, and each server
has access to the required VMware datastores.
An interface is configured correctly for vMotion using the material in the
vSphere Networking guide (a CLI check follows this list).
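As a quick check from the ESXi shell (a sketch; the interface name vmk3 is
a placeholder for your vMotion VMkernel interface):

    # List VMkernel interfaces and confirm the vMotion interface exists.
    esxcli network ip interface list

    # Tag the interface for vMotion if it is not already enabled.
    esxcli network ip interface tag add --interface-name=vmk3 --tagname=VMotion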
Check the configuration of the VLANs for the Fabric Interconnect (FI) ports
on the switches. This applies to both private and public VLANs. To prevent
a disjoint network, assign VLANs to the uplink ports on the switches from
the FIs so that the proper traffic takes the proper path (public
management and private iSCSI I/O). In this case, these paths exist on two
different switches.
The end-to-end iSCSI MTU should be the same. By default, iSCSI uses
jumbo frames, which should be set up on the switches, servers, FIs, and
VNXe ports. The change helps to improve network performance end to end.
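For reference, a sketch of setting a 9000-byte MTU from the ESXi shell; the
names vSwitch1 and vmk2 are placeholders, and distributed switch MTUs are
set from the vSphere Web Client instead:

    # Set jumbo frames on a standard vSwitch (placeholder name).
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

    # Match the MTU on the VMkernel interface that carries iSCSI traffic.
    esxcli network ip interface set --interface-name=vmk2 --mtu=9000

    # Verify end to end with a do-not-fragment ping to a VNXe iSCSI port
    # (placeholder address; 8972 bytes = 9000 minus IP and ICMP headers).
    vmkping -d -s 8972 192.0.2.20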
Confirm that the network drivers are compatible with the ESXi version.
Review the network vendor documentation for requirements.
To find the ENIC and FNIC versions, log in to the host via SSH (enable SSH
first) and issue the following commands:

ENIC:  ethtool -i vmnic0
FNIC:  vmkload_mod -s fnic
All operations remain functional except NDU, which will fail if there are no
recent backups.
VPLEX/VE supports distances of up to 10 ms RTT between datacenters,
but you must validate connectivity with the VPlexcli:
VPlexcli> connectivity validate-local-com
VPlexcli> connectivity validate-wan-com
VPlexcli> connectivity validate-be-com
VPlexcli> cluster status
When a new ESXi host is added, one manual step will be required after
adding the software iSCSI adapter in order to tell VPLEX/VE which site that
host is a part of (normally, this is handled during the initial configuration).
Once that is done, the plugin will automatically register its initiator and add
it to the storage view, so that provisioning will work for that host.
The command to use is configuration virtual hosts-on-physical-sites add.
Run configuration virtual hosts-on-physical-sites list first, and you should
see the listing as it stands today.
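For example (the list command shows the current assignments; consult the
CLI help for the exact arguments that add expects before running it):

    VPlexcli> configuration virtual hosts-on-physical-sites list
    VPlexcli> configuration virtual hosts-on-physical-sites add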
For VNXe, to register IQNs, simply add them to a storage group. Unisphere
for the VNXe is different from Unisphere for other VNX arrays.
13. Appendix-B - References
13.1 EMC documentation
13.2 Other documentation