STORING THE FUTURE
Introduction
The procedures in this document describe the steps required to configure the
INFINIDAT InfiniBox for use with EMC VPLEX, a virtual storage technology that connects to
multiple storage arrays, enabling data migration and mirroring across sites.
METADATA VOLUMES
Metadata volumes are critical to the proper function of the VPLEX system. VPLEX Meta Data
Volumes, or Meta Volumes, contain information about devices, physical-to-virtual device
mappings and other internal system configuration data. The importance of the information on
these volumes justifies a high level of Meta Volume data redundancy. Meta Volumes are
provisioned as RAID 1 along with a minimum of two additional point-in-time copies (one 24
hours old, the other 48 hours old). It is highly recommended that Meta Volume RAID 1
members be stored on two physically separate storage arrays, using array-provided RAID
protection for each member.
LOGGING VOLUMES
A logging volume is dedicated capacity for tracking any blocks written to a cluster. A logging
volume is a prerequisite for creating a distributed device or a remote device. Logging
volumes keep track of any blocks written during inter-cluster link failure. The system uses the
information in logging volumes to synchronize the distributed devices by sending only changed
block regions across the link.
VIRTUAL VOLUMES
At the top layer of the VPLEX storage structures are virtual volumes. Virtual volumes are the
elements VPLEX exposes to hosts using its front-end (FE) ports. Access to virtual volumes is
controlled using storage views. Storage views act as logical containers determining host
initiator access to VPLEX FE ports and virtual volumes.
Zoning configuration
InfiniBox provisioning
Zoning configuration
Zone the InfiniBox storage array to the VPLEX back-end ports. Follow the recommendations in
the “Implementation and Planning Best Practices for EMC VPLEX Technical Notes”.
Note: To ensure high data availability, present each node of the storage array to each
director of the VPLEX along separate physical paths.
The general rule is to use a configuration that provides the best combination of simplicity and
redundancy. For back-end storage connectivity, the recommended SAN topology is a dual SAN
fabric design that supplies redundant and resilient inter-hardware connectivity.
Each director in a VPLEX cluster must have a minimum of two paths to every local back-end
storage array and to every storage volume presented to VPLEX.
InfiniBox contains three or more independent interconnected nodes. Each node should
have a minimum of two ports connected to the VPLEX back-end ports via physically
separate SAN fabrics.
When configuring mirroring or migration across arrays, it is suggested that each array
be accessed through different back-end director ports.
ZONING RECOMMENDATIONS
Physical connectivity
Logical zoning
Zone VPLEX director A-00 ports to Port 1 of InfiniBox Node 1 and Node 2.
Zone VPLEX director B ports to Port 5 of each InfiniBox node.
Map volumes to allow access to the appropriate VPLEX initiators for each port group.
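As an illustration, zones like these can be created on a Brocade fabric as follows. This is
a minimal sketch: the aliases and WWPNs are hypothetical placeholders, only one zone on one
fabric is shown, and other switch vendors use different syntax.
switch:admin> alicreate "VPLEX_A0_FC00", "50:00:14:42:60:42:4e:00"
switch:admin> alicreate "IBOX_N1_P1", "57:42:b0:f0:00:11:22:33"
switch:admin> zonecreate "Z_VPLEX_A0_IBOX_N1P1", "VPLEX_A0_FC00; IBOX_N1_P1"
switch:admin> cfgadd "FABRIC_A_CFG", "Z_VPLEX_A0_IBOX_N1P1"
switch:admin> cfgsave
switch:admin> cfgenable "FABRIC_A_CFG"
Repeat the equivalent zoning on the second fabric for the B-side paths.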
InfiniBox provisioning
Hosts, and then clusters, must be created on InfiniBox in order to map provisioned storage
volumes. Hosts are groupings of initiators that are associated with a physical host, and
clusters are user-defined groupings of those hosts. The zoned initiators of each VPLEX engine
should be grouped into a single host, and these hosts should then be grouped into a cluster
representing the VPLEX cluster.
Once created, storage volumes can be mapped to all grouped initiators of a given connected
host. This section describes host/cluster creation, volume creation and then volume to cluster
mapping.
Creating a host
Creating a cluster
Creating volumes
Mapping volumes to clusters
CREATING A HOST
Choose friendly host names that describe the host being created. For example, if creating a
host for VPLEX Cluster 1 Engine 1, one might enter Plex-C1E1. Using names that help identify
the initiators facilitates maintenance and lifecycle activities.
Step 1 On the InfiniBox GUI, click the Hosts & Clusters button on the toolbar on the left.
CREATING A CLUSTER
Step 1 On the InfiniBox GUI, click the Hosts & Clusters button on the toolbar on the left.
CREATING A POOL
Step 1 On the InfiniBox GUI, click the Pools button on the toolbar on the left.
Optionally, click the Advanced button to change more of the pool’s default settings.
Click Create. The pool is created.
CREATING A VOLUME
Step 1 On the InfiniBox GUI, click the Volumes button on the toolbar on the left.
The Volumes screen opens.
-OR-
Right-click the pool and select Create Volume from the menu.
Click Create. The volume is created. In our example, 10 volumes were created and
are available on the Volumes screen.
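The same flow can also be scripted from the InfiniShell command line instead of the GUI. The
sketch below is hedged: it assumes InfiniShell's object.operation command convention, and all
object names, WWPNs, and sizes are illustrative placeholders; verify the exact commands and
parameters against the InfiniShell reference for your InfiniBox release.
pool.create name=vplex_pool physical_capacity=100TB
host.create name=Plex-C1E1
host.add_port name=Plex-C1E1 port=5000144260424e00
cluster.create name=VPLEX-C1
cluster.add_host name=VPLEX-C1 host=Plex-C1E1
vol.create name=vplex_vol_01 pool=vplex_pool size=90GB
vol.map name=vplex_vol_01 cluster=VPLEX-C1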
VPLEX Provisioning
In order to present devices to hosts, there are a number of steps to follow when provisioning
storage on the VPLEX:
LUNs created on the InfiniBox are mapped to the VPLEX ports. Appropriate zoning
must be configured on the Fibre Channel switch that is attached to both devices.
VPLEX is configured to claim the mapped LUNs. Extents are created on the claimed
LUNs.
Striped, mirrored, or concatenated devices (RAID 0, RAID 1, and RAID C geometries,
respectively) can be provisioned by combining the created extents, depending on
application performance/resilience and capacity requirements. Additionally,
encapsulated (1:1 mapped) devices can be created when claimed LUN data must be
preserved and ‘imported’ into the VPLEX.
The aforementioned device RAID geometries can be spanned across VPLEX clusters to
provide geographically diverse VPLEX RAID configurations.
Virtual volumes are created from these devices and are then exported to connected
hosts.
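A hedged VPlexcli sketch of this flow for a single claimed LUN follows; the object names are
illustrative, and the exact options should be checked against the EMC VPLEX CLI Guide for
your GeoSynchrony release.
VPlexcli:/> extent create -d NFINIDAT_volume_1
VPlexcli:/> local-device create --name dev_infinibox_1 --geometry raid-0 --extents extent_NFINIDAT_volume_1_1 --stripe-depth 1
VPlexcli:/> virtual-volume create -r dev_infinibox_1
The resulting virtual volume (named dev_infinibox_1_vol by convention) can then be added to a
storage view, as described later in this document.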
Step 4 Cut and paste the command output and save it to a file on the management
server.
Step 5 Each claimed LUN needs a unique name. Preselect a unique string that will help
identify the LUNs to be claimed. Names:
Can only begin with an underscore or a letter
Can only contain letters, numbers, hyphens, or underscores for the remaining
characters
Cannot exceed 58 characters
Should end in an underscore
Cannot end in a hyphen
Examples:
InfiniBox_20140101
InfiniBox_aa3721_
Where:
file1 is the name of the file to which you saved the storage volume output
claim_name is the unique name you selected for the LUNs to be claimed
filename.txt is the name of the file you will use during the claiming wizard step.
Edit filename.txt to add the phrase Generic storage-volumes to the very top of the
file.
TIP: The Linux-based VPLEX management console includes vim, which can be
used to create and edit text files.
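One way to produce filename.txt from the saved output is with standard Linux tools on the
management server. A minimal sketch, assuming file1 holds the saved output, the VPD83
identifier is the second whitespace-separated field of each line, and InfiniBox_ is the
chosen unique string; check the resulting file against the format expected by the
claimingwizard before using it.
service@VPLEX01:/tmp> awk '/VPD83T3/ {print $2 " InfiniBox_" NR "_"}' file1 > filename.txt
service@VPLEX01:/tmp> sed -i '1i Generic storage-volumes' filename.txt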
Step 7 Enter the following command to claim the LUNs using the VPLEX claimingwizard.
Example:
service@VPLEX01:/tmp> vplexcli
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Password:
creating logfile: /var/log/VPlex/cli/session.log_service_localhost_T10175_20150205190610
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claimingwizard -f /tmp/NFINIDAT.txt -c cluster-1
Found unclaimed storage-volume VPD83T3:6742b0f0000004280000000000005cb1 vendor NFINIDAT : claiming and naming NFINIDAT_volume_4.
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> ll
Name               VPD83 ID                                  Capacity  Use      Vendor    IO Status  Type    Thin Rebuild  VIAS Based
-----------------  ----------------------------------------  --------  -------  --------  ---------  ------  ------------  ----------
NFINIDAT_volume_1  VPD83T3:6742b0f0000004280000000000005cae  2G        claimed  NFINIDAT  alive      normal  false         false
NFINIDAT_volume_2  VPD83T3:6742b0f0000004280000000000005caf  2G        claimed  NFINIDAT  alive      normal  false         false
NFINIDAT_volume_3  VPD83T3:6742b0f0000004280000000000005cb0  2G        claimed  NFINIDAT  alive      normal  false         false
NFINIDAT_volume_4  VPD83T3:6742b0f0000004280000000000005cb1  2G        claimed  NFINIDAT  alive      normal  false         false
Create a meta-volume
As discussed, VPLEX requires four LUNs (78 GB minimum each) for metadata volumes: two for the
meta-volume RAID 1 mirror and two for its backup copies.
Step 3 Use the meta-volume create command to create a new meta-volume. The syntax
for the command is:
meta-volume create --name meta-volume_name --storage-volumes storage-volume_1,storage-volume_2,storage-volume_3
Where:
meta-volume_name is a name assigned to the meta-volume.
storage-volume_1 is the VPD (vital product data) name of the meta-volume.
storage-volume_2 is the VPD name of the mirror.
The mirror can consist of multiple storage volumes (which will become a RAID 1),
in which case you would include each additional volume, separated by commas.
The meta-volume and mirror must be on separate arrays, and should be in
separate failure domains. This requirement also applies to the mirror volume and
its backup volume.
Note: Storage volumes must be unclaimed and on different arrays.
VPlexcli:/> meta-volume create --name c1_meta --storage-volumes VPD83T3:6742b0f00000042800000000000118d2,VPD83T3:6742b0f00000042800000000000118d3
This may take a few minutes...
Meta-volume c1_meta is created at /clusters/cluster-1/system-volumes.
Step 4 Use the ll command to display the new meta-volume’s status, and verify that the
active attribute shows a value of true.
VPlexcli:/clusters/cluster-1/system-volumes> ll c1_meta
/clusters/cluster-1/system-volumes/c1_meta:
Attributes:
Name Value
Page 15
STORING THE FUTURE
---------------------- ------------
active true
application-consistent false
block-count 23592704
block-size 4K
capacity 90G
component-count 2
free-slots 64000
geometry raid-1
health-indications []
health-state ok
locality local
operational-status ok
ready true
rebuild-allowed true
rebuild-eta -
rebuild-progress -
rebuild-status done
rebuild-type full
slots 64000
stripe-depth -
system-id c1_meta
transfer-size 128K
vias-based false
volume-type meta-volume
Contexts:
Name        Description
----------  -----------------------------------------------------------
components  The list of components that support this device or system virtual volume.
VPlexcli:/clusters/cluster-1/system-volumes/c1_meta> ll components/
/clusters/cluster-1/system-volumes/c1_meta/components:
Name                                      Slot Number  Type            Operational Status  Health State  Capacity
----------------------------------------  -----------  --------------  ------------------  ------------  --------
VPD83T3:6742b0f00000042800000000000118d2  0            storage-volume  ok                  ok            90G
VPD83T3:6742b0f00000042800000000000118d3  1            storage-volume  ok                  ok            90G
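The two point-in-time copies of the meta-volume described earlier are produced by the
scheduled metadata backup. As a pointer rather than a full procedure: the interactive
VPlexcli wizard below prompts for the backup storage volumes and the daily backup time, and
its prompts vary by GeoSynchrony release.
VPlexcli:/> configuration metadata-backup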
Step 3 Create the logging volume. The syntax for the command is:
logging-volume create --name name --geometry [raid-0 | raid-1] --extents context-path --stripe-depth depth
Where:
--name - The name for the new logging volume.
--geometry - Valid values are raid-0 or raid-1.
--extents - Context paths of one or more extents to use to create the logging volume.
--stripe-depth - Required if --geometry is raid-0. Stripe depth must be greater than
zero, no greater than the number of blocks of the smallest element of the RAID 0
device being created, and a multiple of 4 KB.
For example:
VPlexcli:/clusters/cluster-1/system-volumes> logging-volume create --name c1-logging-volume --geometry raid-1 --extents extent_se-logging-source01_1,extent_se-logging-source02_1
Logging-volume 'c1-logging-volume_vol' is created at /clusters/cluster-1/system-volumes.
VPlexcli:/clusters/cluster-1/system-volumes> ll
Name                   Volume Type     Operational Status  Health State  Active  Ready  Geometry  Component Count  Block Count  Block Size  Capacity  Slots
---------------------  --------------  ------------------  ------------  ------  -----  --------  ---------------  -----------  ----------  --------  -----
c1-logging-volume_vol  logging-volume  ok                  ok            -       -      raid-1    2                262560       4K          1G        -
VPlexcli:/clusters/cluster-1/system-volumes/c1-logging-volume_vol> ll components/
/clusters/cluster-1/system-volumes/c1-logging-volume_vol/components:
Name                          Slot Number  Type    Operational Status  Health State  Capacity
----------------------------  -----------  ------  ------------------  ------------  --------
extent_se-logging-source01_1  0            extent  ok                  ok            1G
extent_se-logging-source02_1  1            extent  ok                  ok            1G
On a cluster, click Storage Arrays, select the array, and then click Show Logical Units.
These are the devices that the cluster can see; ensure that the cluster can see the LUNs you
intend to use to create your devices.
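The same check can be performed from the CLI by listing the array contexts and their logical
units; the array context name below is an illustrative placeholder.
VPlexcli:/> ll /clusters/cluster-1/storage-elements/storage-arrays/
VPlexcli:/> ll /clusters/cluster-1/storage-elements/storage-arrays/NFINIDAT-InfiniBox-12345/logical-units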
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claim VPD83T3:6742b0f0000004280000000000003435 -n se-oraredo-vmax
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claim VPD83T3:6742b0f0000004280000000000003436 -n se-oradata-vmax
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes>
Load balancing across arrays: In this use case, there are multiple arrays behind VPLEX.
Because of capacity or performance reasons, or the need for some specific capability,
volumes need to be moved from one array to another. Both arrays remain in service after the
volume moves are complete.
Available operations:
Extent - performs intra-cluster move of data from one extent to another.
Device - performs intra-cluster move of data from one device to another.
Batch - a CLI-only option that groups extent or device mobility jobs into a batch job.
Migration procedure
1. Create a batch migration plan. A plan is a file that identifies the source and target
devices and other attributes.
2. Check the plan and then start the migration session.
3. Verify the status of the migration.
4. Verify that the migration has completed. When the migration completes, the
percentage done shows 100.
5. Once the synchronization completes, the migration session can be committed.
6. Clean up the migration. This dismantles the source device down to the storage volume,
and the source storage volume is changed to an unclaimed state.
7. Remove all information about the migration session from the VPLEX.
8. Perform post-migration tasks: depending on whether you want to redeploy the devices
for other uses in the VPLEX or remove the source storage system, perform the
necessary masking, zoning, and other configuration changes.
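A hedged VPlexcli sketch of steps 1 through 7 follows; the plan file name and device patterns
are illustrative, and the full option set is documented in the EMC VPLEX CLI Guide.
VPlexcli:/> batch-migrate create-plan migrate.txt --sources /clusters/cluster-1/devices/dev_src_* --targets /clusters/cluster-1/devices/dev_tgt_*
VPlexcli:/> batch-migrate check-plan migrate.txt
VPlexcli:/> batch-migrate start migrate.txt
VPlexcli:/> batch-migrate summary migrate.txt
VPlexcli:/> batch-migrate commit migrate.txt
VPlexcli:/> batch-migrate clean migrate.txt
VPlexcli:/> batch-migrate remove migrate.txt
batch-migrate pause and batch-migrate resume can be used on a running plan to pause
migrations during critical production hours, as recommended in the best practices later in
this document.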
Migration Steps
Initial state: Host writing I/Os to VPLEX virtual volume.
Step 3 VPLEX ensures that the volumes on the two arrays are in sync. Host READ I/Os are
directed to the source leg; host WRITE I/Os are sent to both legs of the mirror.
After both volumes are fully in sync, I/O continues until you decide to
disconnect the source volume. Even after the volumes are in sync, you have the
option to remove the destination volume and go back to the source.
Step 4 Once the volumes are in sync, disconnect the source volume / array.
From the host’s standpoint, nothing has changed.
Step 3 Select the device that you want to mirror and then click Next.
Step 4 On the next screen, select the source and target devices. Click both devices and
then click Add Mirror.
Step 5 Click Next to synchronize data, which brings you to the consistency group page.
At this point you can create a new group, add the device to an existing group, or
use no group at all. We will create a new consistency group.
Step 6 If you check Distributed Devices now, you will see your newly created mirrored
device.
Step 7 You’ll notice an “unexported” tag under the service status. This means that the
device has not yet been masked to an initiator, and therefore no storage views
exist for this volume.
Step 8 Go back to Cluster-1 and click Storage Views. You’ll see that a view already
exists that includes the initiator as well as the ports on the VPLEX that present
storage out to hosts. Go to the Virtual Volumes tab and you’ll see the volumes
that are already presented to the host. Add your virtual volume.
If you go back to Virtual Volumes in the Distributed Storage tab, you’ll see that the
service status is now ‘running’ instead of ‘unexported’. This also means that the host
can now see the newly created device.
Consider pausing data migration during critical hours of production and resuming it
during off-peak hours.
The default transfer size value is 2 MB. It is configurable from 4 KB to 32 MB. When the
transfer size is set large, migration is faster but can impact performance on the
front end. A smaller transfer size results in less front-end impact, but migrations
take longer.
A batch can process either extents or devices, but not a mix of both.
Configure the metadata volumes for each cluster with multiple back-end storage
volumes provided by different storage arrays of the same type.
Use InfiniRAID for metadata volumes. The data protection capabilities provided by
these storage arrays ensure the integrity of the system's metadata.
Use InfiniRAID for logging volumes. The data protection capabilities provided by the
storage array ensure the integrity of the logging volumes.
Each VPLEX cluster should have sufficient logging volumes to support its distributed
devices. The logging volume must be large enough to contain one bit for every page of
distributed storage space; for example, at a 4 KB page size, 320 TB of distributed
storage requires roughly 8.6 × 10^10 bits, or about 10 GB of logging volume. See EMC
documentation for exact sizing guidance.
For logging volumes the best practice is to mirror them across two or more back-end
arrays to eliminate the possibility of data loss on these volumes.
You can have more than one logging volume, and can select which logging volume is
used for which distributed device.
The logging devices can experience significant I/O bursts during and after link outages.
The best practice is to stripe each logging volume across many disks for speed and
also to have a mirror on a separate back-end array.
Volumes that will be used for logging volumes must be initialized (have zeros written to
their entire LBA range) before they can be used.
Each storage view contains a list of host/initiator ports, VPLEX FE ports, and virtual
volumes. A one-to-one mapping of storage view and host is recommended.
Each storage view should contain a minimum of two director FE ports, one from an A
director and one from a B director.
A storage view should contain a recommended minimum of two host initiator ports.
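For reference, a storage view with these minimums might be created in VPlexcli as follows.
This is a sketch with illustrative port names, initiator names, and volume names, and it
assumes the host initiator ports have already been registered.
VPlexcli:/> export storage-view create --cluster cluster-1 --name host1_view --ports P000000003CA00147-A0-FC00,P000000003CB00147-B0-FC00
VPlexcli:/> export storage-view addinitiatorport --view host1_view --initiator-ports host1_hba0,host1_hba1
VPlexcli:/> export storage-view addvirtualvolume --view host1_view --virtual-volumes dev_infinibox_1_vol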
© Copyright INFINIDAT LTD 2015.
This document is current as of the date of publication and may be changed by INFINIDAT at
any time. Not all offerings are available in every country in which INFINIDAT operates.
The data discussed herein is presented as derived under specific operating conditions. Actual
results may vary. THE INFORMATION IN THIS DOCUMENT IS PROVIDED “AS IS” WITHOUT
ANY WARRANTY, EXPRESSED OR IMPLIED, INCLUDING WITHOUT ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR
CONDITION OF NON-INFRINGEMENT. INFINIDAT products are warranted according to the
terms and conditions of the agreements under which they are provided.
INFINIDAT, The INFINIDAT logo, InfiniBox, InfiniRAID, InfiniSnap, InfiniMetrics, and any other
applicable product trademarks are registered trademarks or trademarks of INFINIDAT LTD in
the United States and other countries. Other product and service names might be trademarks
of INFINIDAT or other companies. A current list of INFINIDAT trademarks is available online at
http://www.infinidat.com/legal/trademarks/