Click the Notes tab to view text that corresponds to the audio recording.
Click the Resources tab to download a PDF version of this eLearning.
Copyright 1996, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 EMC Corporation. All
Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to
change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES
OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC2, EMC, Data Domain, RSA, EMC Centera, EMC ControlCenter, EMC LifeLine, EMC OnCourse, EMC Proven, EMC Snap, EMC
SourceOne, EMC Storage Administrator, Acartus, Access Logix, AdvantEdge, AlphaStor, ApplicationXtender, ArchiveXtender, Atmos,
Authentica, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Captiva, Catalog Solution,
C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, ClaimPack, ClaimsEditor, CLARiiON, ClientPak, Codebook
Correlation Technology, Common Information Model, Configuration Intelligence, Configuresoft, Connectrix, CopyCross, CopyPoint, Dantz,
DatabaseXtender, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, eInput, E-Lab,
EmailXaminer, EmailXtender, Enginuity, eRoom, Event Explorer, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File
Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express,
Invista, Ionix, ISIS, Max Retriever, MediaStor, MirrorView, Navisphere, NetWorker, nLayers, OnAlert, OpenScale, PixTools, Powerlink,
PowerPath, PowerSnap, QuickScan, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor,
SAN Copy, SAN Manager, Smarts, SnapImage, SnapSure, SnapView, SRDF, StorageScope, SupportMate, SymmAPI, SymmEnabler,
Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, UltraFlex, UltraPoint, UltraScale, Unisphere, VMAX, Vblock, Viewlets, Virtual
Matrix, Virtual Matrix Architecture, Virtual Provisioning, VisualSAN, VisualSRM, Voyence, VPLEX, VSAM-Assist, WebXtender, xPression,
xPresso, YottaYotta, the EMC logo, and where information lives, are registered trademarks or trademarks of EMC Corporation in the
United States and other countries.
All other trademarks used herein are the property of their respective owners.
Copyright 2013 EMC Corporation. All rights reserved. Published in the USA.
Revision Date: 10/2014
Revision Number: MR-1WP-VNXRPSFD.5.33.x
The subject, target audience and objectives of this training are explained in this slide.
This course covers the VNX Remote Protection Suite solutions. It includes an overview
of VNX MirrorView, VNX Replicator, and RecoverPoint/SE Remote Protection
architecture, principles, features, and functionality.
Here we see all of the software suites available for the VNX series array. These various
suites each contain a unique set of solutions to improve efficiency by simplifying and
automating many storage tasks.
This training will focus on the Remote Protection Suite. The Remote Protection Suite is
used for protecting and repurposing data by creating remote file and block replicas.
The Remote Protection Suite is part of the Total Protection Pack and the Total
Efficiency Pack and offers protection against localized failures, outages, and disasters.
It delivers disaster recovery protection for any host and any application on block-based or file-based storage, without compromise, and with immediate DVR-like rollback to a point in time. Capabilities include compression and deduplication for WAN
bandwidth reduction and application-specific recovery point objectives. Replication
options are available for many-to-one fan-in replication for centralized backup operations, as well as one-to-many disaster recovery configurations. The Remote Protection Suite includes VNX MirrorView/Asynchronous and MirrorView/Synchronous, VNX Replicator, and RecoverPoint/SE Remote Protection.
This module focuses on the EMC VNX MirrorView block-based replication features.
During this module we will discuss the business uses for the MirrorView/Synchronous and MirrorView/Asynchronous products and their architecture, features, and functionality.
This module will also discuss the VNX MirrorView interoperability with peripheral
products.
This lesson covers the VNX MirrorView key terminology, operating principles and
configurations. This lesson will also describe the VNX MirrorView in business
environments.
EMC VNX MirrorView software offers two storage system-based remote mirroring
products: MirrorView/Synchronous and MirrorView/Asynchronous. These solutions
provide end-to-end data protection by replicating the contents of a primary volume to
a secondary volume that resides on a different VNX storage system. MirrorView/S
offers a zero data loss option, while MirrorView/A offers an alternative when minutes
of data loss may be tolerable.
Common MirrorView terms and their definitions are listed in this table. Please take a
moment to pause the training and read them.
This slide details the requirements and considerations for configuring MirrorView on a
VNX array. MirrorView allows for a large variety of topologies and configurations. In all
configurations, the primary and secondary images must have the same server-visible capacity (user capacity), because they are allowed to reverse roles for failover and failback; they do not, however, need to have the same RAID configuration. In order to use MirrorView, the software needs to be loaded on both arrays. Secondary LUNs will not be accessible to hosts while mirroring is active. Bi-directional mirroring is fully supported
as long as the primary and secondary images within any mirror reside on different
storage systems.
MirrorView can also be used to consolidate or fan-in information on one remote VNX
Array for consolidated backups, simplified failover, or consolidated remote processing
activities.
The Administrator can mirror up to four source VNX Arrays to a single VNX Array
target system. The source systems and target systems can be in any location.
The 4-to-1 fan-in ratio applies to both MirrorView/S and MirrorView/A.
Fan-out mirroring may be used to replicate data from one primary LUN to up to two
secondary LUNs residing on different arrays.
Shown here is an example of MirrorView/S fan-out functionality. This enables the
administrator to synchronously mirror one primary image to two different secondary images or, in the case of MirrorView/A, one primary image to a single secondary image.
The secondary images are managed independently but must reside on separate VNX
Array storage systems. No secondary image can reside on the same storage system as
a primary image of the same mirror.
MirrorView uses dedicated ports on supported storage systems; the specific port used
on each Storage Processor depends on the VNX model, and whether FC or iSCSI is
being used for replication. All MirrorView traffic goes through one port of each
connection type (FC and/or iSCSI) per Storage Processor. For systems that have only
FC ports, MirrorView traffic goes through one FC port of each Storage Processor. For
FC/iSCSI systems, one FC port and one iSCSI port are available for MirrorView traffic.
A path must exist between the MirrorView ports of Storage Processor A of the primary
and Storage Processor A of the secondary system. The same relationship must be
established for Storage Processor B. The MirrorView port may be shared with host I/O
(though this may be undesirable in some environments).
Once an image has been mirrored, the image may be in one of three availability
states. The inactive mirrored state means that the Administrator has intentionally
stopped mirroring. An active status is considered a normal state where all I/Os are
allowed on the image. An attention state indicates that something has happened to
the mirrored image and action by an Administrator is required.
In terms of the mirrored image consistency and relationship with the source image,
MirrorView defines five data states. Out-of-sync means that a full synchronization is required. In-sync means that the primary and secondary contain identical data, while synchronizing means that a synchronization operation is in progress. A consistent state means that mirroring has been stopped and the write intent log or fracture log is needed to continue mirroring. Rolling back is the act of returning a primary to a predefined point in time.
A fracture stops MirrorView replication from the primary image to the secondary
mirror image. Administrative fractures are initiated by the user typically to suspend
replication, as opposed to a system fracture which is initiated by the MirrorView
software. A system fracture results from a communication failure between the primary and secondary systems.
With MirrorView/S, writes continue to the primary image but are not replicated to the
secondary during a fracture. Replication can resume when the user issues a
synchronize command.
With MirrorView/A, the current updates stop during a fracture, and no further updates
start until a synchronize request is issued. The last consistent copy remains in place on the secondary image until the mirror is updated again.
Support for Virtual Provisioning with MirrorView enables remote replication protection
of virtually provisioned LUNs. When mirroring a thin LUN to another thin LUN, only
consumed capacity is replicated between the storage systems. When mirroring from a
thin LUN to a classic or thick LUN, the thin LUN's host-visible capacity must be equal to the classic LUN's capacity or the thick LUN's user capacity.
MirrorView/S checks the space available to the secondary before adding a secondary
image to a mirror and before starting synchronization on an existing mirror; however
it is possible for other thin LUN activity in the same pool to consume pool capacity
while the synchronization is in progress. MirrorView/A does not perform any check of available pool space on the secondary array. If there is not enough space, it is treated like
a failed write and an administrative fracture occurs.
The mirroring in MirrorView/S is synchronous, meaning that every time a host writes
to the primary array, the secondary array mirrors the write before an
acknowledgement is returned to the host. MirrorView ensures that there is an exact
byte-for-byte copy at both the local VNX array and the remote VNX array. Since
MirrorView/S is storage-based software, no host CPU cycles are used. MirrorView/S
operates in the background, transparent to any hosts or applications. MirrorView/S is
fully integrated with snapshots, the array-based software that creates consistent
point-in-time copies. Those copies allow access to local or remote data. MirrorView/S
is managed from within Unisphere Management software. Remote replication is
supported for both Fibre Channel and iSCSI connections.
This slide shows the steps involved in a VNX Synchronous Mirroring operation. Once a
write I/O is received from the client into the primary array, the write I/O is also
transmitted to the secondary array for mirroring. Once the write I/O is cached by the
secondary array, an acknowledgement is sent to the primary array, and another
acknowledgement is sent back to the host. The host needs to wait for the
acknowledgement before sending more write I/Os to the primary array.
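To make the ordering concrete, here is a minimal conceptual sketch of the synchronous write path in Python. It is illustrative only; the class names (PrimaryArray, SecondaryArray) are hypothetical and do not correspond to any EMC API.

# Conceptual sketch of a MirrorView/S write cycle (not an EMC API).
# The host's write is acknowledged only after the secondary array
# has cached its copy of the data.

class SecondaryArray:
    def __init__(self):
        self.cache = {}

    def mirror_write(self, lba, data):
        self.cache[lba] = data          # write lands in secondary cache
        return "ACK"                    # acknowledge back to the primary

class PrimaryArray:
    def __init__(self, secondary):
        self.cache = {}
        self.secondary = secondary

    def host_write(self, lba, data):
        self.cache[lba] = data                        # 1. primary caches the write
        ack = self.secondary.mirror_write(lba, data)  # 2. write is mirrored
        assert ack == "ACK"                           # 3. secondary acknowledges
        return "ACK"                                  # 4. host is acknowledged last

primary = PrimaryArray(SecondaryArray())
primary.host_write(lba=100, data=b"payload")          # host blocks until step 4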
The fracture log is a bitmap held in the memory of the storage processor that owns
the primary image. The log indicates which physical areas of the primary have been
updated since communication was interrupted with the secondary image. The fracture
log is automatically invoked when communication with the secondary image of a
mirror is lost for any reason and the mirror becomes fractured. It could be initiated by
the administrator or system. It tracks changes on the primary image for as long as the
secondary image is unreachable. When the secondary LUN returns to service, the
secondary image must be synchronized with the primary. This is done by reading
those areas of the primary addressed by the fracture log and writing them to the
secondary image. This occurs in parallel with any new writes coming into the primary and being mirrored to the secondary.
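Conceptually, the fracture log behaves like a bitmap over fixed-size extents of the primary image. The Python sketch below is a simplified illustration assuming an arbitrary 128 KB extent size; the names and granularity are hypothetical, not taken from the product.

# Conceptual fracture-log sketch: a bitmap marking which extents of the
# primary image changed while the secondary was unreachable.
EXTENT_SIZE = 128 * 1024  # assumed extent granularity, for illustration only

class FractureLog:
    def __init__(self, lun_size):
        # One flag per extent of the primary image.
        self.dirty = [False] * ((lun_size + EXTENT_SIZE - 1) // EXTENT_SIZE)

    def record_write(self, offset, length):
        # Mark every extent touched while the mirror is fractured.
        first = offset // EXTENT_SIZE
        last = (offset + length - 1) // EXTENT_SIZE
        for extent in range(first, last + 1):
            self.dirty[extent] = True

    def extents_to_resync(self):
        # When the secondary returns, only these extents are copied.
        return [i for i, flagged in enumerate(self.dirty) if flagged]

log = FractureLog(lun_size=10 * 1024 * 1024)
log.record_write(offset=0, length=300 * 1024)    # touches extents 0-2
print(log.extents_to_resync())                   # [0, 1, 2]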
The Write Intent Log keeps track of writes that have not yet been made to the remote
image for the mirror. A record of recent changes to the primary image is stored in
persistent memory on a private LUN reserved for the mirroring software. If the
primary storage system fails, the optional write intent log can be used to quickly
synchronize the secondary image, when the primary storage system becomes
available. This eliminates the need for full synchronization of the secondary images,
which can be a lengthy process on very large LUNs.
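The write intent log can be pictured as a persistent set of regions whose writes may not yet have reached both images. The Python sketch below is a simplified illustration under that assumption; it is not EMC code.

# Conceptual write-intent-log sketch (not an EMC implementation).
# A region is recorded persistently before its write proceeds; once the
# write reaches both images, the entry can be cleared. After a failure,
# only regions still present in the log need to be recopied.

class WriteIntentLog:
    def __init__(self):
        self.pending = set()   # regions with writes possibly not yet mirrored

    def mark(self, region):
        self.pending.add(region)        # persisted before the write proceeds

    def clear(self, region):
        self.pending.discard(region)    # cleared once both images hold the data

    def regions_needing_resync(self):
        return sorted(self.pending)     # partial resync instead of a full copy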
MirrorView/A uses SnapView change tracking technology to create Delta Sets for
incremental updates. These updates are tracked at 2 KB granularity. It replicates only
the changed blocks during the MirrorView/A update cycle and transfers the changed
blocks at 2-64 KB granularity. Provisioning of adequate space in the Reserved LUN
Pool on the primary and secondary storage systems is vital to the successful operation
of MirrorView/A. The exact amount of space needed may be determined in the same
manner as the required space for SnapView is calculated. The delta set is sent to the secondary periodically, at an update frequency defined by the customer. The secondary
array is automatically protected via a point-in-time gold copy during the update cycle.
This ensures a consistent, usable remote copy and protection against partial-update
scenarios.
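The update cycle can be summarized with a small conceptual sketch: writes are tracked at a fixed granularity, and each cycle ships only the changed blocks as a delta set. The Python below is illustrative only; the class name AsyncMirror is hypothetical.

# Conceptual MirrorView/A update-cycle sketch (illustrative only).
TRACK_GRANULARITY = 2 * 1024   # 2 KB tracking granularity, per the text

class AsyncMirror:
    def __init__(self):
        self.changed_blocks = set()

    def track_write(self, offset, length):
        first = offset // TRACK_GRANULARITY
        last = (offset + length - 1) // TRACK_GRANULARITY
        self.changed_blocks.update(range(first, last + 1))

    def build_delta_set(self):
        # The delta set contains only blocks changed since the last cycle.
        delta = sorted(self.changed_blocks)
        self.changed_blocks.clear()
        return delta

mirror = AsyncMirror()
mirror.track_write(offset=4096, length=6000)
print(mirror.build_delta_set())   # blocks to replicate in this cycle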
This module covered the VNX MirrorView data replication solution, operating
principles, terms, functionality and capabilities.
This module focuses on the EMC VNX Replicator file-based replication features. During
this module, we will discuss the business uses for the VNX Replicator product and its
architecture, features and functionality. This module will also discuss the VNX
Replicator interoperability with a peripheral product.
This lesson covers the VNX Replicator architecture, theory of operation and
terminology. This lesson will also cover the VNX Replicator in business environments.
VNX Replicator is an IP-based replication solution that produces a read-only, point-in-time copy of a source or production system. The VNX Replication service periodically
updates this copy, making it consistent with the production file system. Replicator
uses internal checkpoints to ensure availability of the most recent point-in-time copy.
These internal checkpoints are based on VNX SnapSure technology. This read-only
replica can be used by a Data Mover X-Blade in the same VNX cabinet or a Data Mover X-Blade at a remote site for content distribution, backup, and application
testing. When a replication session is first started, a full backup is performed. After
initial synchronization, Replicator only sends changed data over IP.
This table shows the definitions of Replicator terms. Please take a moment to review
them.
The Data Mover interconnect defines the communications path between a given pair of
Data Mover X-Blades located on the same cabinet or different cabinets. Before
creating a replication session, both sides of a Data Mover interconnect must be
established to ensure communication between the Data Mover X-Blade pair that
represents the source and destination. Interconnect for remote replication is created
between a local Data Mover X-Blade and a remote Data Mover X-Blade on another
system. The Administrator must first create the interconnect on the local side and
then on the destination side, before successfully creating a remote replication session.
Each Data Mover X-Blade, by default, has a loopback interconnect which cannot be
removed or modified. The loopback interconnect is used for loopback replication
sessions. Only one interconnect can be established between a given Data Mover X-Blade pair. Each physical link can have multiple IP addresses. An interconnect cannot
be deleted if it is used by a replication or copy session.
Replicator uses internal checkpoints for file system replication. These checkpoints are
point-in-time views of a file system that are not a copy or a mirror image of the
original file system. Rather, the checkpoint file system is a logical presentation of
what the production file system looked like at a particular time. The checkpoint is a
read-only view of the production file system prior to changes at that particular time.
Each live file system with a checkpoint has an associated save volume, or SavVol. The
first change made to each file system data block following a checkpoint triggers the
software to copy that data block to the SavVol. The SavVol also holds changes made to writeable checkpoints, as well as the data for Replicator checkpoints.
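A minimal sketch of this copy-on-first-write behavior is shown below, assuming a simple in-memory model; the class name CheckpointedFS is hypothetical, and the sketch ignores SavVol sizing and management.

# Conceptual copy-on-first-write sketch for a SnapSure-style checkpoint.
# The first change to a block after the checkpoint copies the original block
# to the SavVol; reading the checkpoint returns the SavVol copy for changed
# blocks and the live block otherwise.

class CheckpointedFS:
    def __init__(self, blocks):
        self.live = dict(blocks)   # production file system blocks
        self.savvol = {}           # original blocks saved on first change

    def write(self, block_id, data):
        if block_id not in self.savvol:             # first change since checkpoint
            self.savvol[block_id] = self.live[block_id]
        self.live[block_id] = data

    def read_checkpoint(self, block_id):
        # Checkpoint view: old data if the block changed, live data otherwise.
        return self.savvol.get(block_id, self.live[block_id])

fs = CheckpointedFS({"b0": "original data"})
fs.write("b0", "new data")
print(fs.read_checkpoint("b0"))   # "original data": the point-in-time view
print(fs.live["b0"])              # "new data": the production view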
Loopback replication of a source object occurs on the same Data Mover X-Blade in the
cabinet. In other words, the source Data Mover X-Blade and destination Data Mover X-Blade are the same. Communication is established by using the Data Mover X-Blade
loopback interconnect, established automatically for each Data Mover X-Blade in the
cabinet. The internal loopback of 127.0.0.1 is used in communicating control signals
back and forth. Since an internal IP address is used, there is no need to involve the
network stack for data transfer.
Local replication occurs between two Data Mover X-Blades in the same VNX cabinet.
Both Data Mover X-Blades must be configured to communicate with one another by
using a Data Mover interconnect. After communication is established, a local
replication session can be set up to produce a read-only copy of the source object for
use by a different Data Mover X-Blade in the same VNX cabinet. The source and
destination file systems are stored on separate volumes.
Remote replication occurs between a local Data Mover X-Blade and a Data Mover X-Blade on a remote VNX. Both VNXs must be configured to communicate with one
another by using a common passphrase, and a Data Mover interconnect. After
communication is established, a remote replication session can be set up to create and
periodically update a source object at a remote destination site. The initial copy of the
source file system can either be done over an IP network or by using the tape
transport method. After the initial copy, replication transfers changes made to the
local source object to a remote destination object over the IP network. These transfers
are automatic and are based on definable replication session properties and update
policy.
After the initial full copy has been completed, the destination object now contains data
from the source object at the time that the Checkpoint was originally taken. The
amount of time taken to complete the initial full copy depends on the bandwidth of the
Network and the size of the source object. In this example, it took 30 minutes to
transfer the initial full copy. Once the full copy has been completed, Replicator
refreshes the destination object's first checkpoint so it contains the same view as
the first checkpoint from the source object. Now the common base for both source and
destination is Checkpoint 1.
Once a certain amount of time has passed, depending on the update policy, Replicator
needs to transfer over any changes that were made on the source since the last
checkpoint was taken. To do this, Replicator refreshes Checkpoint 2 of the production
data and compares this checkpoint to the common base Checkpoint 1. Any changes
are then sent over to the destination. Once the transfer is complete, the second
Checkpoint on the destination is refreshed. Now, Checkpoint 2 on both source and
destination will be the new common base. This cycle continues until the replication
session is stopped or deleted.
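The refresh cycle can be sketched as a differential comparison against the common base, as in the hypothetical Python below; only blocks that differ from the common base are transferred, after which the new checkpoint becomes the common base on both sides.

# Conceptual sketch of Replicator's differential update cycle (illustrative only).

def diff(base, current):
    """Blocks that changed between the common base and the new checkpoint."""
    return {blk: data for blk, data in current.items() if base.get(blk) != data}

def update_cycle(source_base, source_now, destination):
    changes = diff(source_base, source_now)   # compute the delta
    destination.update(changes)               # ship only changed blocks
    return dict(source_now)                   # new common base after refresh

base = {"b0": "A", "b1": "B"}
now = {"b0": "A", "b1": "B2", "b2": "C"}
dest = dict(base)
base = update_cycle(base, now, dest)
print(dest == now)   # True: destination now matches the new common base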
VNX Replicator can be managed by Secure CLI or Unisphere with the VNX Replicator
license enabled.
This lesson covers the VNX Replicator features and functionality. The disaster recovery
options, policies and configuration will be discussed. This lesson will also describe the
interoperability between the VNX Replicator and a peripheral product.
A replication failover operation sets the replicated objects on the destination VNX to
Read/Write so that data access can resume. The failover operation is performed only
on the destination VNX. The execution of the failover operation is asynchronous and
results in data loss if all the data is not transferred to the destination site prior to
issuing the failover. The failover process starts when the source side of replication
becomes unavailable and the normal replication process stops. This could be due to a
disaster at the source side of replication, a power outage, or a network outage. Next,
the replication failover command is issued to the destination VNX. The replicated
objects on the destination VNX are changed from being mounted read-only to
read/write. If the source VNX becomes available its replicated objects is mounted
read-only. Data access resumes from the destination side VNX. Its data is consistent
with the last successful data transfer from the source.
The Administrator can perform a reverse operation from the source side without data
loss. This operation reverses the direction of the replication session thereby making
the destination read/write and the source read-only. First, the destination object is
synchronized with the source object. Next, the source object is mounted read-only and
replication is stopped. The destination object is mounted with read/write access. Then,
replication is started in the reverse direction from a differential copy and with the
same configuration parameters.
VNX Replicator includes support for failover and reverse, as well as Virtual Data Mover
(Or VDM) support. Combining these features provides an asynchronous data recovery
solution for CIFS servers and CIFS file systems mounted on a VDM. In a CIFS
environment, in order to successfully access file systems on a remote secondary site,
the Administrator must replicate the entire CIFS working environment including local
groups, user mapping information, Kerberos, DNS, shares and event logs. All that
information is copied over via the VDM. The Administrator must replicate the
production file system's attributes, access the file system through the same UNC path, and find the previous CIFS server's attributes on the secondary file system. A CIFS
Data Recovery solution with VNX Replicator is possible if the CIFS servers are
configured on VDMs. All the CIFS information and configuration is replicated with that
VDM. Because of this, clients continue to access CIFS servers in the event of a failover
from the primary site to the secondary site.
Replicator has policies to control how often the destination object is refreshed by using
the max_time_out_of_sync setting. The Administrator can define a
max_time_out_of_sync value for a replication session or perform on-demand, manual
updates. The max_time_out_of_sync value represents the elapsed time window within
which the system attempts to keep the data on the destination synchronized with the
data on the source. The destination could be updated sooner than this value.
Replicator also allows the administrator to throttle bandwidth by specifying
bandwidth limits on specific days, specific hours, or both. By default, Replicator uses
all available bandwidth unless a bandwidth schedule is configured.
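The two policies can be illustrated with a small sketch: one check decides whether an update is due under max_time_out_of_sync, and a lookup table models a bandwidth schedule. The Python below is conceptual; the schedule format and values are hypothetical, not the product's configuration syntax.

# Conceptual sketch of the max_time_out_of_sync policy and a bandwidth schedule.
from datetime import datetime, timedelta

MAX_TIME_OUT_OF_SYNC = timedelta(minutes=10)     # example policy value

def update_due(last_successful_transfer, now=None):
    """True when the destination risks falling outside the policy window."""
    now = now or datetime.now()
    return now - last_successful_transfer >= MAX_TIME_OUT_OF_SYNC

# Bandwidth limits in KB/s by (day-of-week, hour); None means use all bandwidth.
bandwidth_schedule = {("Mon", h): 2048 for h in range(8, 18)}   # business hours

def bandwidth_limit(now=None):
    now = now or datetime.now()
    key = (now.strftime("%a"), now.hour)
    return bandwidth_schedule.get(key)   # None -> no throttle outside the schedule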
This slide shows a basic cascade configuration for file system replication and also
using the incremental attach feature. The first session has two checkpoints for the
source and two for the destination object. For the second replication session, two
more checkpoints are created for the new source object, formerly the destination in
session one, and two more are created for the second destination object. The data on
the session two destination object can be quite different depending on the time-out-of-sync values for both sessions.
To use the incremental attach feature, a set of checkpoints are manually created by
the administrator. First a checkpoint is created of the Source file system and the
Destination file system on the first leg of the cascade. The incremental attach option is
used to refresh the session. Next, a checkpoint of the final destination's file system is manually created and, along with the first destination's checkpoint, is used to refresh
the replication session using the incremental attach option. This sets the checkpoints
as the common base for all the sessions. If the middle storage system is unavailable
due to disaster or hardware malfunction, the original source can directly replicate to
the final destination with just an incremental copy because of the common base that
was formed.
This module covered the VNX Replicator solution, its operation principles, terms and
functionality.
This lesson covers the RecoverPoint overview, key terminology, theory of operation
and volumes. This lesson will also discuss the RecoverPoint in business environments.
The right remote-replication solution can limit the exposure to planned and unplanned
downtime, enabling non-stop operation. Perhaps the Administrator also needs to
provide the organization with efficient data replication to meet corporate or
governmental standards, while still meeting the total-cost-of-ownership requirements.
In addition, the Administrator needs a flexible solution that changes as the needs change.
No matter what the challenge is, there is one underlying theme: data protection and
faster business restart in the event of a disaster or unplanned outage are critical
across the organization.
Using replication software to maintain a complete copy of the data in a remote
location allows for business continuity and increased functionality. When disaster
recovery is required, remote replication software and remote clusters ensure that all
the mission-critical information has been captured. Using replicas rather than tape
means the production can be up and running in hours as opposed to days. In addition,
the second cluster can be used for application testing and remote backups. Replication
enables non-stop operation with full access to the production data and the replicas.
This table shows the definitions of RecoverPoint terms. Please take a moment to review
them.
RecoverPoint uses software running on the CPUs of the arrays to perform Write-Splitting. This software copies all incoming writes for volumes protected with
RecoverPoint. A copy is written to the production volume and a copy is sent to the
RecoverPoint appliance. Previous versions of RecoverPoint used Write-Splitters located
on the Host or SAN. RecoverPoint 4.0 and above only uses the array-based version of
Write-Splitters.
The VNX splitter runs in each storage processor of a VNX and splits (mirrors) all
writes to a VNX volume, sending one copy to the original target and the other copy to
the RecoverPoint appliance. Both RecoverPoint and RecoverPoint/SE support the VNX
splitter. The VNX splitter is supported on both iSCSI and FC attached volumes.
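A write splitter can be pictured as a thin layer that sends every write to both the production volume and the RPA. The Python sketch below is a conceptual illustration only; the class names are hypothetical.

# Conceptual write-splitter sketch (illustrative only).

class ProductionVolume:
    def __init__(self):
        self.blocks = {}
    def write(self, offset, data):
        self.blocks[offset] = data

class RPA:
    def __init__(self):
        self.incoming = []
    def receive(self, offset, data):
        self.incoming.append((offset, data))   # later journaled and distributed

class Splitter:
    def __init__(self, volume, rpa):
        self.volume, self.rpa = volume, rpa
    def write(self, offset, data):
        self.volume.write(offset, data)   # copy 1: the original target
        self.rpa.receive(offset, data)    # copy 2: the RecoverPoint appliance

splitter = Splitter(ProductionVolume(), RPA())
splitter.write(0, b"data")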
Replication volumes or Replicas are the production storage volumes and their
matching target volumes which are used during replication.
Target volumes must be the same size as (or larger than) the source volumes. Any excess capacity (above the source volume's size) will not be replicated or visible to the host.
This is an important design consideration for heterogeneous storage environments.
Journal volumes hold snapshots of data to be replicated. The type and amount of
information contained in the journal differs according to the journal type. Each Journal
volume holds as many point in time images as its capacity allows, after which the
oldest image is removed to make space for the newest. Journals consist of one or more
volumes presented to all the RPAs for the cluster. Space can be added, to allow a
longer history to be stored, without affecting replication. The size of a Journal volume
is based on several factors:
The change rate of the data being protected.
The amount of time between point in time images (could be as small as each write).
The number of point in time images that are kept.
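As a rough illustration of how these factors interact, the sketch below multiplies the change rate by the desired history window and adds headroom. This is a simplified, hypothetical calculation, not EMC's official journal sizing guidance.

# Rough journal-sizing arithmetic (illustrative only): required capacity grows
# with the change rate and with how much history must be retained, plus headroom.

def journal_size_gb(change_rate_mb_per_s, history_hours, overhead_factor=1.2):
    changed_gb = change_rate_mb_per_s * 3600 * history_hours / 1024
    return changed_gb * overhead_factor

# Example: 5 MB/s of writes, 24 hours of point-in-time history.
print(round(journal_size_gb(5, 24), 1), "GB")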
This lesson covers the RecoverPoint/SE Remote Protection features, disaster recovery
and the Virtual Provisioning support.
Here is a view of a RecoverPoint failback operation. First, the production site recovers
from the disaster. The LUNs at the production site are changed to read-only and
designated as destinations so that no servers on the production site can access them.
Then, RecoverPoint uses compressed differential snapshots to replicate any changes
and sends these to the destination at the production site. Finally, after the sites are
re-synchronized, a failback reverses the sources and destinations back to their original
configuration.
Consistency groups define protection for a set of volumes. If two data sets are
dependent on one another (such as a database and a database log), they should be
part of the same consistency group. Consistency groups maintain write order between
the data sets, and they hold the settings and policies for data protection. Examples of these
parameters are: compression, bandwidth limits, and maximum lag.
Imagine a motion picture film. The video frames are saved on one volume, the audio
on another. Neither volume will make sense without the other. The saves must be
coordinated so that they will always be consistent with one another. In other words,
the volumes must be replicated together in one consistency group to guarantee that at
any point in time, the saved data will represent a true state of the film. The
consistency group ensures that updates to the production volumes are also written to
the copies in consistent and correct write-order so the copy can always be used to
continue working from, or to restore the production source.
This module covered the RecoverPoint/SE Remote Protection data replication solution,
operation principles, terms and functionality.