Copyright ©2016 EMC Corporation. All Rights Reserved. Published in the USA. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. The trademarks, logos, and service marks (collectively "Trademarks") appearing in this publication are the property of EMC Corporation and other parties. Nothing contained in this publication should be construed as granting any license or right to use any Trademark without the prior written permission of the party that owns the Trademark.
EMC, EMC², AccessAnywhere, Access Logix, AdvantEdge, AlphaStor, AppSync, ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Bus-Tech, Captiva, Catalog Solution, C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, EMC CertTracker, CIO Connect, ClaimPack, ClaimsEditor, Claralert, CLARiiON, ClientPak, CloudArray, Codebook Correlation Technology, Common Information Model, Compuset, Compute Anywhere, Configuration Intelligence, Configuresoft, Connectrix, Constellation Computing, EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge, Data Protection Suite, Data Protection Advisor, DBClassify, DD Boost, Dantz, DatabaseXtender, Data Domain, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, DLS ECO, Document Sciences, Documentum, DR Anywhere, ECS, eInput, E-Lab, Elastic Cloud Storage, EmailXaminer, EmailXtender, EMC Centera, EMC ControlCenter, EMC LifeLine, EMCTV, Enginuity, EPFM, eRoom, Event Explorer, FAST, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, Illuminator, InfoArchive, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, ISIS, Kazeon, EMC LifeLine, Mainframe Appliance for Storage, Mainframe Data Library, Max Retriever, MCx, MediaStor, Metro, MetroPoint, MirrorView, Multi-Band Deduplication, Navisphere, Netstorage, NetWorker, nLayers, EMC OnCourse, OnAlert, OpenScale, Petrocloud, PixTools, Powerlink, PowerPath, PowerSnap, ProSphere, ProtectEverywhere, ProtectPoint, EMC Proven, EMC Proven Professional, QuickScan, RAPIDPath, EMC RecoverPoint, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, ScaleIO, Smarts, EMC Snap, SnapImage, SnapSure, SnapView, SourceOne, SRDF, EMC Storage Administrator, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, TwinStrata, UltraFlex, UltraPoint, UltraScale, Unisphere, Universal Data Consistency, Vblock, Velocity, Viewlets, ViPR, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, Virtualize Everything, Compromise Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence, VPLEX, VSAM-Assist, VSAM I/O PLUS, VSET, VSPEX, Watch4net, WebXtender, xPression, xPresso, Xtrem, XtremCache, XtremSF, XtremSW, XtremIO, YottaYotta, Zero-Friction Enterprise Storage.
Note: Snapshots are not full copies of the original data. Do not rely on snapshots as mirrors, or for disaster recovery or high availability. Because snapshots of storage resources are partially derived from the real-time data in the relevant storage resource, snapshots can become inaccessible (not readable) if the primary storage becomes inaccessible.
If the snapshot is writable, any writes are handled in a similar manner: stripe space is allocated from the parent pool, and the writes are redirected in 8 KB chunks to the new space. Reads of newly written data are also serviced from the new space.
Storage space is needed in the pool to support snapshots, as stripes are allocated for redirected writes. Because of the on-demand stripe allocation from the pool, snapped thick file systems take on the performance characteristics of thin file systems.
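The redirect-on-write behavior described above can be sketched as a small toy model. This is illustrative only; the class and attribute names are invented for this sketch and do not reflect Unity internals:

```python
# Toy model of redirect-on-write snapshot behavior (illustrative only;
# names and structures are invented, not Unity internals).
CHUNK = 8 * 1024  # writes are redirected in 8 KB chunks

class SnappedResource:
    def __init__(self, base_data: bytes):
        self.base = base_data              # shared point-in-time data
        self.redirects = {}                # chunk index -> new-space data
        self.pool_chunks_allocated = 0     # stripes drawn from the parent pool

    def write(self, offset: int, data: bytes) -> None:
        """Redirect each touched 8 KB chunk to newly allocated pool space."""
        first = offset // CHUNK
        last = (offset + len(data) - 1) // CHUNK
        for i in range(first, last + 1):
            if i not in self.redirects:
                # On-demand stripe allocation from the pool.
                start = i * CHUNK
                self.redirects[i] = bytearray(self.base[start:start + CHUNK])
                self.pool_chunks_allocated += 1
            chunk = self.redirects[i]
            lo = max(offset, i * CHUNK)
            hi = min(offset + len(data), (i + 1) * CHUNK)
            chunk[lo - i * CHUNK:hi - i * CHUNK] = data[lo - offset:hi - offset]

    def read(self, offset: int, length: int) -> bytes:
        """Newly written data is serviced from the new space; the rest
        still comes from the original location."""
        out = bytearray()
        for i in range(offset // CHUNK, (offset + length - 1) // CHUNK + 1):
            src = self.redirects.get(i, self.base[i * CHUNK:(i + 1) * CHUNK])
            lo = max(offset, i * CHUNK)
            hi = min(offset + length, (i + 1) * CHUNK)
            out += bytes(src[lo - i * CHUNK:hi - i * CHUNK])
        return bytes(out)
```

Note how a write touching only part of a chunk still allocates the whole 8 KB chunk from the pool, which is why pool space consumption grows with redirected writes.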
It is also possible to copy a snapshot. In this example, the 4 o’clock snapshot is copied; other than having a unique name, the copy is indistinguishable from the source snapshot, and both capture identical data states.
LUN snapshots and copies of LUN snapshots are writable objects, but they remain unmodified from their sources until they are attached to a host. In this example, a copy of the 4 o’clock snapshot is attached to a secondary host for access. At this point the snapshot data state is marked as modified from its source. A LUN snapshot tree supports one snapshot attach at a time; to access a different snapshot in the tree, the host must be detached from the current snapshot and attached to the desired one.
Any subsequent copy of the attached snapshot will capture the data state of the source
snapshot at the point-in-time of the copy operation.
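The copy, attach, and modified-state rules described above can be sketched as a toy model. The class and method names here are invented for illustration and are not the Unity API:

```python
# Toy model of LUN snapshot copy/attach semantics (illustrative only).
class Snap:
    def __init__(self, name, source=None):
        self.name = name
        self.source = source    # parent snapshot, if this is a copy
        self.modified = False   # does the data state differ from the source?

class SnapTree:
    """A LUN snapshot tree: supports at most one attached snapshot."""
    def __init__(self):
        self.snaps = {}
        self.attached = None

    def create(self, name):
        self.snaps[name] = Snap(name)
        return self.snaps[name]

    def copy(self, src_name, copy_name):
        # Until attached, a copy is indistinguishable from its source
        # apart from its unique name.
        self.snaps[copy_name] = Snap(copy_name, source=self.snaps[src_name])
        return self.snaps[copy_name]

    def attach(self, name):
        if self.attached is not None:
            raise RuntimeError("detach the current snapshot first")
        snap = self.snaps[name]
        snap.modified = True    # data state is now marked as modified
        self.attached = snap

    def detach(self):
        self.attached = None
```

The one-attach-per-tree rule is what forces the detach-then-attach sequence when a host needs to move between snapshots in the same tree.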
Snapshots of a file system can be created as either read-only or read/write, and the two are accessed in different manners, which will be covered later.
Copies of snapshots are always created as read/write snapshots. The read/write snapshots can be shared by creating an NFS export or SMB share on them. When shared, they are marked as modified to indicate that their data state differs from the parent object.
It is also possible to nest copied and shared snapshots, forming a hierarchy of snapshots up to a maximum of 10 levels deep.
For Block, the snapshot is created on a LUN or a group of LUNs in the case of a Consistency
Group. For File, the snapshot is configured on a file system. For VMware the storage
resource is either going to be a LUN for a VMFS datastore or a file system for an NFS
datastore. When creating each of these storage resources, the Unity system provides a creation wizard. Each wizard includes an option to automatically create snapshots on the storage resource, and the snapshot options are nearly identical across the resource types.
For storage resources that are already created, snapshots can be manually created from their Properties page. As with the wizard, snapshot creation from the Properties page is nearly identical across resource types. The following few slides show snapshot creation within the Block storage LUN creation wizard and the File storage file system creation wizard, as well as manual snapshot creation from the LUN and file system Properties pages.
Video demonstrations will be provided showing all forms of storage resource snapshot
creation.
The wizard contains a dropdown with three different system-defined schedules to select from for creating the LUN snapshots. Each schedule also includes a snapshot retention value.
A customized schedule can also be created for use. The scheduler provides the ability to configure snapshot frequency by the hour, day, or week, and a snapshot retention policy can also be defined.
Note: fields annotated with a red asterisk are required for the configuration.
Select the + icon to create a snapshot of the LUN. The snapshot must be configured with a name; by default, the system provides a name in year, month, day, hour, minute, second format, and customized names can also be configured. A Description field for the snapshot can optionally be filled in. One of three Retention Policies must be configured. The default is the Pool Automatic Deletion Policy, which automatically deletes the snapshot if pool space reaches a specified capacity threshold. Alternatively, a customized retention time can be configured to delete the snapshot on a specified calendar day and time. The other alternative is the No Automatic Deletion option, for snapshots that need to be kept for an undetermined amount of time.
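The three retention choices can be summarized as a small decision sketch. The function name and policy labels are invented for illustration; they are not Unity's implementation:

```python
# Sketch of the three snapshot retention policies (illustrative only;
# the function and policy names are invented, not the Unity API).
from datetime import datetime

def should_delete(policy: str, now: datetime, pool_used_pct: float = 0.0,
                  threshold_pct: float = 95.0, expires_at: datetime = None) -> bool:
    """Decide whether a snapshot is eligible for automatic deletion.

    policy: "pool_auto"      - delete when pool usage crosses a capacity
                               threshold (the default behavior),
            "retention_time" - delete at a configured calendar day and time,
            "no_auto"        - never delete automatically.
    """
    if policy == "pool_auto":
        return pool_used_pct >= threshold_pct
    if policy == "retention_time":
        return expires_at is not None and now >= expires_at
    if policy == "no_auto":
        return False
    raise ValueError(f"unknown policy: {policy}")
```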
The wizard contains a dropdown with three different system-defined schedules to select from for creating the file system snapshots. Each schedule also includes a snapshot retention value.
A customized schedule can also be created for use. The scheduler provides the ability to
configure a snapshot frequency by the hour, day or week. A snapshot retention policy can
also be defined.
As noted before, fields annotated with a red asterisk are required for the configuration.
Select the + icon to create a snapshot of the file system. The snapshot must be configured with a name; by default, the system provides a name based on the creation time in year, month, day, hour, minute, second format, and customized names can also be configured. A Description field for the snapshot can optionally be filled in. One of three Retention Policies must be configured. The default is the Pool Automatic Deletion Policy, which automatically deletes the snapshot if pool space reaches a specified capacity threshold. Alternatively, a customized retention time can be configured to delete the snapshot on a specified calendar day and time within a year of creation. The other alternative is the No Automatic Deletion option, for snapshots that need to be kept for an undetermined amount of time.
The Access Type section requires selecting one of two options: the snapshot is created as either read-only or read/write.
Additional user-defined schedules can be created as shown here. This example shows a schedule that creates snapshots at midnight on weekends and retains them for two weeks.
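The weekend-midnight schedule in this example can be sketched as follows; the function names are invented for illustration and are not part of any Unity interface:

```python
# Sketch of the example schedule: midnight snapshots on weekends,
# retained for two weeks (illustrative only).
from datetime import datetime, timedelta

def weekend_snapshot_times(start: datetime, days: int):
    """Yield midnight snapshot times on Saturdays and Sundays."""
    day = start.replace(hour=0, minute=0, second=0, microsecond=0)
    for n in range(days):
        d = day + timedelta(days=n)
        if d.weekday() in (5, 6):   # Saturday = 5, Sunday = 6
            yield d

def expiry(taken: datetime) -> datetime:
    """Each weekend snapshot is retained for two weeks."""
    return taken + timedelta(weeks=2)
```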
Before performing a restore operation, detach hosts attached to any of the LUN snapshots. Also ensure that all hosts have completed all read and write operations to the LUN you want to restore. Finally, disconnect any host accessing the LUN; this may require disabling the connection on the host side.
Now the Restore operation can be performed. From the 4 o’clock snapshot select the
Restore operation. The system will automatically create a snapshot of the current 5 o’clock
data state of the LUN to capture its current data state before the restoration operation
begins. The LUN is restored to the 4 o’clock data state of the snapshot.
The hosts can now be reconnected to the resources they were connected to prior to the
restore and resume normal operations.
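The restore sequence above, including the automatic safety snapshot of the current data state, can be sketched like this. The function and labels are invented for illustration, not the Unity API:

```python
# Sketch of the LUN restore flow: the system first snapshots the current
# data state, then rolls the LUN back (illustrative only).
def restore(lun: dict, snapshots: dict, restore_point: str, now_label: str) -> dict:
    """Restore `lun` to `restore_point`, preserving the current state first."""
    # Automatic snapshot of the current (e.g. 5 o'clock) data state.
    snapshots[now_label] = lun["data_state"]
    # Roll the LUN back to the chosen (e.g. 4 o'clock) snapshot.
    lun["data_state"] = snapshots[restore_point]
    return lun
```

The automatically created snapshot means the pre-restore state is never lost: if the restore turns out to be a mistake, the LUN can be restored again from that snapshot.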
Before performing an Attach operation, hosts must be configured for snapshot access to
the LUN. There are two possible settings: Snapshot access or LUN and Snapshot access.
The Snapshot access setting provides a secondary host read/write access to all the
snapshots associated with the LUN, but no access to the LUN itself. A host can access and
recover data from snapshots associated with the LUN but has no access to the data on the
LUN itself. The LUN and Snapshot access setting provides a primary host read/write access
to both the LUN itself and all of the snapshots associated with the storage resource. A host
can use the LUN data and can recover data from snapshots associated with the LUN.
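The difference between the two access settings can be summarized in a short check; the setting labels here are shorthand invented for this sketch:

```python
# Sketch of the two host access settings for snapshot access
# (illustrative only; labels are shorthand, not the Unity API).
def can_access(setting: str, target: str) -> bool:
    """target is "lun" or "snapshot"."""
    if setting == "snapshot":           # "Snapshot access" setting
        return target == "snapshot"     # snapshots only, never the LUN itself
    if setting == "lun_and_snapshot":   # "LUN and Snapshot access" setting
        return target in ("lun", "snapshot")
    return False
```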
Now the Attach operation can be performed. From the 3 o’clock snapshot, select the Attach
to host operation. The system will optionally create a copy of the snapshot to preserve its
data state before the attach operation begins. The secondary host is attached to the 3
o’clock snapshot of the LUN.
The Attach to host operation is accessed from the Properties page of the storage resource.
Select its Snapshots tab then select the snapshot to attach to. The Attach to host operation
is accessed from the More Actions dropdown list. The system will optionally create a copy
of the snapshot to preserve its integrity.
Before performing a Detach operation, allow any outstanding read/write operations from the host attached to the snapshot to complete.
Now the Detach operation can be performed. From the 3 o’clock snapshot, select the
Detach from host operation. The secondary host is detached from the 3 o’clock snapshot of
the LUN.
Before performing a restore operation, disconnect clients from any of the file system
snapshots. Also quiesce IO to the file system being restored.
Now the Restore operation can be performed. From the 4 o’clock snapshot select the
Restore operation. The system will automatically create a snapshot of the current 5 o’clock
data state of the file system to capture its current data state before the restoration
operation begins. The file system is restored to the 4 o’clock data state of the snapshot.
The connections and IO to the resources can now be resumed for normal operations.
On the storage system, the host must be configured to access snapshots of the LUN. This task is done on the LUN Properties page from the Access tab. The host must have connectivity to the storage, either via Fibre Channel or iSCSI, and be registered. Next, from the Snapshots tab, select a snapshot and perform the Attach to host snapshot operation.
Tasks must then be performed from the host. The host needs to discover the disk device that the snapshot presents to it; once discovered, the host can access the snapshot as a disk device.
On the storage system, an NFS and/or SMB share must be configured on the read/write snapshot of the file system. This task is done from the respective NFS or SMB pages.
Tasks must then be performed from the client. The client needs to connect to the NFS/SMB share of the snapshot; once connected, the client can access the shared snapshot resource.
A short video follows that demonstrates client access to a file system snapshot.
The first task for an NFS client is to connect to an NFS share on the file system. Access to
the read-only snapshot is established by accessing the snapshot’s hidden .ckpt data path.
This path will redirect the client to the point-in-time view that the read-only snapshot
captures.
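The hidden-path convention can be sketched as a simple path mapping. The snapshot name shown here is hypothetical; actual entries depend on the system's snapshot naming:

```python
# Sketch of how a client-side path maps into the hidden .ckpt namespace
# (illustrative; the snapshot directory name is hypothetical).
import posixpath

def ckpt_path(mount_point: str, snapshot_name: str, rel_path: str) -> str:
    """Build the hidden .ckpt path for a file's point-in-time version."""
    return posixpath.join(mount_point, ".ckpt", snapshot_name, rel_path)
```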
Similarly, the first task for an SMB client is to connect to an SMB share on the file system.
Access to the read-only snapshot is established by the SMB client accessing the SMB
share’s Previous Versions tab. This will redirect the client to the point-in-time view that the
read-only snapshot captures.
Because the read-only snapshot is exposed to clients through the CVFS (Checkpoint Virtual File System) mechanism, clients can recover data directly from the snapshot without any administrator intervention. For example, if a user corrupted or deleted a file by mistake, that user could directly access the read-only snapshot, retrieve an earlier version of the file, and copy it back to the file system to recover it.
A short video follows that demonstrates client access to a file system read-only snapshot.