
Conceptually and operationally, SRDF is designed to work in a WAN/Internet/Cloud/SAN environment with multiple Symms involved, while TimeFinder is local to a single Symm but performs similar functions. The difference: SRDF can be performed without geographic boundaries, while TimeFinder is local. The following are the various forms of SRDF that a customer can use to perform SRDF operations.

Synchronous mode

With synchronous mode, the remote Symm must have the I/O in cache before the application receives the acknowledgement. Depending on the distance between the Symmetrix machines, this may have a significant impact on performance. This form of SRDF is typically recommended for a campus environment. If you want to ensure that data is replicated in real time, without dirty tracks, from one Symmetrix to the other, you may want to enable the Domino effect. With Domino enabled, your R1 devices become Not Ready if the R2 devices can't be reached.

Semi-synchronous mode

With semi-synchronous mode, the R1 and R2 devices can be out of sync by one write. The application receives the acknowledgement as soon as the first write I/O reaches local cache; the second I/O isn't acknowledged until the first is in the remote cache. From the host's point of view, this form of SRDF is faster than the synchronous mode described above.
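As a quick illustration, the mode and Domino settings described above are driven through the symrdf "set" operations covered later in this post. A minimal sketch, assuming a device group named prodgrp has already been created (the group name is a placeholder):

# symrdf -g prodgrp set mode sync (acknowledge writes only after they reach the remote cache)
# symrdf -g prodgrp set domino on (make the R1 devices Not Ready if the R2 side becomes unreachable)
# symrdf -g prodgrp set mode semi (switch to semi-synchronous behavior instead)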
Adaptive Copy-Write Pending

With Adaptive Copy-Write Pending, writes to the R2 volumes are copied over without delaying the acknowledgement to the application. With this mode, we can set up a skew parameter that defines the maximum number of dirty tracks allowed. Once that number is reached, the system switches to a preconfigured mode, such as semi-synchronous, until the remote data is fully synced; it then switches back to Adaptive Copy-Write Pending mode.

The following are the SRDF commands and what they are used for.

Composite SRDF commands

1. Failover:
   Actions:
   1. Write disables (WD) R1
   2. Sets the link to Not Ready (NR)
   3. Write enables R2
   Command: symrdf -g ${group} failover

2. Update: Helps speed up the failback operation by copying invalid tracks before write disabling any disks.
   Actions:
   1. Leaves the service state as is
   2. Merges the track tables
   3. Copies invalid tracks
   Command: symrdf -g ${group} update

3. Failback:
   Actions:
   1. Write disables R2
   2. Suspends the RDF link
   3. Merges the track tables
   4. Resumes the link
   5. Write enables R1
   6. Copies the changed data
   Command: symrdf -g ${group} failback

4. Split: Leaves both R1 and R2 in a write-enabled state.
   Actions:
   1. Suspends the RDF link
   2. Write enables R2
   Command: symrdf -g ${group} split

5. Establish:
   Actions:
   1. Write disables R2
   2. Suspends the RDF link
   3. Copies data from R1 to R2
   4. Resumes the RDF link
   Command: symrdf -g ${group} [ -full ] establish

6. Restore: Copies data from R2 to R1.
   Actions:
   1. Write disables both R1 and R2
   2. Suspends the RDF link
   3. Merges the track tables
   4. Resumes the RDF link
   5. Write enables R1
   Command: symrdf -g ${group} [ -full ] restore

Singular SRDF commands

1. Suspend: symrdf -g ${group} suspend
2. Resume: symrdf -g ${group} resume
3. Set mode:
   symrdf -g ${group} set mode sync
   symrdf -g ${group} set domino on
   symrdf -g ${group} set acp_disk skew 1000
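To put the composite commands in context, here is a sketch of how they are typically strung together for a planned swap to the remote site and back. The device group name prodgrp is a placeholder:

# symrdf -g prodgrp query (confirm R1 and R2 are synchronized before starting)
# symrdf -g prodgrp failover (write disable R1, write enable R2 at the remote site)
(run production against the R2 devices for the duration of the outage)
# symrdf -g prodgrp update (optionally pre-copy changed tracks back toward R1 to shorten the failback)
# symrdf -g prodgrp failback (return production to the R1 side)
# symrdf -g prodgrp query (verify the pair state)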

This blog talks about setting up basic SRDF-related functionality on the Symmetrix / DMX machines using EMC Solutions Enabler SYMCLI. For this setup, let's have two different hosts: our local host will see the R1 (source) volumes and our remote host will see the R2 (target) volumes. A mix of R1 and R2 volumes can reside on the same Symmetrix; in short, you can configure SRDF between two Symmetrix machines so that each acts as local for some volumes and remote for others, and vice versa.

Step 1: Create SYMCLI Device Groups.

Each group can have one or more Symmetrix devices specified in it. SYMCLI device group information (name of the group, type, members, and any associations) is maintained in the SYMAPI database. In the following we will create a device group that includes two SRDF volumes. SRDF operations can be performed from the local host that has access to the source volumes or from the remote host that has access to the target volumes. Therefore, both hosts should have device groups defined. Complete the following steps on both the local and remote hosts.

a) Identify the SRDF source and target volumes available to your assigned hosts. Execute the following commands on both the local and remote hosts.
# symrdf list pd (execute on both local and remote hosts)
or
# syminq

b) To view all the RDF volumes configured in the Symmetrix, use the following:
# symrdf list dev

c) Display a synopsis of the symdg command and reference it in the following steps.
# symdg -h

d) List all device groups that are currently defined.
# symdg list

e) On the local host, create a device group of type RDF1. On the remote host, create a device group of type RDF2.
# symdg -type RDF1 create newsrcdg (on local host)
# symdg -type RDF2 create newtgtdg (on remote host)

f) Verify that your device group was added to the SYMAPI database on both the local and remote hosts.
# symdg list

g) Add your two devices to your device group using the symld command. Again, use -h for a synopsis of the command syntax.
On local host:
# symld -h
# symld -g newsrcdg add dev ###
or
# symld -g newsrcdg add pd Physicaldrive#
On remote host:
# symld -g newtgtdg add dev ###
or
# symld -g newtgtdg add pd Physicaldrive#

h) Using the syminq command, identify the gatekeeper devices. Determine whether a gatekeeper is currently defined in the SYMAPI database; if not, define it and associate it with your device group.
On local host:
# syminq
# symgate list (check SYMAPI)
# symgate define pd Physicaldrive# (to define)
# symgate -g newsrcdg associate pd Physicaldrive# (to associate)
On remote host:
# syminq
# symgate list (check SYMAPI)
# symgate define pd Physicaldrive# (to define)
# symgate -g newtgtdg associate pd Physicaldrive# (to associate)

i) Display your device groups. The output is verbose, so pipe it to more.
On local host:
# symdg show newsrcdg | more
On remote host:
# symdg show newtgtdg | more

j) Display a synopsis of the symld command.
# symld -h

k) Rename DEV001 to NEWVOL1.
On local host:
# symld -g newsrcdg rename DEV001 NEWVOL1
On remote host:
# symld -g newtgtdg rename DEV001 NEWVOL1

l) Display the device group on both the local and remote hosts.
On local host:
# symdg show newsrcdg | more
On remote host:
# symdg show newtgtdg | more

Step 2: Use the SYMCLI to display the status of the SRDF volumes in your device group.

a) If on the local host, check the status of your SRDF volumes using the following command:
# symrdf -g newsrcdg query

Step 3: Set the default device group.

You can use the environment variables option.
# set SYMCLI_DG=newsrcdg (on the local host)
# set SYMCLI_DG=newtgtdg (on the remote host)

a) Check the SYMCLI environment.
# symcli -def (on both the local and remote hosts)

b) Test whether the SYMCLI_DG environment variable is working properly by performing a query without specifying the device group.
# symrdf query (on both the local and remote hosts)

Step 4: Change the operational mode.

The operational mode for a device or group of devices can be set dynamically with the symrdf set mode command.

a) On the local host, change the mode of operation for one of your SRDF volumes to enable semi-synchronous operation. Verify the results and change back to synchronous mode.
# symrdf set mode semi NEWVOL1
# symrdf query
# symrdf set mode sync NEWVOL1
# symrdf query

b) Change the mode of operation to enable adaptive copy-disk mode for all devices in the device group. Verify that the mode change occurred and then disable adaptive copy.
# symrdf set mode acp_disk
# symrdf query
# symrdf set mode acp_off
# symrdf query

Step 5: Check the communications link between the local and remote Symmetrix.

a) From the local host, verify that the remote Symmetrix is alive. If the host is attached to multiple Symmetrix systems, you may have to specify the Symmetrix serial number (SSN) through the -sid option.
# symrdf ping [ -sid xx ] (xx = last two digits of the remote SSN)

b) From the local host, display the status of the Remote Link Directors.
# symcfg -RA all list

c) From the local host, display the activity on the Remote Link Directors.
# symstat -RA all -i 10 -c 2

Step 6: Create a partition on each disk, format the partition, and assign a filesystem to the partition. Add data on the R1 volumes defined in the newsrcdg device group.

Step 7: Suspend the RDF link and add data to the filesystem.

In this step we will suspend the SRDF link, add data to the filesystem, and check for invalid tracks.

a) Check that the R1 and R2 volumes are fully synchronized.
# symrdf query

b) Suspend the link between the source and target volumes.
# symrdf suspend

c) Check the link status.
# symrdf query

d) Add data to the filesystems.

e) Check for invalid tracks using the following command:
# symrdf query

f) Invalid tracks can also be displayed using the symdev show command. Execute the following command against one of the devices in your device group and look at the mirror set information.
On the local host:
# symdev show ###

g) From the local host, resume the link and monitor invalid tracks.
# symrdf resume
# symrdf query
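To recap the lab end to end, the essential commands from the local (R1) host boil down to the short sequence below. This is only a sketch: the device number 0A1 and Physicaldrive2 are placeholders, so substitute the values reported by syminq / symrdf list on your own host.

# symdg -type RDF1 create newsrcdg (Step 1 e, create the RDF1 device group)
# symld -g newsrcdg add dev 0A1 (Step 1 g, 0A1 is a placeholder device number)
# symgate define pd Physicaldrive2 (Step 1 h, placeholder gatekeeper drive)
# symgate -g newsrcdg associate pd Physicaldrive2
# symrdf -g newsrcdg establish (add -full for a first-time synchronization)
# symrdf -g newsrcdg query (repeat until the pair state shows Synchronized)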

In upcoming blogs, we will set up some flags for SRDF, director types, etc. Happy SRDFing!!!!!
What is SRDF? SRDF stands for Symmetrix Remote Data Facility. The EMC SRDF family of replication software offers various levels of EMC Symmetrix-based business continuance and disaster recovery solutions. The SRDF products offer the capability to maintain multiple, host-independent copies of data. The EMC Symmetrix systems can be in the same room, in different buildings within the same campus, or hundreds to thousands of kilometers apart. By maintaining copies of data in different physical locations, SRDF enables you to perform the following operations with minimal impact on normal business processing:

Disaster restart
Disaster restart testing
Recovery from planned outages
Remote backup
Data center migration
Data replication and mobility

The SRDF family consists of three base solutions:

SRDF/Synchronous (SRDF/S): High-performance, host-independent, real-time synchronous remote replication from one Symmetrix to one or more Symmetrix systems.

SRDF/Asynchronous (SRDF/A): High-performance extended distance asynchronous replication using a Delta Set architecture for optimal bandwidth utilization and minimal host performance impact.

SRDF/Data Mobility (SRDF/DM): Rapid transfer of data from source volumes to remote volumes anywhere in the world, permitting information to be shared and content to be distributed, or information consolidated for parallel processing activities.

SRDF functional overview


EMC Symmetrix systems use local mirroring, or RAID 1, as one method of protecting data by maintaining data on both a production volume and a mirror volume within the same storage unit. SRDF uses a method of data protection known as remote mirroring. Remote mirroring is similar to local mirroring, except that the primary volume resides in one storage unit while its remote mirrors (up to two remote mirrors are supported with Concurrent SRDF) reside in a different storage unit. When the main storage systems are down for a planned or unplanned event, SRDF enables fast switchover from the primary data to the secondary copy.

The local SRDF device, known as the primary (source, R1) device, is configured in a pairing relationship with a secondary (target, R2) device, forming an SRDF pair. While the R2 device is mirrored with the R1 device, the R2 device is either write disabled or not ready to its host. (Not ready means disabled for both reads and writes.) After the R2 device becomes synchronized with its R1 device, you can halt the remote mirroring to the R2 device from the R1 device at any time, making the R2 device fully accessible again to its host.

After the remote mirroring is halted, the secondary (target, R2) device contains valid data and is available for performing business continuance tasks through its original host connection or restoring (copying) data to the primary (source, R1) device if there is a loss of data on that device.

Figure 1: Basic EMC SRDF configuration
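In SYMCLI terms (the command reference earlier in this post), the switchover and restore scenarios described above map onto the composite symrdf operations. A minimal sketch, assuming a device group named prodgrp containing the R1 devices (the group name is a placeholder):

# symrdf -g prodgrp query (confirm the R2 devices are synchronized with their R1 partners)
# symrdf -g prodgrp split (halt remote mirroring and write enable R2 for its own host)
# symrdf -g prodgrp establish (resume mirroring from R1 to R2 once the business continuance task is finished)
# symrdf -g prodgrp restore (or copy data from R2 back to R1 if the primary copy was lost)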

Symmetrix V-Max Systems: SRDF Enhancements and Performance

So this was one of those posts that I always wanted to write, related to Symmetrix V-Max and the SRDF enhancements that were incorporated with the 5874 microcode.

Yesterday morning I had a chat with a friend and ended up talking about SRDF, and later in the day I had another interesting conference call on SRDF with a potential customer. So I really thought today was the day I should go ahead and finish this post.

Back in April 2009, when the V-Max systems were initially launched, Storagezilla had a post on V-Max and SRDF features; he covers quite a bit of ground related to the groups and SRDF/EDP (Extended Distance Protection).

Here are the highlights of SRDF for V-Max systems.

SRDF Groups:
1. 250 SRDF groups with Symmetrix V-Max (5874) systems. The prior generation Symmetrix DMX-4 (5773) supported 128 groups. Logically, even with 2 PB of storage, customers very seldom hit that mark of 250 groups.
2. 64 SRDF groups per FC / GigE channel. The previous generation Symmetrix DMX-4 (5773) supported 32 groups per channel.

SRDF consistency support with 2 mirrors:
1. Each leg is placed in a separate consistency group so it can be changed separately without affecting the other.

Active SRDF sessions and addition/removal of devices:
1. Customers can now add or remove devices from a group without invalidating the entire group; once a device becomes fully synced, it is added to the consistency group. (With the previous generation Symmetrix DMX-4, adding or removing one device would invalidate the entire group, requiring customers to run a full establish again.)

SRDF invalid tracks:
1. The search for the "long tail" of the last few invalid tracks has been vastly improved; the search procedure and methods have been completely redesigned. It is a known fact with SRDF that the last invalid tracks take a long time to sync because of the cache search.
2. SRDF establish operations are at least 10x faster; see the performance numbers below.

TimeFinder/Clone & SRDF restores:
1. Customers can now restore Clones to R2s and R2s to R1s simultaneously; with the DMX-4 this was a 3-step process.

SRDF/EDP (Extended Distance Protection):
1. 3-way SRDF for long distance, with the secondary site acting as a pass-through site using Cascaded SRDF.
2. Between the primary and secondary sites customers can use SRDF/S; between the secondary and tertiary sites customers can use SRDF/A.
3. Diskless R21 pass-through device, where the data does not get stored on the drives or consume disk. The R21 data lives only in cache, so a host cannot access it; more cache is needed depending on the amount of data transferred.
4. R1 (S) > R21 (A) > R2 (production site > pass-through site > out-of-region site)
5. Primary (R1) sites can have DMX-3, DMX-4, or V-Max systems, and tertiary (R2) sites can have DMX-3, DMX-4, or V-Max systems, while the secondary (R21) site needs to have a V-Max system.

R22 dual secondary devices:
1. R22 devices can act as target devices for two R1 devices.
2. One source device at a time can perform read/write against an R22 device.
3. Improved RTO when the primary site goes down.

Other enhancements:
1. Dynamic Cache Partitioning enhancements
2. QoS for SRDF/S
3. Concurrent writes
4. Linear scaling of I/O
5. Response times equivalent across groups
6. Virtual Provisioning supported with SRDF
7. SRDF supports linking a Virtual Provisioned device to another Virtual Provisioned device
8. Much faster dynamic SRDF operations
9. Much faster failover and failback operations
10. Much faster SRDF syncs

Some very limited V-Max performance stats related to SRDF:
1. 36% improved FC performance
2. FC I/O per channel up to 5,000 IOPS
3. GigE I/O per channel up to 4,000 IOPS
4. 260 MB/sec RA channel I/O rate; with the DMX-4 it was 190 MB/sec
5. 90 MB/sec GigE channel I/O rate; with the DMX-4 it was almost the same
6. 36% improvement on SRDF copy over FC
7. New SRDF pairs can be created in 7 seconds, compared to 55 seconds with previous generations
8. Incremental establishes after splits happen in 3 seconds, compared to 6 seconds with previous generations
9. Full SRDF establishes happen in 4 seconds, compared to 55 seconds with previous generations
10. SRDF failbacks happen in 19 seconds, compared to 47 seconds with previous generations
