
Sun StorEdge 99xx: Using Shadow Image and SVM

Keyword(s): Shadow Image, SVM, Split Mirror Backup, StorEdge 99x0

Description:

This document describes the complete process of backup and restore using Shadow Image with
the StorEdge[TM] 99x0 and Solaris[TM] Volume Manager. Although BluePrint documents are
available, they provide only broad guidelines on how a 'Split Mirror Backup' is
configured.

This document is intended for those already familiar with the RAID Manager CCI and/or Storage
Navigator interfaces for Shadow Image. The command line detail is provided for rebuilding the
Solaris Volume Manager metaset on the backup server.

Document Body:

Using Shadow Image for Backup with StorEdge 99x0 & Solaris Volume Manager

When you take a snapshot of a logical volume using Shadow Image, the entire content of the
physical disks is cloned. This includes the configuration section (called the private region in
VERITAS, or the metadb in the Solaris[TM] Volume Manager (SVM)) and the data section (also
called the public region). The private region (or metadb) holds disk identification
parameters, so the cloned disks and the original disks carry the same IDs. This is not a major
issue if the cloned data is to be accessed from a different host, but it can be a difficult issue to
solve if you want to access the cloned data on the same host.

Cloned Data On a Different Host

Accessing the replicated data on a secondary host is equivalent to importing the logical group (or
diskset) that contains the logical volumes you want to access. However, because the disks are
cloned, the volume manager on the secondary host will believe this diskset is already imported on
the primary host. This information is stored in the diskset metadatabases on the disks. It is
necessary to clean up this information on every cloned disk, making it possible to take ownership
of the diskset, and access its metadevices.
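As a quick check before any cleanup, the owner of a diskset and its state database replicas can
be inspected with standard SVM commands (shown here against the example set myds used later in
this document; treat this as an illustrative sketch):

root@Node1 # metaset -s myds
root@Node1 # metadb -s myds -i

The first command lists the hosts, drives, and current owner of the set; the second lists the
set's metadevice state database replicas together with an explanation of their status flags.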

Definitions/Setup

Data Server: Sun[TM] Cluster 3.1 node using Solaris Logical Volume Manager
Backup Server: A host running backup software (VERITAS NetBackup/Enterprise Backup)
Storage: Sun StorEdge 9980

ShadowImage is used to clone the LUNs for backup purposes. The data to be accessed is
configured as a metadevice using SVM patched to the latest level. In this example, the primary
and secondary volumes (P-Vols and S-Vols) are accessed from two different hosts, a Data Server
and a Backup Server. In this situation, the Data Server has access only to the P-Vols, and the
Backup Server sees only the S-Vols.

This constraint forces you to reconstruct the metaset and metadevices on the secondary site before
accessing the data. There is no possibility of importing or exporting the metaset from one host to
the other (taking and releasing ownership of a metaset implies that every disk is visible to both hosts).

ShadowImage is a track-for-track asynchronous copy technology at the logical device level. It has
no concept of file systems or applications, so the data must be properly prepared on the P-VOL by
the host to ensure data integrity.

A typical implementation would be as follows (a sketch of the corresponding CCI commands appears after this list).

1. Create the pair.
2. Quiesce the file system or place the database in hot backup mode (e.g. SAP or Oracle backup
mode).
3. Flush and lock the file system cache with lockfs -w, or perform an unforced umount of the
volume (requires database shutdown).
4. Split the pair.
5. Unlock the file system with lockfs -u, or remount it.
6. Take the database out of hot backup mode (SAP or Oracle backup mode).
7. Create the metaset configuration on the Backup Server and take ownership. If this configuration
exists and the Backup Server does not own the diskset, purge it and recreate the metadevice and
metavolume structure using the md.tab file (configuration from the primary production host).
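
For steps 1 through 6, a minimal sketch using RAID Manager CCI commands follows. This assumes a
HORCM instance is already configured on the Data Server and that the P-Vol/S-Vol pair is defined
in a device group named bkup_grp (a hypothetical name; substitute your own group name, mount
point, and timeout values):

root@Node1 # paircreate -g bkup_grp -vl
root@Node1 # pairevtwait -g bkup_grp -s pair -t 3600
(quiesce the file system or put the database in hot backup mode)
root@Node1 # lockfs -w /my/data
root@Node1 # pairsplit -g bkup_grp
root@Node1 # lockfs -u /my/data
(take the database out of hot backup mode)

Here paircreate -vl creates the pair with the local volume as the P-Vol, pairevtwait blocks until
the initial copy completes, and pairsplit makes the S-Vol consistent and accessible.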

============================================================
Example:

Create a metaset on the secondary host:

root@Back1 # devfsadm
root@Back1 # metaset -s myds -a -h Back1

Populate the new metaset with cloned disks:


root@Back1 # metaset -s myds -a \
c3t500060E8000000000000ED160000020Ad0 \
c3t500060E8000000000000ED160000020Bd0 \
c3t500060E8000000000000ED160000020Cd0 \
c3t500060E8000000000000ED160000020Dd0

Create the new configuration for the secondary metaset.


Start by obtaining the metadevice configuration of the primary host:
root@Node1 # metastat -s myds -p
myds/d101 -p myds/d100 -o 1 -b 10485760
myds/d100 1 4 c3t500060E8000000000000ED160000020Ad0s0 \
c3t500060E8000000000000ED160000020Bd0s0 \
c3t500060E8000000000000ED160000020Cd0s0 \
c3t500060E8000000000000ED160000020Dd0s0 -i 2048b

On the secondary host (Backup Server), create a metadevice configuration file called
/etc/lvm/md.tab containing the previous output with the correct secondary LUNs.

The order of appearance must be respected:

root@Back1 # cat /etc/lvm/md.tab


myds/d101 -p myds/d100 -o 1 -b 10485760
myds/d100 1 4 c3t500060E8000000000000ED160000020Ad0s0 \
c3t500060E8000000000000ED160000020Bd0s0 \
c3t500060E8000000000000ED160000020Cd0s0 \
c3t500060E8000000000000ED160000020Dd0s0 -i 2048b
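
If desired, the md.tab entries can first be checked with metainit's -n option, which validates the
configuration without actually setting up the metadevices (a sketch using the example set name):

root@Back1 # metainit -s myds -n -a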

Apply the metadevice configuration file on the replicated host:

root@Back1 # metainit -s myds -a


myds/d100: Concat/Stripe is setup
myds/d101: Soft Partition is setup

If you encounter the following errors, follow the steps listed below.

root@Back1 # metainit -s myds -a


metainit: Back1: myds: must be owner of the set for this command

root@Back1 # metaset -s myds -t


metaset: Back1: myds: there are no existing databases

This addresses the metaset cleanup problem you may encounter.
It is caused by a bug which is fixed by patch 113026-13 for Solaris 9.
This patch provides new options in metaset:

-P         metaset -s <setname> -P

-C purge   metaset -s <setname> -C purge

These are used to clear metasets with stale or no DB replicas.


How to use the command
----------------------

In a non-cluster environment -
On each node within the configuration run the command:

metaset -s <setname> -P

In a Sun Cluster 3.x environment -


If the disk set is in the CCR (i.e. seen in the scstat -D output), on
each node within the configuration run the command:

metaset -s <setname> -P

The node on which the command is run will be removed from the CCR for
that set. On the last node the set will be completely removed from the CCR.

If the diskset is not in the CCR (i.e. not seen in the scstat -D output),
run the command:

metaset -s <setname> -C purge

This will cause the command to have no interaction with the Sun Cluster framework.
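
Putting the pieces together, a typical recovery on a non-clustered Backup Server purges the stale
set and rebuilds it (a sketch reusing the example names from above; <cloned disks> stands for the
S-Vol disk names shown earlier):

root@Back1 # metaset -s myds -P
root@Back1 # metaset -s myds -a -h Back1
root@Back1 # metaset -s myds -a <cloned disks>
root@Back1 # metainit -s myds -a

After this, continue with the verification and mount in steps 8 and 9.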
============================================================

8. Verify the file system on the S-VOL.

root@Back1 # fsck /dev/md/myds/rdsk/d101

9. Mount and verify the integrity of the database on the S-VOL if possible.

root@Back1 # mount /dev/md/myds/dsk/d101 /mnt/LAB

10. Back up the S-VOL.


11. Unmount the S-VOL.
12. Resync the S-VOL and repeat from step 2.
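
The resynchronization in step 12 is typically driven from the CCI side. With the hypothetical
device group from the earlier sketch, it would look like:

root@Node1 # pairresync -g bkup_grp
root@Node1 # pairevtwait -g bkup_grp -s pair -t 3600

pairresync copies only the tracks changed since the split, so the update is usually much faster
than the initial copy.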

NOTE: The SVM configuration must be recreated using the md.tab entries, which may require a
metaset purge as well. Currently, configurations using SVM cannot
avoid the procedure outlined in step 7.
