Objective:
Learn how to integrate and operate VERITAS Storage Foundation in a Solaris environment. This training
provides instruction on operational management procedures for VERITAS Volume Manager (VxVM):
you will learn how to install and configure VERITAS Volume Manager and how to manage disks, disk
groups, and volumes from the command line. For your kind information, this training does not
explain the theory in depth.
Instead of spending money on training courses, you can learn VxVM yourself by reading this blog, and if
you have any doubt, please leave a comment. I will get back to you as soon as possible.
Prerequisites:
1. Skills: Knowledge of UNIX system administration.
2. Lab: A Solaris 10 VM or physical machine running Solaris 10, with VxVM 5.0 or above.
VxVM objects:
There are several Volume Manager objects that must be understood before you can use the Volume
Manager to perform disk management tasks:
Physical objects:
1. Physical disks: a physical disk or a LUN presented from storage.
Virtual objects:
VM disks
A VM disk is a contiguous area of disk space from which the Volume Manager allocates storage. It is
nothing but the public region of a disk.
Disk groups
A disk group is a collection of VM disks that share a common configuration.
Subdisks
A subdisk is a set of contiguous disk blocks. A VM disk can be divided into one or more subdisks.
Plexes
The Volume Manager uses subdisks to build virtual entities called plexes. A plex consists of one or more
subdisks located on one or more disks.
Volumes
A volume is a virtual disk device that appears to applications, databases, and file systems like a
physical disk partition, but does not have the physical limitations of a physical disk partition.
In this tutorial, we are going to cover the below topics. By the end of these tutorials you will be
very familiar with Veritas Volume Manager and its operation. Your suggestions are always welcome
to improve these tutorials.
Download the packages from Symantec and keep them under /var/tmp.
bash-3.00# ls -lrt
drwxr-xr-x  15 root  root         22 Nov 29  2011 dvd2-sol_x64
-rwx------   1 root  root  924151296 Oct  3 23:50 VRTS_SF_HA_Solutions_6.0_Solaris_x64.tar
bash-3.00# cd dvd2-sol_x64
bash-3.00# ls -lrt |grep installer
total 95
-rwxr-xr-x   1 root  root  5278 Nov 29  2011 installer
bash-3.00# ./installer
Logs are being written to /var/tmp/installer-201210032354IFr while installer
is in progress.
Storage Foundation and High Availability
Solutions 6.0 Install Program
Use the menu below to continue.
Task Menu:
P) Perform a Pre-Installation Check
I) Install a Product
G) Upgrade a Product
U) Uninstall a Product
L) License a Product
S) Start a Product
X) Stop a Product
?) Help
[... installer menu prompts truncated in this copy; after choosing "I" (Install a Product),
select the product to install, e.g. SF Standard HA or SF Enterprise HA ...]
DEVICE   TYPE      DISK  GROUP  STATUS  OS_NATIVE_NAME  ATTR
disk_0   auto:ZFS  -     -      ZFS     c1t0d0s2        -
Now you can see that the init command created the volboot file for SF.
bash-3.00# cat /etc/vx/volboot
volboot 3.1 0.1 110
hostid node1
hostguid {b6208f08-0d8c-11e2-8046-000c2985ec00}.
Once the VxVM installation is completed, you are good to start working on the below things.
VxVM presents the disks in a disk array to the operating system as volumes, in the manner below.
To avoid confusion with OS-based names (i.e. those shown by format), check which naming scheme is set on your system.
bash-3.00# vxddladm get namingscheme
NAMING_SCHEME    PERSISTENCE  LOWERCASE  USE_AVID
======================================================
Enclosure Based  Yes          Yes        Yes
DEVICE  TYPE       DISK  GROUP  STATUS
disk_0  auto:none  -     -      online invalid
disk_1  auto:none  -     -      online invalid
disk_2  auto:ZFS   -     -      ZFS
disk_3  auto:ZFS   -     -      ZFS
As per the above output, the system was set to use the enclosure-based naming scheme. You can
change the naming scheme on the fly at any time; there is no impact in doing this.
To change to the operating-system-based naming scheme:
bash-3.00# vxddladm set namingscheme=osn
bash-3.00# vxdisk list
DEVICE    TYPE       DISK  GROUP  STATUS
c1t0d0s2  auto:ZFS   -     -      ZFS
c1t3d0    auto:none  -     -      online invalid
c1t4d0    auto:ZFS   -     -      ZFS
c1t5d0    auto:none  -     -      online invalid
DEVICE  TYPE       DISK  GROUP  STATUS          OS_NATIVE_NAME  ATTR
disk_0  auto:none  -     -      online invalid  c1t3d0          -
disk_1  auto:none  -     -      online invalid  c1t5d0          -
disk_2  auto:ZFS   -     -      ZFS             c1t4d0          -
disk_3  auto:ZFS   -     -      ZFS             c1t0d0s2        -
DEVICE  TYPE          DISK  GROUP  STATUS  OS_NATIVE_NAME
disk_0  auto:cdsdisk  -     -      online  c1t3d0
disk_1  auto:cdsdisk  -     -      online  c1t5d0
disk_2  auto:ZFS      -     -      ZFS     c1t4d0
disk_3  auto:ZFS      -     -      ZFS     c1t0d0s2
disk_4  auto:cdsdisk  -     -      online  c1t6d0
disk_5  auto:cdsdisk  -     -      online  c1t2d0
disk_6  auto:cdsdisk  -     -      online  c1t1d0
DEVICE  TYPE          DISK  GROUP  STATUS
disk_0  auto:cdsdisk  -     -      online
disk_1  auto:cdsdisk  -     -      online
disk_2  auto:ZFS      -     -      ZFS
disk_3  auto:ZFS      -     -      ZFS
disk_4  auto:cdsdisk  -     -      online
disk_5  auto:cdsdisk  -     -      online
disk_6  auto:cdsdisk  -     -      online
Diskgroup Operations
The vxdg utility performs various diskgroup operations, including creating disk groups,
adding disks to a diskgroup, and removing disks from a diskgroup. This command performs disk
group imports and deports as well. Here we are going to see how to create a new diskgroup and add
disks to an existing diskgroup. At the end of the article we will see how to back up the diskgroup
configuration.
In the below output you can see that we have five disks under VxVM control. We brought the
disks under VxVM control using vxdisksetup.
uarena#vxdisk list
DEVICE  TYPE          DISK  GROUP  STATUS
disk_0  auto:cdsdisk  -     -      online
disk_1  auto:cdsdisk  -     -      online
disk_2  auto:ZFS      -     -      ZFS
disk_3  auto:ZFS      -     -      ZFS
disk_4  auto:cdsdisk  -     -      online
disk_5  auto:cdsdisk  -     -      online
disk_6  auto:cdsdisk  -     -      online
DEVICE  TYPE          DISK     GROUP  STATUS
disk_0  auto:cdsdisk  UXDISK1  UXDG   online
disk_1  auto:cdsdisk  UXDISK2  UXDG   online
disk_2  auto:ZFS      -        -      ZFS
disk_3  auto:ZFS      -        -      ZFS
disk_4  auto:cdsdisk  -        -      online
disk_5  auto:cdsdisk  -        -      online
disk_6  auto:cdsdisk  -        -      online
Now you can see we have created new diskgroup UXDG using disk_0 & disk_1 and we have assigned the
meaning full name to disks.
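The commands that produced the state above are not shown in this copy of the article; a sketch of them, assuming the device names from the listing, would be:

```shell
# Bring the disks under VxVM control (writes the private/public regions)
/etc/vx/bin/vxdisksetup -i disk_0
/etc/vx/bin/vxdisksetup -i disk_1

# Create the diskgroup UXDG, assigning meaningful media names to the disks
vxdg init UXDG UXDISK1=disk_0 UXDISK2=disk_1
```

After this, `vxdisk list` shows the disks with DISK/GROUP columns populated as in the output above.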
Task:2-Addition of disk:
If you want to add a new disk to an existing diskgroup, bring the disk under VxVM control using
vxdisksetup and add it using the below command.
uarena#vxdg -g UXDG adddisk UXDISK3=disk_4
uarena#vxdisk list
DEVICE  TYPE          DISK     GROUP  STATUS
disk_0  auto:cdsdisk  UXDISK1  UXDG   online
disk_1  auto:cdsdisk  UXDISK2  UXDG   online
disk_2  auto:ZFS      -        -      ZFS
disk_3  auto:ZFS      -        -      ZFS
disk_4  auto:cdsdisk  UXDISK3  UXDG   online
disk_5  auto:cdsdisk  -        -      online
disk_6  auto:cdsdisk  -        -      online
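In the listing that follows, UXDISK3 has been removed from the diskgroup again. The removal command is not shown in this copy; a sketch, assuming the same names, would be:

```shell
# Remove the disk (by its diskgroup media name) from the diskgroup
vxdg -g UXDG rmdisk UXDISK3
```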
DEVICE  TYPE          DISK     GROUP  STATUS
disk_0  auto:cdsdisk  UXDISK1  UXDG   online
disk_1  auto:cdsdisk  UXDISK2  UXDG   online
disk_2  auto:ZFS      -        -      ZFS
disk_3  auto:ZFS      -        -      ZFS
disk_4  auto:cdsdisk  -        -      online
disk_5  auto:cdsdisk  -        -      online
disk_6  auto:cdsdisk  -        -      online
DISK     DEVICE  TAG     OFFSET  LENGTH  FLAGS
UXDISK1  disk_0  disk_0  0       143056  -
UXDISK2  disk_1  disk_1  0       143056  -
Task:5-Deporting diskgroup:
After un-mounting the volumes, you can deport the diskgroup. To see the imported diskgroups:
uarena#vxdg list
NAME  STATE        ID
UXDG  enabled,cds  1364022395.37.sfos
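The deport command itself is not shown at this point in this copy; a sketch of the deport step would be:

```shell
# Deport the diskgroup (all of its volumes must be un-mounted first)
vxdg deport UXDG
```

After the deport, `vxdg list` no longer shows UXDG and its disks appear with no DISK/GROUP, as in the next listing.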
DEVICE  TYPE          DISK  GROUP  STATUS
disk_0  auto:cdsdisk  -     -      online
disk_1  auto:cdsdisk  -     -      online
disk_2  auto:ZFS      -     -      ZFS
disk_3  auto:ZFS      -     -      ZFS
disk_4  auto:cdsdisk  -     -      online
disk_5  auto:cdsdisk  -     -      online
disk_6  auto:cdsdisk  -     -      online
DEVICE  TYPE          DISK  GROUP   STATUS
disk_0  auto:cdsdisk  -     (UXDG)  online
disk_1  auto:cdsdisk  -     (UXDG)  online
disk_2  auto:ZFS      -     -       ZFS
disk_3  auto:ZFS      -     -       ZFS
disk_4  auto:cdsdisk  -     -       online
disk_5  auto:cdsdisk  -     -       online
disk_6  auto:cdsdisk  -     -       online
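The listing above shows the deported diskgroup name in parentheses (from `vxdisk -o alldgs list`). The import command that brings it back, not shown in this copy, is a sketch of:

```shell
# Import the deported diskgroup on this host
vxdg import UXDG
```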
NAME  STATE        ID
UXDG  enabled,cds  1364022395.37.sfos

DEVICE  TYPE          DISK     GROUP  STATUS
disk_0  auto:cdsdisk  UXDISK1  UXDG   online
disk_1  auto:cdsdisk  UXDISK2  UXDG   online
Sometimes you may need to use the -C flag when importing a clustered diskgroup, to clear the VCS lock.
# vxdg -C import DG_NAME
Task:7-Re-Naming the Diskgroup:
You can't rename a diskgroup while it is in the imported state. In order to rename the DG, you need
to deport it and re-import it with the new name.
uarena#vxdg deport UXDG
uarena#vxdg -n NEWDG import UXDG
uarena#vxdg list
NAME   STATE        ID
NEWDG  enabled,cds  1364022395.37.sfos
# ls -lrt /var/tmp/UXDG.1364398489.45.sfos
total 48473
-rw-r--r--  1 root  root  Mar 28 17:38 1364398489.45.sfos.diskinfo
-rw-r--r--  1 root  root  Mar 28 17:38 1364398489.45.sfos.cfgrec
-rw-r--r--  1 root  root  Mar 28 17:39 1364398489.45.sfos.binconfig
-rw-r--r--  1 root  root  Mar 28 17:39 1364398489.45.sfos.dginfo
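The backup files above are produced by the diskgroup configuration backup utility. The command that created them is not shown in this copy; a sketch of it would be:

```shell
# Back up the UXDG configuration records into /var/tmp
# (default location is /etc/vx/cbr/bk when -l is omitted)
vxconfigbackup -l /var/tmp UXDG
```

The saved .dginfo/.cfgrec/.binconfig/.diskinfo files can later be used by vxconfigrestore to rebuild the diskgroup configuration.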
dg UXDG         default     default  25000    1364474089.57.sfos

dm UXDISK1      disk_0      auto     65536    143056   -
dm UXDISK2      disk_1      auto     65536    143056   -

v  oravol1      -           ENABLED  ACTIVE   143360   SELECT   -        fsgen
pl oravol1-01   oravol1     ENABLED  ACTIVE   143360   CONCAT   -        RW
sd UXDISK1-01   oravol1-01  UXDISK1  0        143056   0        disk_0   ENA
sd UXDISK2-01   oravol1-01  UXDISK2  0        304      143056   disk_1   ENA
Volume Operations
VxVM builds volumes using the virtual objects of VM disks, disk groups, subdisks, and plexes. These virtual
objects can be organized easily using the vxassist command to create a new volume. Here we are going to see
how to create new volumes with different layouts and volume redundancy. At the end of the
article we will see how to destroy a volume in detail. The assumption here is that we have configured a
diskgroup with the name UXDG.
bash-3.00# vxdg list
NAME  STATE        ID
UXDG  enabled,cds  1364022395.37.sfos
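The command that created the 50 MB concat volume shown in the next listing is missing from this copy; a sketch, assuming the volume name from the output, would be:

```shell
# Create a 50 MB concatenated (default layout) volume in UXDG
vxassist -g UXDG make uxoravol1 50m
```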
v  uxoravol1     -             ENABLED  ACTIVE  102400  SELECT  -       fsgen
pl uxoravol1-01  uxoravol1     ENABLED  ACTIVE  102400  CONCAT  -       RW
sd UXDISK1-01    uxoravol1-01  UXDISK1  0       102400  0       disk_0  ENA

v  - volume
pl - Plex
sd - Subdisk
As per the above output, the subdisk (UXDISK1-01) has been created using UXDISK1, and the plex (uxoravol1-01)
sits on top of the subdisk. On top of the plex virtual layer, the volume is placed. These virtual layers provide
more flexibility for volume-level operations.
Once you have created the volume, you can use mkfs to create a VxFS filesystem. To create the VxFS
filesystem:
# mkfs -F vxfs /dev/vx/rdsk/UXDG/uxoravol1
version 9 layout
102400 sectors, 51200 blocks of size 1024, log size 1024 blocks
rcq size 1024 blocks
largefiles supported
To create a new mountpoint and mount,
# mkdir /uxoravol1
# mount -F vxfs /dev/vx/dsk/UXDG/uxoravol1 /uxoravol1
# df -h /uxoravol1
Filesystem                  size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/uxoravol1   50M  3.1M   44M      7%    /uxoravol1
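The next listing shows a striped volume (urstripe) spread over three disks, but the creation command is missing from this copy. A sketch, assuming the names and the three columns visible in the output:

```shell
# Create a 100 MB striped volume across 3 columns
vxassist -g UXDG make urstripe 100m layout=stripe ncol=3
```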
v  urstripe     -            ENABLED  ACTIVE  204864  SELECT  -       fsgen
pl urstripe-01  urstripe     ENABLED  ACTIVE  204864  STRIPE  3/128   RW
sd UXDISK1-01   urstripe-01  UXDISK1  0       68288   0/0     disk_0  ENA
sd UXDISK2-01   urstripe-01  UXDISK2  0       68288   1/0     disk_1  ENA
sd UXDISK3-01   urstripe-01  UXDISK3  0       68288   2/0     disk_4  ENA

Filesystem                 size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/urstripe  100M  3.1M   91M      4%    /stripeoravol1
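The mirrored volume (uxvol2) shown next also lacks its creation command in this copy; a sketch would be:

```shell
# Create a 50 MB two-plex mirrored volume
vxassist -g UXDG make uxvol2 50m layout=mirror
```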
v  uxvol2      -          ENABLED  ACTIVE  102400  SELECT  -       fsgen
pl uxvol2-01   uxvol2     ENABLED  ACTIVE  102400  CONCAT  -       RW
sd UXDISK1-02  uxvol2-01  UXDISK1  0       102400  0       disk_0  ENA
pl uxvol2-02   uxvol2     ENABLED  ACTIVE  102400  CONCAT  -       RW
sd UXDISK2-02  uxvol2-02  UXDISK2  0       102400  0       disk_1  ENA
Note:
A question may arise: how do you determine whether a volume is mirrored in VxVM?
It's simple. If the volume is constructed as above with two or more plexes, it is a mirrored volume:
a volume with two plexes is a one-way mirror;
a volume with three plexes is a two-way mirror.
1.4 Mirrored-stripe or RAID-0+1 (striping + mirroring)
In this volume layout, a striped plex is mirrored with another striped plex.
Thanks to www.symantec.com
# vxassist -g UXDG make msvol 50M layout=mirror-stripe
# vxprint -hvt
Disk group: UXDG
v  msvol       -         ENABLED  ACTIVE  102400  SELECT  -       fsgen
pl msvol-01    msvol     ENABLED  ACTIVE  102400  STRIPE  2/128   RW
sd UXDISK1-01  msvol-01  UXDISK1  0       51200   0/0     disk_0  ENA
sd UXDISK2-01  msvol-01  UXDISK2  0       51200   1/0     disk_1  ENA
pl msvol-02    msvol     ENABLED  ACTIVE  102400  STRIPE  2/128   RW
sd UXDISK3-01  msvol-02  UXDISK3  0       51200   0/0     disk_4  ENA
sd UXDISK4-01  msvol-02  UXDISK4  0       51200   1/0     disk_5  ENA
Thanks to www.symantec.com
# vxassist -g UXDG make smvol 50M layout=stripe-mirror
# vxprint -hvt
Disk group: UXDG
v  smvol       -          ENABLED  ACTIVE  102400  SELECT  smvol-03  fsgen
pl smvol-03    smvol      ENABLED  ACTIVE  102400  STRIPE  2/128     RW
sv smvol-S01   smvol-03   smvol-L01  1     51200   0/0     2/2       ENA
sv smvol-S02   smvol-03   smvol-L02  1     51200   1/0     2/2       ENA

v  smvol-L01   -          ENABLED  ACTIVE  51200   SELECT  -         fsgen
pl smvol-P01   smvol-L01  ENABLED  ACTIVE  51200   CONCAT  -         RW
sd UXDISK1-01  smvol-P01  UXDISK1  0       51200   0       disk_0    ENA
pl smvol-P02   smvol-L01  ENABLED  ACTIVE  51200   CONCAT  -         RW
sd UXDISK3-01  smvol-P02  UXDISK3  0       51200   0       disk_4    ENA

v  smvol-L02   -          ENABLED  ACTIVE  51200   SELECT  -         fsgen
pl smvol-P03   smvol-L02  ENABLED  ACTIVE  51200   CONCAT  -         RW
sd UXDISK2-01  smvol-P03  UXDISK2  0       51200   0       disk_1    ENA
pl smvol-P04   smvol-L02  ENABLED  ACTIVE  51200   CONCAT  -         RW
sd UXDISK4-01  smvol-P04  UXDISK4  0       51200   0       disk_5    ENA
v  smvol       -         ENABLED  ACTIVE  102432  RAID    -       raid5
pl smvol-01    smvol     ENABLED  ACTIVE  102432  RAID    4/32    RW
sd UXDISK1-01  smvol-01  UXDISK1  0       34144   0/0     disk_0  ENA
sd UXDISK2-01  smvol-01  UXDISK2  0       34144   1/0     disk_1  ENA
sd UXDISK3-01  smvol-01  UXDISK3  0       34144   2/0     disk_4  ENA
sd UXDISK4-01  smvol-01  UXDISK4  0       34144   3/0     disk_5  ENA
pl smvol-02    smvol     ENABLED  LOG     3840    CONCAT  -       RW
sd UXDISK5-01  smvol-02  UXDISK5  0       3840    0       disk_6  ENA
2. Removing a volume:
1. Un-mount the volume.
2. Use vxassist to delete the volume.
# df -h /smvol
Filesystem              size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/smvol   50M  3.1M   44M      7%    /smvol
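The two steps listed above can be sketched as commands (the exact invocations are not shown in this copy):

```shell
# 1. Un-mount the volume
umount /smvol

# 2. Delete the volume (and its plexes/subdisks) with vxassist
vxassist -g UXDG remove volume smvol
```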
Volume Resize:
VxVM is a very flexible and reliable volume manager that can resize volumes without un-mounting
them. We can resize volumes using vxassist and resize the filesystem using fsadm; the vxresize
command performs both the volume and filesystem resize operations simultaneously.
Here we are going to see how to resize the volume smvol in two different ways, without unmounting it.
# df -h /smvol
Filesystem              size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/smvol   50M  3.1M   44M      7%    /smvol
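Between the 50M and 60M listings the grow command is not shown in this copy; judging by the vxresize usage later in the article, it would be a sketch of:

```shell
# Grow the volume and its VxFS filesystem together by 10 MB
/etc/vx/bin/vxresize -g UXDG smvol +10m
```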
# df -h /smvol
Filesystem              size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/smvol   60M  3.1M   53M      6%    /smvol
Step:1
Determine how much space is available to grow the volume:
# vxassist -g UXDG maxsize layout=mirror
Maximum volume size: 71680(35Mb)
Step:2
Resize the volume & filesystem, and you can see the changes:
# /etc/vx/bin/vxresize -g UXDG smvol +10M
# df -h /smvol
Filesystem              size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/smvol   70M  3.1M   63M      5%    /smvol
# /usr/lib/fs/vxfs/fsadm -b 50M /smvol/
fsadm: INFO: V-3-23586: /dev/vx/rdsk/UXDG/smvol is currently 122880 sectors - size will be reduced
# df -h /smvol
Filesystem              size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/smvol   50M  3.1M   44M      7%    /smvol
Step:2
Reduce the volume using vxassist. You will lose data if you resize the volume to less than the
filesystem size, so it is better to shrink the volume leaving some extra space. Here, instead
of shrinking to 50M, I am shrinking to 51M.
# vxassist -g UXDG shrinkto smvol 51M
VxVM vxassist ERROR V-5-1-7236 Shrinking a FSGEN or RAID5 usage type volume
can result in loss of data. It is recommended to use the "vxresize"command or
specify "-f" option to force the operation.
Note: You will get the above warning message if you run the command without the -f option. Handle with care.
# vxassist -g UXDG -f shrinkto smvol 51M
or
# vxassist -g UXDG -f shrinkby smvol 10M
Method:2 Decreasing the volume and filesystem using vxresize (recommended):
# /etc/vx/bin/vxresize -g UXDG smvol -10M
# df -h /smvol/
Filesystem              size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/smvol   40M  3.1M   35M      9%    /smvol
v  uxoravol1     -             ENABLED  ACTIVE  102400  SELECT  -       fsgen
pl uxoravol1-01  uxoravol1     ENABLED  ACTIVE  102400  CONCAT  -       RW
sd UXDISK1-01    uxoravol1-01  UXDISK1  0       102400  0       disk_0  ENA
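In the next listing a second plex appears on disk_1, so the volume was mirrored between the two outputs. The mirroring command is not shown in this copy; a sketch of it would be:

```shell
# Attach a second (mirror) plex to the existing volume
vxassist -g UXDG mirror uxoravol1
```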
v  uxoravol1     -             ENABLED  ACTIVE  102400  SELECT  -       fsgen
pl uxoravol1-01  uxoravol1     ENABLED  ACTIVE  102400  CONCAT  -       RW
sd UXDISK1-01    uxoravol1-01  UXDISK1  0       102400  0       disk_0  ENA
pl uxoravol1-02  uxoravol1     ENABLED  ACTIVE  102400  CONCAT  -       RW
sd UXDISK2-01    uxoravol1-02  UXDISK2  0       102400  0       disk_1  ENA
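The listings that follow show the volume concatv1 being transformed online into a layered striped-mirror. The relayout command itself is missing from this copy; a sketch would be:

```shell
# Online relayout of concatv1 to a striped-mirror layout
vxassist -g UXDG relayout concatv1 layout=stripe-mirror
```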
v  concatv1      -             ENABLED  ACTIVE  102400  SELECT  -       fsgen
v  concatv1-L01  -             ENABLED  ACTIVE  102400  SELECT  -       fsgen
pl concatv1-P01  concatv1-L01  ENABLED  ACTIVE  102400  CONCAT  -       RW
sd UXDISK1-02    concatv1-P01  UXDISK1  0       102400  0       disk_0  ENA
pl concatv1-P02  concatv1-L01  ENABLED  ACTIVE  102400  CONCAT  -       RW
sd UXDISK2-02    concatv1-P02  UXDISK2  0       102400  0       disk_1  ENA
v  concatv1       -              ENABLED  ACTIVE  102400  SELECT  concatv1-01  fsgen
pl concatv1-01    concatv1       ENABLED  ACTIVE  102400  STRIPE  2/128        RW
sv concatv1-s01   concatv1-01    concatv1-d01  1  51200   0/0     2/2          ENA
sv concatv1-s02   concatv1-01    concatv1-d02  1  51200   1/0     2/2          ENA

v  concatv1-d01   -              ENABLED  ACTIVE  51200   SELECT  -            fsgen
pl concatv1-dp01  concatv1-d01   ENABLED  ACTIVE  51200   CONCAT  -            RW
sd UXDISK1-02     concatv1-dp01  UXDISK1  0       51200   0       disk_0       ENA
pl concatv1-dp02  concatv1-d01   ENABLED  ACTIVE  51200   CONCAT  -            RW
sd UXDISK3-01     concatv1-dp02  UXDISK3  0       51200   0       disk_4       ENA

v  concatv1-d02   -              ENABLED  ACTIVE  51200   SELECT  -            fsgen
pl concatv1-dp03  concatv1-d02   ENABLED  ACTIVE  51200   CONCAT  -            RW
sd UXDISK2-02     concatv1-dp03  UXDISK2  0       51200   0       disk_1       ENA
pl concatv1-dp04  concatv1-d02   ENABLED  ACTIVE  51200   CONCAT  -            RW
sd UXDISK4-01     concatv1-dp04  UXDISK4  0       51200   0       disk_5       ENA
Step:2 Once the relayout has been done, use vxassist convert to convert the volume to
mirror-stripe.
# vxassist -g UXDG convert concatv1 layout=mirror-stripe
bash-3.00# vxprint -hvt
Disk group: UXDG
v  concatv1     -            ENABLED  ACTIVE  102400  SELECT  -       fsgen
pl concatv1-02  concatv1     ENABLED  ACTIVE  102400  STRIPE  2/128   RW
sd UXDISK1-01   concatv1-02  UXDISK1  0       51200   0/0     disk_0  ENA
sd UXDISK2-02   concatv1-02  UXDISK2  0       51200   1/0     disk_1  ENA
pl concatv1-03  concatv1     ENABLED  ACTIVE  102400  STRIPE  2/128   RW
sd UXDISK3-02   concatv1-03  UXDISK3  0       51200   0/0     disk_4  ENA
sd UXDISK4-02   concatv1-03  UXDISK4  0       51200   1/0     disk_5  ENA
# vxtask monitor
TASKID  PTID  TYPE/STATE  PCT  PROGRESS
182     (progress columns for task 182 are truncated in this copy)
v  concatv1     -            ENABLED  ACTIVE  102400  RAID    -       raid5
pl concatv1-01  concatv1     ENABLED  ACTIVE  102400  RAID    3/32    RW
sd UXDISK1-03   concatv1-01  UXDISK1  0       51200   0/0     disk_0  ENA
sd UXDISK2-03   concatv1-01  UXDISK2  0       51200   1/0     disk_1  ENA
sd UXDISK3-02   concatv1-01  UXDISK3  0       51200   2/0     disk_4  ENA
pl concatv1-02  concatv1     ENABLED  LOG     2880    CONCAT  -       RW
sd UXDISK2-04   concatv1-02  UXDISK2  0       2880    0       disk_1  ENA
v  concatv1     -            ENABLED  ACTIVE  102400  SELECT  -       fsgen
pl concatv1-01  concatv1     ENABLED  ACTIVE  102400  STRIPE  2/128   RW
sd UXDISK0-02   concatv1-01  UXDISK0  0       51200   0/0     disk_0  ENA
sd UXDISK1-01   concatv1-01  UXDISK1  0       51200   1/0     disk_1  ENA
v  concatv1       -              ENABLED  ACTIVE  102400  SELECT  concatv1-01  fsgen
pl concatv1-01    concatv1       ENABLED  ACTIVE  102400  STRIPE  2/128        RW
sv concatv1-s01   concatv1-01    concatv1-d01  1  51200   0/0     2/2          ENA
sv concatv1-s02   concatv1-01    concatv1-d02  1  51200   1/0     2/2          ENA

v  concatv1-d01   -              ENABLED  ACTIVE  51200   SELECT  -            fsgen
pl concatv1-dp01  concatv1-d01   ENABLED  ACTIVE  51200   CONCAT  -            RW
sd UXDISK0-01     concatv1-dp01  UXDISK0  0       51200   0       disk_0       ENA
pl concatv1-dp02  concatv1-d01   ENABLED  ACTIVE  51200   CONCAT  -            RW
sd UXDISK2-01     concatv1-dp02  UXDISK2  0       51200   0       disk_4       ENA

v  concatv1-d02   -              ENABLED  ACTIVE  51200   SELECT  -            fsgen
pl concatv1-dp03  concatv1-d02   ENABLED  ACTIVE  51200   CONCAT  -            RW
sd UXDISK1-01     concatv1-dp03  UXDISK1  0       51200   0       disk_1       ENA
pl concatv1-dp04  concatv1-d02   ENABLED  ACTIVE  51200   CONCAT  -            RW
sd UXDISK3-01     concatv1-dp04  UXDISK3  0       51200   0       disk_5       ENA
v  uxoravol1     -             ENABLED  ACTIVE  102400  SELECT  -       fsgen
pl uxoravol1-01  uxoravol1     ENABLED  ACTIVE  102400  CONCAT  -       RW
sd UXDISK1-01    uxoravol1-01  UXDISK1  0       102400  0       disk_0  ENA
pl uxoravol1-02  uxoravol1     ENABLED  ACTIVE  102400  CONCAT  -       RW
sd UXDISK2-01    uxoravol1-02  UXDISK2  0       102400  0       disk_1  ENA
Method:1
Trying to delete the plex uxoravol1-02 ,
bash-3.00# vxedit -g UXDG -rf rm uxvol2-02
VxVM vxedit ERROR V-5-1-818 Plex uxvol2-02 is associated, cannot remove
Since the plex is attached to the volume, you can't delete it. First disassociate it from the
volume using the below command.
# vxplex -g UXDG dis uxvol2-02
Deleting the disassociated plex,
# vxedit -g UXDG -rf rm uxvol2-02
Method:2
In other way you can delete the plex using single command,
# vxplex -g UXDG -o rm dis uxvol2-02
# vxprint -hvt
Disk group: UXDG
v  uxoravol1     -             ENABLED  ACTIVE  102400  SELECT  -       fsgen
pl uxoravol1-01  uxoravol1     ENABLED  ACTIVE  102400  CONCAT  -       RW
sd UXDISK1-01    uxoravol1-01  UXDISK1  0       102400  0       disk_0  ENA
bash-3.00# df -h /uxvol2
Filesystem               size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/uxvol2   50M  3.1M   44M      7%    /uxvol2
Command Syntax:
# vxassist [-b] [-g diskgroup] relayout volume [layout=layout] \
[relayout_options]
To monitor the relayout status:
# vxrelayout status volume
In a similar way you can convert other existing volume layouts to new layouts. Refer to the
Storage Foundation admin guide for the permitted relayout transformations.
Veritas Dynamic Multi-Pathing provides greater availability and reliability for SAN paths, and it
increases SAN I/O throughput using load balancing. SAN path redundancy is ensured by path failover.
What is multi-pathing software?
A server is connected to the SAN using one or more fibre channel (FC) connections on different
controllers. Without multi-pathing software, the UNIX operating system incorrectly interprets the two
paths as leading to two storage units (LUNs). With multi-pathing software, the two paths are seen as
leading to the same storage unit (LUN).
In the above diagram, a LUN is created with the name enc0 and, depending on the path, the LUN
can be accessed using c1t99d0 or c2t99d0. But vxdmp sees c1t99d0 & c2t99d0 as a single
SAN unit. If we lose c1 (controller), we can still access the LUN via c2t99d0.
Active/Active (A/A)
Active/Passive (A/P)
CTLR-NAME  ENCLR-TYPE  STATE    ENCLR-NAME
=====================================================
c2         EMC         ENABLED  emc0
c4         EMC         ENABLED  emc0
c0         Disk        ENABLED  disk
c1         Disk        ENABLED  disk
As per the above output, we have two controllers for the emc0 enclosure.
ENCLR_NAME  ENCLR_TYPE  ENCLR_SNO     STATUS     ARRAY_TYPE  LUN_COUNT
==============================================================
emc0        EMC         000292704216  CONNECTED  A/A         331
disk        Disk        DISKS         CONNECTED  Disk        ...
Here you can see that two enclosures are available: emc0 (SAN) and disk (local disks).
DMPNODENAME  ENCLR-TYPE  ENCLR-NAME  PATHNAME  STATE
disk_0       Disk        disk        c1t5d0    ENABLED(A)
disk_1       Disk        disk        c1t4d0    ENABLED(A)
disk_2       Disk        disk        ...

PATHNAME                  STATE       CTLR  ENCLR-TYPE  ENCLR-NAME
=================================================================
c2t3000034578233D18d57s2  ENABLED(A)  c2    EMC         emc0
c4t3000034578233D24d57s2  ENABLED(A)  c4    EMC         emc0
To find which DMP node controls the path, i.e. to find the enclosure-based name:
# vxdmpadm getdmpnode nodename=c2t3000034578233D18d57s2
NAME        STATE    ENCLR-TYPE  PATHS  ENBL  DSBL  ENCLR-NAME
================================================================
emc0_0e790  ENABLED  EMC         ...    ...   ...   emc0
To see which disks are presented from the emc0 enclosure:
# vxdmpadm getdmpnode enclosure=emc0
NAME        STATE    ENCLR-TYPE  PATHS  ENBL  DSBL  ENCLR-NAME
===============================================================
emc0_0e790  ENABLED  EMC         ...    ...   ...   emc0
To disable controller,
# vxdmpadm listctlr all
CTLR-NAME  ENCLR-TYPE  STATE    ENCLR-NAME
===========================================================
c1         Disk        ENABLED  disk
c2         EMC         ENABLED  emc0
c4         EMC         ENABLED  emc0
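The disable command itself is missing between the two listings in this copy; a sketch, matching the DISABLED state of c2 shown next, would be:

```shell
# Disable all DMP paths through controller c2
vxdmpadm disable ctlr=c2
```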
CTLR-NAME  ENCLR-TYPE  STATE     ENCLR-NAME
===========================================================
c1         Disk        ENABLED   disk
c2         EMC         DISABLED  emc0
c4         EMC         ENABLED   emc0
To enable controller,
# vxdmpadm enable ctlr=c2
# vxdmpadm listctlr all
CTLR-NAME  ENCLR-TYPE  STATE    ENCLR-NAME
===========================================================
c1         Disk        ENABLED  disk
c2         EMC         ENABLED  emc0
c4         EMC         ENABLED  emc0
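The per-path statistics shown next come from DMP's iostat facility; the commands are not shown in this copy. A sketch would be:

```shell
# Enable DMP statistics gathering (if not already running)
vxdmpadm iostat start

# Display per-path I/O statistics
vxdmpadm iostat show all
```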
              OPERATIONS          BLOCKS            AVG TIME(ms)
PATHNAME   READS   WRITES     READS    WRITES     READS   WRITES
c1t0d0s2      13      221       ...         0      1.10     0.00
c1t1d0       381     2396    207193    704884     24.01    16.28
c1t2d0       361     2567    207034    766221     15.02    20.96
c1t3d0     46792     9806   1336359    804680     20.51    16.68
              OPERATIONS          BLOCKS            AVG TIME(ms)
PATHNAME   READS   WRITES     READS    WRITES     READS   WRITES
c1t0d0s2     ...      ...       ...       ...      0.00     0.00
c1t1d0       ...      ...       ...       ...      0.00     0.00
c1t2d0       ...      ...       ...       ...      0.00     0.00
c1t3d0       ...      ...       ...       ...      0.00     0.00
ENCLR_NAME  ENCLR_TYPE  ENCLR_SNO  STATUS     ARRAY_TYPE  LUN_COUNT
=================================================================
disk        Disk        DISKS      CONNECTED  Disk        ...
ENCLR_NAME  DEFAULT   CURRENT
=========================================================
disk        MinimumQ  MinimumQ

ENCLR_NAME  DEFAULT   CURRENT
=========================================================
disk        MinimumQ  Adaptive

The available I/O policies are: adaptive, adaptiveminq, balanced, minimumq, priority,
round-robin, singleactive.
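The policy change reflected in the second table above (MinimumQ to Adaptive) is made with setattr; the command is not shown in this copy, but a sketch of it would be:

```shell
# Change the I/O policy for an enclosure and verify it
vxdmpadm setattr enclosure disk iopolicy=adaptive
vxdmpadm getattr enclosure disk iopolicy
```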
To stop DMP,
# vxdmpadm stop restore
To start DMP,
# vxdmpadm start restore
If it is already running, you will get the below warning message.
# vxdmpadm start restore
VxVM vxdmpadm ERROR V-5-1-3243
You can stop and restart the restore daemon with desired arguments for
changing any of its parameters.
A snapshot is the state of a volume at a particular point in time. Veritas Volume Manager provides
the capability to take an image of a volume at a given point in time. It provides
various snapshot types, depending on the environment and product cost.
There are four types of snapshot in VxVM:
1. Full-sized instant snapshot (using vxsnap)
2. Space-optimized instant snapshot (using vxsnap)
3. Mirror break-off snapshot (vxassist or vxsnap)
4. Linked break-off snapshot
Comparison of snapshots:
[... snapshot feature comparison table (Full-sized instant vs Space-optimized instant vs
Mirror break-off); the feature row labels were lost in this copy, so the Yes/No matrix is
omitted — refer to the Storage Foundation admin guide for the full comparison ...]
Thanks to www.symantec.com
DEVICE  TYPE          DISK     GROUP  STATUS
disk_0  auto:cdsdisk  UXDISK1  UXDG   online
disk_1  auto:cdsdisk  UXDISK2  UXDG   online
disk_2  auto:ZFS      -        -      ZFS
disk_3  auto:ZFS      -        -      ZFS
disk_4  auto:cdsdisk  UXDISK3  UXDG   online
disk_5  auto:cdsdisk  UXDISK4  UXDG   online
disk_6  auto:cdsdisk  UXDISK5  UXDG   online
v  oravol1     -           ENABLED  ACTIVE  184320  SELECT  -       fsgen
pl oravol1-01  oravol1     ENABLED  ACTIVE  184320  CONCAT  -       RW
sd UXDISK1-01  oravol1-01  UXDISK1  0       143056  0       disk_0  ENA
sd UXDISK2-01  oravol1-01  UXDISK2  0       41264   143056  disk_1  ENA
# df -h /apporavol1/
Filesystem                size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/oravol1  100M   43M   53M     45%    /apporavol1
v  oravol1          -               ENABLED  ACTIVE  184320  SELECT  -       fsgen
pl oravol1-01       oravol1         ENABLED  ACTIVE  184320  CONCAT  -       RW
sd UXDISK1-01       oravol1-01      UXDISK1  0       143056  0       disk_0  ENA
sd UXDISK2-01       oravol1-01      UXDISK2  0       41264   143056  disk_1  ENA
dc oravol1_dco      oravol1         oravol1_dcl
v  oravol1_dcl      -               ENABLED  ACTIVE  67840   SELECT  -       gen
pl oravol1_dcl-01   oravol1_dcl     ENABLED  ACTIVE  67840   CONCAT  -       RW
sd UXDISK2-02       oravol1_dcl-01  UXDISK2  41264   67840   0       disk_1  ENA
v  oravol1-snap     -                ENABLED  ACTIVE  20480  SELECT  -       fsgen
pl oravol1-snap-01  oravol1-snap     ENABLED  ACTIVE  20480  CONCAT  -       RW
sd UXDISK3-01       oravol1-snap-01  UXDISK3  0       20480  0       disk_4  ENA

3.Prepare snap-volume:
You will get the below error if you didn't prepare the snap volume:
VxVM vxassist ERROR V-5-1-7061 Volume oravol1-snap is not instant ready
# vxsnap -g UXDG -b prepare oravol1-snap
v  oravol1-snap          -                    ENABLED  ACTIVE  20480  SELECT  -       fsgen
pl oravol1-snap-01       oravol1-snap         ENABLED  ACTIVE  20480  CONCAT  -       RW
sd UXDISK3-01            oravol1-snap-01      UXDISK3  0       20480  0       disk_4  ENA
dc oravol1-snap_dco      oravol1-snap         oravol1-snap_dcl
v  oravol1-snap_dcl      -                    ENABLED  ACTIVE  67840  SELECT  -       gen
pl oravol1-snap_dcl-01   oravol1-snap_dcl     ENABLED  ACTIVE  67840  CONCAT  -       RW
sd UXDISK3-02            oravol1-snap_dcl-01  UXDISK3  20480   67840  0       disk_4  ENA
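With both volumes prepared, the full-sized instant snapshot is taken with vxsnap make; that command is missing from this copy, but a sketch of it, assuming the volume names above, would be:

```shell
# Take a full-sized instant snapshot of oravol1 into the prepared snap volume
vxsnap -g UXDG make source=oravol1/snapvol=oravol1-snap
```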
(vxprint -hvt output: oravol1 is now 204800 sectors, with plex oravol1-01 built from
UXDISK1-01 (143056 on disk_0) and UXDISK2-01 (61744 on disk_1), its DCO volume
oravol1_dcl (67840, gen) on UXDISK2-02, and snap object oravol1-snap_snp. The snapshot
volume oravol1-snap has grown to 204800 sectors on subdisks from UXDISK3/UXDISK4
(disk_4, disk_5), with its own DCO volume oravol1-snap_dcl on UXDISK3-02 and snap
object oravol1_snp.)
Filesystem                     size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/oravol1-snap  100M   43M   53M     45%    /snaporavol1
drwxr-xr-x  2 root  root        96 Apr  1 14:04 lost+found
-rw------T  1 root  root  10485760 Apr  1 14:05 test1
-rw------T  1 root  root  10485760 Apr  1 14:05 run
-rw------T  1 root  root  10485760 Apr  1 14:05 unixarena
-rw------T  1 root  root  10485760 Apr  1 14:45 snaptest
Filesystem                size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/oravol1  100M   63M   35M     65%    /apporavol1
You can see that the snapshot still holds the data from the last refresh state.
# df -h /snaporavol1/
Filesystem                     size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/oravol1-snap  100M   43M   53M     45%    /snaporavol1
Filesystem                     size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/oravol1-snap  100M   63M   35M     65%    /snaporavol1
# ls -lrt /snaporavol1/snapshot-test1
-rw------T  1 root  root  20971520 Apr  1 15:09 /snaporavol1/snapshot-test1
In this way you can mount the snap volume and take a backup without impacting the performance
of the production volume, e.g. for database filesystems when you are performing the backup on the
production server.
9. SPLIT snap-volume to new diskgroup:
We can split the snapshot to a new diskgroup using the below procedure. Using this method, we
can import the snapshot diskgroup on backup servers to back up the volumes.
# vxdg split UXDG UXDG-SNAP oravol1-snap
VxVM vxdg ERROR V-5-1-4597 vxdg split UXDG UXDG-SNAP failed
oravol1-snap : Volume or plex device is open or attached
bash-3.00# umount /snaporavol1/
bash-3.00#
NAME       STATE        ID
UXDG       enabled,cds  1364804784.18.sfos
UXDG-SNAP  enabled,cds  1364810218.19.sfos
(vxprint output for the split diskgroup: volume oravol1-snap, 204800 sectors, fsgen, with
plex oravol1-snap-01 (CONCAT) built from subdisks UXDISK3-01 (20480), UXDISK4-01 and
UXDISK3-03, plus its DCO volume oravol1-snap_dcl (67840, gen) on disk_4.)
Filesystem                          size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG-SNAP/oravol1-snap  100M   63M   35M     65%    /snaporavol1
Once you split the snapshot volume into a new diskgroup, you can't update (refresh) the snapshot.
NAME          SNAPOBJECT         TYPE    PARENT   SNAPSHOT      %DIRTY  %VALID
oravol1       --                 volume  --       --            --      100.00
              oravol1-snap_snp1  volume  --       oravol1-snap  0.00    --
oravol1-snap  oravol1_snp        volume  oravol1  --            0.00    --
drwxr-xr-x  2 root  root        96 Apr  1 14:04 lost+found
-rw------T  1 root  root  10485760 Apr  1 14:05 test1
-rw------T  1 root  root  10485760 Apr  1 14:05 run
-rw------T  1 root  root  10485760 Apr  1 14:05 unixarena
-rw------T  1 root  root  10485760 Apr  1 14:45 snaptest
-rw------T  1 root  root  20971520 Apr  1 15:09 snapshot-test1
#vxprint -hvt
Disk group: UXDG
v  oravol1     -           ENABLED  ACTIVE  204800  SELECT  -       fsgen
pl oravol1-01  oravol1     ENABLED  ACTIVE  204800  CONCAT  -       RW
sd UXDISK6-01  oravol1-01  UXDISK6  0       204800  0       disk_7  ENA
v  cachevol     -            ENABLED  ACTIVE  20480  SELECT  -       fsgen
pl cachevol-01  cachevol     ENABLED  ACTIVE  20480  CONCAT  -       RW
sd UXDISK1-01   cachevol-01  UXDISK1  0       20480  0       disk_0  ENA
co cacheobj     cachevol
v  cachevol     cacheobj     ENABLED  ACTIVE  20480  SELECT  -       fsgen
pl cachevol-01  cachevol     ENABLED  ACTIVE  20480  CONCAT  -       RW
sd UXDISK1-01   cachevol-01  UXDISK1  0       20480  0       disk_0  ENA
(vxprint output: oravol1, 204800 sectors, fsgen, with plex oravol1-01 on UXDISK6-01/disk_7
and DCO volume oravol1_dcl (67840, gen) on UXDISK6-02/disk_7, carrying snap object
snap-oravol1_snp; the space-optimized snapshot volume snap-oravol1 (204800, fsgen) is
backed by the cache object, with its DCO volume snap-oravol1_dcl (67840) on
UXDISK1-02/disk_0 and snap object oravol1_snp.)
Filesystem                     size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/snap-oravol1  100M   63M   35M     65%    /snaporavol1
Filesystem                     size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/snap-oravol1  100M   74M   24M     76%    /snaporavol1
5. Detach/Attach snapshot:
Detach the snapshot from the volume:
# vxsnap -g UXDG dis snap-oravol1
Attaching the space-optimized snapshot back to the volume: you cannot use the reattach command for
a space-optimized snapshot.
# vxsnap -g UXDG reattach snap-oravol1 source=oravol1
VxVM vxplex ERROR V-5-1-6390 Cannot reattach space optimized snapshot to a
volume
Just refresh the snapshot with the source to reattach it.
# vxsnap -g UXDG -r rm snap-oravol1
# vxprint -hvt
Disk group: UXDG
v  cachevol     cacheobj     ENABLED  ACTIVE  20480  SELECT  -       fsgen
pl cachevol-01  cachevol     ENABLED  ACTIVE  20480  CONCAT  -       RW
sd UXDISK4-01   cachevol-01  UXDISK4  0       20480  0       disk_5  ENA
v  oravol1          -               ENABLED  ACTIVE  204800  SELECT  -       fsgen
pl oravol1-01       oravol1         ENABLED  ACTIVE  204800  CONCAT  -       RW
sd UXDISK1-01       oravol1-01      UXDISK1  0       143056  0       disk_0  ENA
sd UXDISK2-01       oravol1-01      UXDISK2  0       61744   143056  disk_1  ENA
dc oravol1_dco      oravol1         oravol1_dcl
v  oravol1_dcl      -               ENABLED  ACTIVE  67840   SELECT  -       gen
pl oravol1_dcl-01   oravol1_dcl     ENABLED  ACTIVE  67840   CONCAT  -       RW
sd UXDISK2-02       oravol1_dcl-01  UXDISK2  61744   67840   0       disk_1  ENA
If you get the below error, it is possible that the cache object has been stopped:
# vxedit -g UXDG -rf rm snap-oravol1
VxVM vxedit ERROR V-5-1-10128
v  oravol1          -               ENABLED  ACTIVE  204800  SELECT  -       fsgen
pl oravol1-01       oravol1         ENABLED  ACTIVE  204800  CONCAT  -       RW
sd UXDISK1-01       oravol1-01      UXDISK1  0       143056  0       disk_0  ENA
sd UXDISK2-01       oravol1-01      UXDISK2  0       61744   143056  disk_1  ENA
dc oravol1_dco      oravol1         oravol1_dcl
v  oravol1_dcl      -               ENABLED  ACTIVE  67840   SELECT  -       gen
pl oravol1_dcl-01   oravol1_dcl     ENABLED  ACTIVE  67840   CONCAT  -       RW
sd UXDISK2-02       oravol1_dcl-01  UXDISK2  61744   67840   0       disk_1  ENA
We are going to see the traditional mirror break-off snapshot. To perform this snapshot, we need
free space equal to the volume size in the diskgroup. It typically mirrors the volume and
then makes the new plex into a snapshot volume for backup operations.
High-level plan for backup of a database volume using third-mirror break-off:
1. Prepare the volume for snapshot.
2. Add a mirror using vxassist or vxsnap.
v  oravol1     -           ENABLED  ACTIVE  204800  SELECT  -       fsgen
pl oravol1-01  oravol1     ENABLED  ACTIVE  204800  CONCAT  -       RW
sd UXDISK1-01  oravol1-01  UXDISK1  0       143056  0       disk_0  ENA
sd UXDISK2-01  oravol1-01  UXDISK2  0       61744   143056  disk_1  ENA
USING VXASSIST
1. Create the mirror plex.
By the end of this step, you can see the new plex created with its state in SNAPDONE.
#vxassist -g UXDG snapstart oravol1
#vxprint -hvt
Disk group: UXDG
v  oravol1     -           ENABLED  ACTIVE    204800  SELECT  -       fsgen
pl oravol1-01  oravol1     ENABLED  ACTIVE    204800  CONCAT  -       RW
sd UXDISK1-01  oravol1-01  UXDISK1  0         143056  0       disk_0  ENA
sd UXDISK2-01  oravol1-01  UXDISK2  0         61744   143056  disk_1  ENA
pl oravol1-02  oravol1     ENABLED  SNAPDONE  204800  CONCAT  -       WO
sd UXDISK3-01  oravol1-02  UXDISK3  0         143056  0       disk_4  ENA
sd UXDISK4-01  oravol1-02  UXDISK4  0         61744   143056  disk_5  ENA
Note: If you have an additional mirror configured with the volume, you can use that
plex as the snapshot using the below command.
# vxplex -g DG_NAME convert state=SNAPDONE plex_name
2. To take the snapshot:
This step will break the mirror plex off into a separate volume, so that we can mount it at a
different mount point and the backup can be performed without touching the actual database
volume (oravol1).
#vxassist -g UXDG snapshot oravol1
#vxprint -hvt
Disk group: UXDG
v  SNAP-oravol1  -             ENABLED  ACTIVE  204800  ROUND   -       fsgen
pl oravol1-02    SNAP-oravol1  ENABLED  ACTIVE  204800  CONCAT  -       RW
sd UXDISK3-01    oravol1-02    UXDISK3  0       143056  0       disk_4  ENA
sd UXDISK4-01    oravol1-02    UXDISK4  0       61744   143056  disk_5  ENA

v  oravol1       -             ENABLED  ACTIVE  204800  SELECT  -       fsgen
pl oravol1-01    oravol1       ENABLED  ACTIVE  204800  CONCAT  -       RW
sd UXDISK1-01    oravol1-01    UXDISK1  0       143056  0       disk_0  ENA
sd UXDISK2-01    oravol1-01    UXDISK2  0       61744   143056  disk_1  ENA
Filesystem                     size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/SNAP-oravol1  100M   74M   24M     76%    /snaporavol1
v  oravol1     -           ENABLED  ACTIVE    204800  SELECT  -       fsgen
pl oravol1-01  oravol1     ENABLED  ACTIVE    204800  CONCAT  -       RW
sd UXDISK1-01  oravol1-01  UXDISK1  0         143056  0       disk_0  ENA
sd UXDISK2-01  oravol1-01  UXDISK2  0         61744   143056  disk_1  ENA
pl oravol1-02  oravol1     ENABLED  SNAPDONE  204800  CONCAT  -       WO
sd UXDISK3-01  oravol1-02  UXDISK3  0         143056  0       disk_4  ENA
sd UXDISK4-01  oravol1-02  UXDISK4  0         61744   143056  disk_5  ENA
USING VXSNAP:
The above process can be done using the vxsnap command as well.
1. Prepare the volume for snapshot:
# vxsnap -g UXDG prepare oravol1 ndcomirs=2 drl=off
bash-3.00# vxprint -hvt
Disk group: UXDG
v  oravol1          -               ENABLED  ACTIVE  204800  SELECT  -       fsgen
pl oravol1-01       oravol1         ENABLED  ACTIVE  204800  CONCAT  -       RW
sd UXDISK1-01       oravol1-01      UXDISK1  0       143056  0       disk_0  ENA
sd UXDISK2-01       oravol1-01      UXDISK2  0       61744   143056  disk_1  ENA
dc oravol1_dco      oravol1         oravol1_dcl
v  oravol1_dcl      -               ENABLED  ACTIVE  67840   SELECT  -       gen
pl oravol1_dcl-01   oravol1_dcl     ENABLED  ACTIVE  67840   CONCAT  -       RW
sd UXDISK3-01       oravol1_dcl-01  UXDISK3  0       67840   0       disk_4  ENA
pl oravol1_dcl-02   oravol1_dcl     ENABLED  ACTIVE  67840   CONCAT  -       RW
sd UXDISK4-01       oravol1_dcl-02  UXDISK4  0       67840   0       disk_5  ENA
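The SNAPATT plex that appears in the next listing is added with vxsnap addmir; that command is missing from this copy, but a sketch of it, assuming the disk used in the output, would be:

```shell
# Add a snapshot mirror (break-off candidate plex) to the prepared volume
vxsnap -g UXDG addmir oravol1 alloc=UXDISK6
```

The plex state goes through SNAPATT while syncing and reaches SNAPDONE when the mirror is fully attached.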
v  oravol1          -               ENABLED  ACTIVE   204800  SELECT  -       fsgen
pl oravol1-01       oravol1         ENABLED  ACTIVE   204800  CONCAT  -       RW
sd UXDISK1-01       oravol1-01      UXDISK1  0        143056  0       disk_0  ENA
sd UXDISK2-01       oravol1-01      UXDISK2  0        61744   143056  disk_1  ENA
pl oravol1-02       oravol1         ENABLED  SNAPATT  204800  CONCAT  -       WO
sd UXDISK6-01       oravol1-02      UXDISK6  0        204800  0       disk_7  ENA
dc oravol1_dco      oravol1         oravol1_dcl
v  oravol1_dcl      -               ENABLED  ACTIVE   67840   SELECT  -       gen
pl oravol1_dcl-01   oravol1_dcl     ENABLED  ACTIVE   67840   CONCAT  -       RW
sd UXDISK3-01       oravol1_dcl-01  UXDISK3  0        67840   0       disk_4  ENA
pl oravol1_dcl-02   oravol1_dcl     ENABLED  ACTIVE   67840   CONCAT  -       RW
sd UXDISK4-01       oravol1_dcl-02  UXDISK4  0        67840   0       disk_5  ENA
pl oravol1_dcl-03   oravol1_dcl     ENABLED  ACTIVE   67840   CONCAT  -       RW
sd UXDISK6-02       oravol1_dcl-03  UXDISK6  204800   67840   0       disk_7  ENA
sp oravol1_cpmap    oravol1         oravol1_dco
v  oravol1          -               ENABLED  ACTIVE    204800  SELECT  -       fsgen
pl oravol1-01       oravol1         ENABLED  ACTIVE    204800  CONCAT  -       RW
sd UXDISK1-01       oravol1-01      UXDISK1  0         143056  0       disk_0  ENA
sd UXDISK2-01       oravol1-01      UXDISK2  0         61744   143056  disk_1  ENA
pl oravol1-02       oravol1         ENABLED  SNAPDONE  204800  CONCAT  -       WO
sd UXDISK6-01       oravol1-02      UXDISK6  0         204800  0       disk_7  ENA
dc oravol1_dco      oravol1         oravol1_dcl
v  oravol1_dcl      -               ENABLED  ACTIVE    67840   SELECT  -       gen
pl oravol1_dcl-01   oravol1_dcl     ENABLED  ACTIVE    67840   CONCAT  -       RW
sd UXDISK3-01       oravol1_dcl-01  UXDISK3  0         67840   0       disk_4  ENA
pl oravol1_dcl-02   oravol1_dcl     ENABLED  ACTIVE    67840   CONCAT  -       RW
sd UXDISK4-01       oravol1_dcl-02  UXDISK4  0         67840   0       disk_5  ENA
pl oravol1_dcl-03   oravol1_dcl     ENABLED  ACTIVE    67840   CONCAT  -       RW
sd UXDISK6-02       oravol1_dcl-03  UXDISK6  204800    67840   0       disk_7  ENA
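Once the plex is in SNAPDONE state, the break-off itself is done with vxsnap make; the command is missing from this copy, but a sketch of it, matching the names in the next listing, would be:

```shell
# Break the SNAPDONE plex off into a new snapshot volume
vxsnap -g UXDG make source=oravol1/newvol=snaporavol1/plex=oravol1-02
```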
v  oravol1              -                   ENABLED  ACTIVE  204800  SELECT  -       fsgen
pl oravol1-01           oravol1             ENABLED  ACTIVE  204800  CONCAT  -       RW
sd UXDISK1-01           oravol1-01          UXDISK1  0       143056  0       disk_0  ENA
sd UXDISK2-01           oravol1-01          UXDISK2  0       61744   143056  disk_1  ENA
dc oravol1_dco          oravol1             oravol1_dcl
v  oravol1_dcl          -                   ENABLED  ACTIVE  67840   SELECT  -       gen
pl oravol1_dcl-01       oravol1_dcl         ENABLED  ACTIVE  67840   CONCAT  -       RW
sd UXDISK3-01           oravol1_dcl-01      UXDISK3  0       67840   0       disk_4  ENA
pl oravol1_dcl-02       oravol1_dcl         ENABLED  ACTIVE  67840   CONCAT  -       RW
sd UXDISK4-01           oravol1_dcl-02      UXDISK4  0       67840   0       disk_5  ENA
sp snaporavol1_snp      oravol1             oravol1_dco

v  snaporavol1          -                   ENABLED  ACTIVE  204800  ROUND   -       fsgen
pl oravol1-02           snaporavol1         ENABLED  ACTIVE  204800  CONCAT  -       RW
sd UXDISK6-01           oravol1-02          UXDISK6  0       204800  0       disk_7  ENA
dc snaporavol1_dco      snaporavol1         snaporavol1_dcl
v  snaporavol1_dcl      -                   ENABLED  ACTIVE  67840   ROUND   -       gen
pl snaporavol1_dcl-01   snaporavol1_dcl     ENABLED  ACTIVE  67840   CONCAT  -       RW
sd UXDISK6-02           snaporavol1_dcl-01  UXDISK6  204800  67840   0       disk_7  ENA
sp oravol1_snp          snaporavol1         snaporavol1_dco
Filesystem                    size  used  avail capacity  Mounted on
/dev/vx/dsk/UXDG/snaporavol1  100M   74M   24M     76%    /snaporavol1