Symantec Professional Documents
Jul, 2011
Author: Symantec Consultant
Contents

1.1.0  Rebooting a node
1.1.1  Checking cluster status
1.1.2  Bringing a service group online
1.1.3  Taking a service group offline
1.1.4
1.1.5  Taking a resource offline
1.1.6  Bringing a resource online
1.1.7  Clearing a faulted resource
1.1.8  Flushing a service group
1.1.9  Checking GAB port memberships
1.1.10 Cluster RAC/CFS ports
1.1.11 Listing disks
1.1.12 Listing disk groups
1.1.13 Disk group free space
1.1.14 Listing volumes
1.1.15 Disk group
1.1.16 Disk group
1.1.17 Striped volumes
1.1.18 Creating a file system
1.1.19 Mounting a file system
1.1.20 Checking the cluster engine log
1.1.21 Volume Manager
1.1.22 Removing a volume
1.1.23 Disk group
1.1.24 Resizing a volume and file system
1.1.25 Disabling fencing
1.1.26 Shutting down a node
1.1.27 Serial number
1.1.28 License key
1.1.29 License key
1.1.30 Checking disk path states
1.1.31 Seeding the cluster manually
1.1.32 HBA I/O
1.1.33 Re-enabling an HBA
1.1.34 Creating a volume for Oracle
1.1.35 Stopping the cluster
1.1.36 Importing a disk group and starting volumes
1.1.37 Deporting a disk group
2 Managing the cluster from the GUI
3 Storage Foundation administration
3.1 DMP
3.1.1  Excluding devices from multipathing
3.1.2  Listing disks
3.1.3  Listing DMP nodes and paths
3.1.4  Managing controllers
3.1.5  Listing enclosures
3.1.6  DMP I/O policies
3.1.7  Upgrading disk controller firmware
3.1.8  Setting I/O recovery options
3.1.9  ASL/APM
3.1.10 Adding JBOD arrays
3.1.11 Device naming schemes
3.1.12 Regenerating persistent device names
3.1.13 CVM policies
3.3.1  Snapshot overview
3.3.2  DCO version 20
3.3.3  Full-sized instant snapshots
3.3.4  Space-optimized instant snapshots
3.3.5  Emulation of third-mirror break-off snapshots
3.3.6  Snapshot operations
3.3.6.1 Waiting for snapshot synchronization
3.3.6.2 Displaying snapshot information
3.3.6.3 Converting a mirror plex to a snapshot plex
3.3.6.4 Converting a snapshot plex to a mirror plex
3.3.6.5 SNAPDONE plexes
3.3.6.6 Dissociating a snapshot
3.3.6.7 Removing a snapshot
3.4.1  Mirroring notes
3.4.2  Mirror administration
3.4.3  Tuning
3.4.4  Recovering from serial split brain (SSB)
3.5.1
3.5.2  Site mirroring
3.5.3  Disk group site mirroring
3.5.4  Site operations
1.1 Daily operations

1.1.0 Rebooting a node
On SFRAC/SFCFS for AIX, reboot a node with shutdown -ry 0 rather than reboot:
shutdown -ry 0 runs the K (kill) scripts under /etc/rc.d, so the cluster stack is
stopped cleanly before the node goes down; reboot does not. After the reboot, check
errpt for errors. If stale fencing keys prevent the disk group from importing, clear
them with vxfenclearpre.
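The steps above can be sketched as follows (the vxfenclearpre path is assumed; adjust it to your installation):

```shell
# Reboot a SFRAC/SFCFS node cleanly: shutdown runs the /etc/rc.d
# K-scripts, so the cluster stack stops before the node goes down.
shutdown -ry 0

# After the node comes back, check the AIX error log for problems.
errpt | more

# If leftover SCSI-3 fencing keys block the disk group import,
# clear them with vxfenclearpre (assumed install path shown).
/opt/VRTSvcs/vxfen/bin/vxfenclearpre
```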
1.1.1 Checking cluster status
rp84db1:/# hastatus -sum

-- SYSTEM STATE
-- System         State     Frozen
A  rp84db1        RUNNING
A  rp84db2        RUNNING

-- GROUP STATE
-- Group          System    Probed  AutoDisabled  State
B  Oradb          rp84db1                         ONLINE
B  Oradb          rp84db2                         ONLINE
B  ccolap_sg      rp84db1                         OFFLINE
B  ccolap_sg      rp84db2                         ONLINE
B  cvm            rp84db1                         ONLINE
B  cvm            rp84db2                         ONLINE

rp84db1:/# hastatus
attempting to connect....connected
group            resource          system     message
---------------- ----------------- ---------- ---------
                                   rp84db1    RUNNING
                                   rp84db2    RUNNING
Oradb                              rp84db1    ONLINE
Oradb                              rp84db2    ONLINE
cvm                                rp84db1    ONLINE
cvm                                rp84db2    ONLINE
ccolap_sg                          rp84db2    ONLINE
ccolap_sg                          rp84db1    OFFLINE
                 CFSocrvote        rp84db1    ONLINE
                 CFSocrvote        rp84db2    ONLINE
                 CFSoradb          rp84db1    ONLINE
                 CFSoradb          rp84db2    ONLINE
                 CFSorafb          rp84db1    ONLINE
                 CFSorafb          rp84db2    ONLINE
                 DGocrvote         rp84db1    ONLINE
                 DGocrvote         rp84db2    ONLINE
                 DGora             rp84db1    ONLINE
                 DGora             rp84db2    ONLINE
                 vxfsckd           rp84db1    ONLINE
                 vxfsckd           rp84db2    ONLINE
                 cvm_clus          rp84db1    ONLINE
                 cvm_clus          rp84db2    ONLINE
                 cvm_vxconfigd     rp84db1    ONLINE
                 cvm_vxconfigd     rp84db2    ONLINE
                 ccolap_dg         rp84db2    ONLINE
                 ccolap_dg         rp84db1    OFFLINE
                 ccolap_oradb_vol  rp84db2    ONLINE
                 ccolap_oradb_vol  rp84db1    OFFLINE
                 ccolap_orafb_vol  rp84db2    ONLINE
                 ccolap_orafb_vol  rp84db1    OFFLINE
                 ccolap_etl_vol    rp84db2    ONLINE
                 ccolap_etl_vol    rp84db1    OFFLINE
                 ccolap_oradb_mnt  rp84db2    ONLINE
                 ccolap_oradb_mnt  rp84db1    OFFLINE
                 ccolap_orafb_mnt  rp84db2    ONLINE
                 ccolap_orafb_mnt  rp84db1    OFFLINE
                 ccolap_etl_mnt    rp84db2    ONLINE
                 ccolap_etl_mnt    rp84db1    OFFLINE
                 ccolap_ip         rp84db2    ONLINE
                 ccolap_ip         rp84db1    OFFLINE
                 ccolap_nic        rp84db2    ONLINE
                 ccolap_nic        rp84db1    ONLINE
                 ccolap_oracle     rp84db2    ONLINE
                 ccolap_oracle     rp84db1    OFFLINE
                 ccolap_listener   rp84db2    ONLINE
                 ccolap_listener   rp84db1    OFFLINE
1.1.2 Bringing a service group online
# hagrp -online <service group name> -sys <host name>
1.1.3 Taking a service group offline
# hagrp -offline <service group name> -sys <host name>
1.1.4
1.1.5 Taking a resource offline
# hares -offline <resource name> -sys <host name>
1.1.6 Bringing a resource online
# hares -online <resource name> -sys <host name>
1.1.7 Clearing a faulted resource
# hares -clear <resource name> -sys <host name>
1.1.8 Flushing a service group
# hagrp -flush <service group name> -sys <host name>
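A hypothetical walk-through using the names from the hastatus output above (the group and resource names are from this document; the scenario itself is an example):

```shell
# Fail the ccolap_sg service group over from rp84db2 to rp84db1.
hagrp -switch ccolap_sg -to rp84db1

# Or do the same thing by hand: offline on one node, online on the other.
hagrp -offline ccolap_sg -sys rp84db2
hagrp -online  ccolap_sg -sys rp84db1

# If a resource faulted during the move, clear it and retry.
hares -clear ccolap_oracle -sys rp84db1
```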
1.1.9 Checking GAB port memberships
rp84db1:/# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen 705d01 membership 01
Port b gen 705d07 membership 01
Port d gen 705d06 membership 01
Port f gen 705d0f membership 01
Port h gen 705d05 membership 01
Port o gen 705d04 membership 01
Port v gen 705d0b membership 01
Port w gen 705d0d membership 01
1.1.11 Listing disks
# vxdisk -o alldgs list
DEVICE       TYPE          DISK         GROUP          STATUS
c0t6d0       auto:LVM      -            -              LVM
c2t6d0       auto:LVM      -            -              LVM
c25t0d1      auto:cdsdisk  -            (vxfencoorddg) online
c25t0d3      auto:cdsdisk  -            (vxfencoorddg) online
c25t0d4      auto:LVM      -            -              LVM
c25t0d6      auto:cdsdisk  -            (vxfencoorddg) online
c25t1d0      auto:cdsdisk  -            (ccolapdg)     online
c25t1d1      auto:cdsdisk  -            (ccolapdg)     online
c31t0d2      auto:cdsdisk  ocrvotedg01  ocrvotedg      online shared
c31t0d5      auto:cdsdisk  oradg02      oradg          online shared
c31t0d7      auto:cdsdisk  oradg01      oradg          online shared
c31t1d2      auto:cdsdisk  -            (ccolapdg)     online

1.1.12 Listing disk groups
# vxdg list
NAME         STATE                 ID
oradg        enabled,shared,cds    1237977903.46.rp84db1
ocrvotedg    enabled,shared,cds    1237979570.48.rp84db1

1.1.13 Disk group free space
# vxdg free
DISK         DEVICE    TAG       OFFSET     LENGTH     FLAGS
oradg01      c31t0d7   c31t0d7   398508160  20872960   -
ocrvotedg01  c31t0d2   c31t0d2   972800     26496      -
1.1.14 Listing volumes
# vxprint -ht
(output abridged)

Disk group: oradg

dg oradg        default  default  45000  1237977903.46.rp84db1
dm oradg01      c31t0d7  auto     32768  419381120  -
dm oradg02      c31t0d5  auto     32768  52379520   -
v  oradbvol     fsgen    398458880
pl oradbvol-01  oradbvol 398458880  0  RW  c31t0d7  ENA
pl orafbvol-01  orafbvol 52379520   0  RW  c31t0d5  ENA

Disk group: ocrvotedg

dg ocrvotedg    default  default  35000  1237979570.48.rp84db1
dm ocrvotedg01  c31t0d2  auto     32768  999296  -
v  ocrvotevol   fsgen
1.1.18 Creating a file system
# mkfs -V vxfs -o largefiles /dev/vx/rdsk/<disk group name>/<volume name>
1.1.19 Mounting a file system
Cluster mount:
# mount -V vxfs -o cluster,largefiles /dev/vx/dsk/<disk group name>/<volume name> /<mount point name>
Local mount:
# mount -V vxfs -o largefiles /dev/vx/dsk/<disk group name>/<volume name> /<mount point name>
1.1.20 Checking the cluster engine log
# tail -f /var/VRTSvcs/log/engine_A.log
2006/10/22 19:44:25 VCS NOTICE V-16-1-10447 Group ysdb_sg is online on system rp84db1
2006/10/22 19:44:25 VCS INFO V-16-6-15004 (rp84db1) hatrigger:Failed to send trigger for nfs_restart; script doesn't exist
2006/10/22 19:44:25 VCS INFO V-16-6-15004 (rp84db1) hatrigger:Failed to send trigger for postonline; script doesn't exist
2006/10/22 20:55:53 VCS INFO V-16-1-10077 Received new cluster membership
1.1.22 Removing a volume
# vxedit -g <disk group name> -rf rm <volume name>
1.1.24 Resizing a volume and file system
To resize a file system together with the volume that contains it, use the vxresize
command, run from the CVM master node. To resize the volume or the file system
independently of each other, run the command from the CVM master node or the CFS
primary node respectively.
To determine the primary node for a file system in a cluster, type:
# fsclustadm -v showprimary mount_point
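The resize itself can be sketched as follows. The disk group and volume names are taken from the vxprint output earlier in this document; the target size and the /oradb mount point are examples:

```shell
# Grow oradbvol and the VxFS file system on it to 200g in one step.
# Run this on the CVM master node.
vxresize -g oradg -F vxfs oradbvol 200g

# Growing only the file system (run on the CFS primary for that mount):
fsadm -V vxfs -b 200g /oradb
```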
1.1.25 Disabling fencing
You may have to disable fencing in the following cases:
- The cluster has been upgraded to the latest SFCFS stack and the storage does not support SCSI-3 PR.
- During installation fencing was turned on but later you want to turn it off.
By default, the VxFEN driver operates with I/O fencing enabled. To disable this feature
without removing the coordinator disks, create the file /etc/vxfenmode with a string
that notifies the VxFEN driver, then stop and restart the driver:
# echo "vxfen_mode=disabled" > /etc/vxfenmode
# /etc/rc.d/rc2.d/S97vxfen stop
# /etc/rc.d/rc2.d/S97vxfen start
1.1.26 Shutting down a node
Do not use reboot; use shutdown so that the stop scripts run:
# shutdown -Fr          (fast reboot)
# shutdown -r -y now
1.1.30 Checking disk path states
Method 1:
# vxdisk list <disk name>
The multipathing section of the output lists the state of each path, for example:
hdisk77  state=enabled
hdisk115 state=enabled
hdisk153 state=enabled
A path in the disabled state carries no I/O; an enabled path does.
1.1.31 Seeding the cluster manually
To manually seed the cluster (RAC/CFS):
# /sbin/gabconfig -c -x
1.1.33 Re-enabling an HBA
After the HBA or its cabling is restored, rescan the devices and re-enable the controller:
# cfgmgr
# vxdmpadm enable ctlr=<ctlr name>
1.1.34 Creating a volume for Oracle
# vxassist -g <disk group name> make <volume name> <size>
# vxedit -g <disk group name> set user=oracle group=dba mode=660 <volume name>
1.1.35 Stopping the cluster
Stop the Oracle instances and listeners first, then stop the cluster on all nodes:
# hastop -all
1.1.37 Deporting a disk group
A disk group (for example one holding archive logs) can be deported once its file
systems are unmounted:
# umount <mount_point>
# vxdg deport <disk group>
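The full sequence can be sketched as follows (oradg is from this document; the /oradb mount point and the Oracle shutdown commands are examples, adjust to the environment):

```shell
# Stop Oracle and the listener first, as the oracle user.
su - oracle -c 'echo "shutdown immediate" | sqlplus / as sysdba'
su - oracle -c 'lsnrctl stop'

hastop -all          # stop VCS on all nodes
umount /oradb        # unmount any remaining file systems
vxdg deport oradg    # deport the disk group
```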
2 Managing the cluster from the GUI
2.1 Connecting to the cluster
The cluster GUI can be started in two ways:
- under X Windows, with hagui
- on Windows, with the Java-based Cluster Administrator
Log in with the admin password to connect to the cluster.
From the GUI, the following operations are available on a service group:
online, offline, switch, clear fault, freeze, unfreeze, and flush (for hung operations).
On a resource: online, offline, clear fault, enabled, critical, and delete.
(The remainder of this chapter consisted of screenshots.)
3 Storage Foundation administration
3.1 DMP
3.1.1 Excluding devices from multipathing
Use the vxdiskadm options "Prevent multipathing" / "Suppress devices from VxVM's view"
(required, for example, when the disks are under Sun Cluster control).
3.1.2 Listing disks
# vxdisk list
# vxdisk -o alldgs list
# vxdisk path
# vxdisk -e list
# vxdisk -p list
3.1.3 Listing DMP nodes and paths
# vxdmpadm list dmpnode all
# vxdmpadm getsubpaths ctlr=scsi2
# vxdmpadm getsubpaths enclosure=HDS9500V0
3.1.4 Managing controllers
# vxdmpadm listctlr all
# vxdmpadm getctlr c5
# vxdmpadm [-c|-f] disable ctlr=ctlr_name
# vxdmpadm enable ctlr=ctlr_name
3.1.5 Listing enclosures
# vxdmpadm listenclosure all
This lists the enclosures with their LUNs and array type (A/A, A/P, A/P-C, A/A-A).
3.1.7 Upgrading disk controller firmware
To upgrade the disk controller firmware:
1 Disable the plex that is associated with the disk device:
# /opt/VRTS/bin/vxplex -g diskgroup det plex
(The example is a volume mirrored across 2 controllers on one HBA.)
2 Stop I/O to all disks through one controller of the HBA:
# /opt/VRTS/bin/vxdmpadm disable ctlr=first_cntlr
For the other controller on the HBA, enter:
# /opt/VRTS/bin/vxdmpadm -f disable ctlr=second_cntlr
3 Upgrade the firmware on those disks for which the controllers have been
disabled, using the procedures that you obtained from the disk drive vendor.
Notes:
a. Watch out for SCSI-3 registration keys!!!
b. Afterwards, run vxdctl enable to rescan.
3.1.8 Setting I/O recovery options
DMP I/O retry and throttling can be tuned per enclosure, array name, or array type:
# vxdmpadm setattr \
{enclosure enc-name|arrayname name|arraytype type} \
recoveryoption=fixedretry retrycount=n
# vxdmpadm setattr \
{enclosure enc-name|arrayname name|arraytype type} \
recoveryoption=timebound iotimeout=seconds
# vxdmpadm setattr \
{enclosure enc-name|arrayname name|arraytype type} \
recoveryoption=nothrottle
# vxdmpadm setattr \
{enclosure enc-name|arrayname name|arraytype type} \
recoveryoption=throttle {iotimeout=seconds|queuedepth=n}
The current tunable values can be displayed with:
# vxdmpadm gettune
Tunable                     Current Value   Default Value
dmp_failed_io_threshold     57600           28800
dmp_retry_count             5               5
dmp_pathswitch_blks_shift   11              9
dmp_queue_depth             32              32
dmp_cache_open              on              on
dmp_daemon_count            10              10
dmp_scsi_timeout            30              30
dmp_delayq_interval         15              15
dmp_path_age                300             300
dmp_stat_interval           1               1
dmp_health_time             60              60
dmp_probe_idle_lun          on              on
dmp_log_level               1               1
dmp_fast_recovery           on              on
dmp_enable_restore          on              on
dmp_restore_policy          check_disabled  check_disabled
dmp_restore_interval        300             300
dmp_restore_cycles          10              10
dmp_monitor_fabric          off             on
3.1.10 Adding JBOD arrays
To add an unsupported disk array as a JBOD in A/A or A/P mode:
# vxddladm addjbod vid=vendorid [pid=productid] \
[serialnum=opcode/pagecode/offset/length] \
[cabinetnum=opcode/pagecode/offset/length] [policy={aa|ap}]
The SCSI inquiry data needed for the vid/pid values can be obtained with:
# /etc/vx/diag.d/vxdmpinq /dev/hdisk10
Then rescan and verify:
# vxdctl enable
# vxddladm listjbod
3.1.11 Device naming schemes
# vxddladm get namingscheme
# vxddladm set namingscheme=ebn [persistence={yes|no}] \
[use_avid=yes|no] [lowercase=yes|no]
# vxddladm set namingscheme=osn [persistence={yes|no}] \
[lowercase=yes|no]
On HP-UX 11.31 in the new (agile) naming mode, DMP uses the diskX device names.
3.1.12 Regenerating persistent device names
Available from SF 5.0 MP3.
To regenerate the persistent names repository, use the following command:
# vxddladm [-c] assign names
To update the disk names so that they correspond to the new path names:
1 Remove the files that contain the existing persistent device name database:
# rm /etc/vx/disk.info
# rm /dev/vx/rdmp/*
# rm /dev/vx/dmp/*
2 Restart the VxVM configuration daemon:
# vxconfigd -k
This regenerates the persistent name database.
3.1.13 cvm
# vxdg -g diskgroup set diskdetpolicy=local dgfailpolicy=leave
# /etc/vx/bin/vxclustadm nodestate
# /etc/vx/bin/vxclustadm nidmap
3.2.2 Clearing fencing keys, method 1
Stop HA on all nodes:
# hastop -all
Stop the vxfen driver:
# /etc/rc.d/rc2.d/K98vxfen stop
Then clear the keys with vxfenclearpre.
3.2.3 Clearing fencing keys, method 2
If keys are still registered after vxfenclearpre, clear them disk by disk with a
script such as the following:
VXFENADM=/sbin/vxfenadm
GREP=/usr/bin/grep
AWK=/usr/bin/awk
for i in `lsdev -Ccdisk|grep EMC|awk '{ print $1 }'`
do
echo --------------
echo checking vxfenadm /dev/r$i
key=`$VXFENADM -g /dev/r$i | $GREP Numeric | $AWK '{print $5}'`
if [ ! -z "$key" ]
then
echo "/dev/r$i" > /tmp/disk
for z in $key
do
#
# First make sure that we are not the
# owner of the key by deleting it.
#
$VXFENADM -x -K$z -f /tmp/disk > /dev/null 2>&1
done
for z in $key
do
#
# Even though it may have been our key,
# also do the register and preempt abort
# in case another node has the same key.
#
$VXFENADM -a -k"VERITASP" -f /tmp/disk > /dev/null 2>&1
$VXFENADM -p -V$z -k"VERITASP" -f /tmp/disk > /dev/null 2>&1
$VXFENADM -x -k"VERITASP" -f /tmp/disk
done
fi
done
3.2.4 Clearing fencing keys, method 3
Register a temporary key on the data disks, then use it to clear all keys:
# vxfenadm -a -k TMP -f /tmp/data_disks
# vxfenadm -c -k TMP -f /tmp/data_disks
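A sketch of preparing the disk list and clearing the keys, assuming EMC data disks as in the script of method 2 (the file name and device filter are examples):

```shell
# Build /tmp/data_disks: one raw device path per line.
lsdev -Ccdisk | awk '/EMC/ {print "/dev/r"$1}' > /tmp/data_disks

vxfenadm -a -k TMP -f /tmp/data_disks   # register a temporary key
vxfenadm -c -k TMP -f /tmp/data_disks   # clear all keys using it
```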
3.3 Snapshots
3.3.1 Overview
Storage Foundation 5 provides these snapshot types:
1. Traditional third-mirror break-off snapshot: created with vxassist using a
version 0 DCO; used, for example, by NBU server-free backups.
2. Full-sized instant snapshot:
6. Whether the snapshot was created with syncing=on or syncing=off, check it before use:
# fsck -V vxfs /dev/vx/dsk/diskgroup/snapvol
7. Before a refresh, the snapvol must be unmounted.
8. Before restoring the original volume from the snapvol, the snapvol must be unmounted.
4. Wait for the new mirror plex to reach the SNAPDONE state.
5. Then create the snapshot from the mirror:
# vxsnap [-g diskgroup] make source=volume[/newvol=snapvol]\
{/plex=plex1[,plex2,...]|/nmirror=number}
6. Check the snapshot file system before use:
# fsck -V vxfs /dev/vx/dsk/diskgroup/snapvol
7. To reattach the snapshot, the snapvol must be unmounted first:
# vxsnap [-g diskgroup] reattach snapvolume|snapvolume_set \
source=volume|volume_set [nmirror=number]
8. Before restoring the original volume from the snapvol, the snapvol must be unmounted.
9. A refresh likewise requires the snapvol to be unmounted.
3.3.6 Snapshot operations
3.3.6.1 Waiting for snapshot synchronization
After vxsnap make, a copy-on-write snapshot syncs in the background; to wait for
completion:
# vxsnap [-g diskgroup] syncwait snapvol
The same applies after a copy-on-write vxsnap refresh:
# vxsnap [-g diskgroup] syncwait snapvol
After adding a mirror with vxsnap addmir, wait with:
# vxsnap -g mydg snapwait vol1 nmirror=2
After vxsnap reattach, wait with:
# vxsnap -g mydg snapwait myvol nmirror=1
3.3.6.2 Displaying snapshot information
# vxsnap -g mydg print
# vxsnap [-g diskgroup] -n [-l] [-v] [-x] print [vol]
3.3.6.3 Converting a mirror plex to a snapshot plex
For a volume with a version 20 DCO:
# vxplex [-g diskgroup] -o dcoplex=dcologplex convert \
state=SNAPDONE plex
3.3.6.4 Converting a snapshot plex to a mirror plex
To turn a SNAPDONE plex back into an ordinary mirror plex:
# vxplex [-g diskgroup] convert state=ACTIVE plex
3.3.6.5 SNAPDONE plexes
A snapshot is taken from a plex in the SNAPDONE state.
3.3.6.6 Dissociating a snapshot
To dissociate a snapshot volume from its original volume:
# vxsnap [-f] [-g diskgroup] dis snapvolume
3.3.6.7 Removing a snapshot
To remove a snapshot, first dissociate it from the original volume:
# vxsnap [-f] [-g diskgroup] dis snapvolume
Then remove the snapshot volume itself.
3.3.7 Snapshot performance tuning
a. Choose the DCO region size with regard to voliomem_maxpool_sz. To change it,
unprepare and re-prepare the volume:
# vxsnap -g mydg -f unprepare vol1
# vxsnap -g mydg prepare vol1 regionsize=1M
Use the vxprint command on the DCO to discover its region size (in blocks):
# RSZ=`vxprint [-g diskgroup] -F%regionsz $DCONAME`
Valid region sizes are powers of two from 16K upwards; the default is 64K.
b. The synchronization speed of vxsnap/vxassist copy operations is controlled by the
iosize and slow attributes; for a running task, adjust it with vxtask set slow:
# vxtask -l list
# vxtask set slow=x tag
c. volpagemod_max_memsz should be raised when using instant snapshots of volumes
of 1TB and larger:
1. Change "volpagemod_max_memsz" online temporarily with the following command
(notice the value is followed by a "k"):
# vxtune volpagemod_max_memsz 65536k
2. To make the change permanent across reboots, add the following entry to the
/etc/vx/vxvm_tunables file by running the following command (notice the value is NOT
followed by a "k"):
# vxvoltune volpagemod_max_memsz 65536
On AIX the tunable can also be set through smitty; on Solaris, in /kernel/drv/vxio.conf.
3.4 Mirroring
3.4.1 Notes
1. For a layered volume, the DCO volume is associated with the subvolumes of the
layered volume.
2. Whether vxreattach uses FMR to resynchronize plexes is controlled by a defaults
file, as the script itself explains:
# As a part of incident 108818, it was decided that we will decide
# whether to use FMR for sync'ing plexes while reattach'ing the disks,
# depending upon the default file. Fmr will not be used if the default
# file does not exist
#
p_opt="-o plex:nofmr"
default_file="/etc/default/vxreattach"
Create /etc/default/vxreattach if vxreattach should use FMR.
3. To make vxassist allocate mirrors across enclosures (mirror=enclosure):
a. echo "mirror=enclosure" >> /etc/default/vxassist
b. pass the attribute explicitly when resizing: vxresize ... mirror=enclr
4. To disable hot-relocation, comment out the vxrelocd line in /etc/init.d/vxvm-recover.
5. vxresize of a layered volume may change its layout:
When a non-ISP volume is grown, its layout may be converted as a side effect if vxassist
determines that the new volume is too large for the original layout. The values of the
stripe-mirror-col-trigger-pt and stripe-mirror-col-split-trigger-pt attributes (by default, 1
gigabyte) control whether a new layout will be applied. A mirror-stripe volume that is
larger than the value of stripe-mirror-col-trigger-pt is converted to a stripe-mirror
volume. If each column of a stripe-mirror-col volume is larger than the value of
stripe-mirror-col-split-trigger-pt, the volume is converted to a stripe-mirror-sd volume
where the individual subdisks, rather than the columns, are mirrored. A mirror-concat
volume that is larger than the value of stripe-mirror-col-split-trigger-pt is converted to
a concat-mirror volume where the individual subdisks, rather than the plexes, are
mirrored.
# cat /etc/default/vxassist
stripe-mirror-col-trigger-pt=10g
stripe-mirror-col-split-trigger-pt=10g
3.4.2 Mirror administration
1. Check whether FastResync is enabled on a volume:
# vxprint -g <dg-name> -l <volume-name> | egrep '(Volume|flags)'
Volume: <volume-name>
flags: open writeback fastresync
2. Use the vxprint command on the DCO to determine its version number:
# vxprint [-g diskgroup] -F%version $DCONAME
3. DRL logging
To determine if DRL is enabled on the volume, use the following command
with the volume's DCO:
# vxprint [-g diskgroup] -F%drl $DCONAME
Use the vxprint command on the DCO volume to find out if DRL logging is
active:
# vxprint [-g diskgroup] -F%drllogging $DCOVOL
4. Add a mirror:
# vxassist [-b] [-g diskgroup] mirror volume [storage_attribute]
Another way to mirror an existing volume is by first creating a plex, and then
attaching it to a volume, using the following commands:
# vxmake [-g diskgroup] plex plex sd=subdisk ...
# vxplex [-g diskgroup] att volume plex
5. Remove a mirror:
# vxplex -g mydg dis vol01-02
# vxedit -g mydg -r rm vol01-02
6. Start a disabled volume:
# vxvol -g diskgroup -f start vol_name
7. Recover a volume whose mirrors are disabled:
# vxmend -g <diskgroup name> -o force off testvol-01
# vxmend -g <diskgroup name> fix clean testvol-01
# vxvol -g <diskgroup name> start <volume>
# vxmend -g <diskgroup name> on testvol-02
# vxplex -g <diskgroup name> att <volume name> testvol-02
8. Detach a plex:
# vxplex [-g diskgroup] det plex
9. Attach a plex (any associated DCO plex is attached along with it):
# vxplex [-g diskgroup] att volume plex
3.4.3 Tuning
1. Choose the DCO region size with regard to voliomem_maxpool_sz. To change it,
unprepare and re-prepare the volume:
# vxsnap -g mydg -f unprepare vol1
# vxsnap -g mydg prepare vol1 regionsize=1M
Use the vxprint command on the DCO to discover its region size (in blocks):
# vxprint [-g diskgroup] -F%regionsz $DCONAME
Valid region sizes are powers of two from 16K upwards; the default is 64K.
2. The synchronization speed of vxsnap/vxassist copy operations is controlled by the
iosize (e.g. 8M) and slow attributes; for a running task, adjust it with vxtask set slow:
# vxtask -l list
# vxtask set slow=x tag
3. volpagemod_max_memsz should be raised when using instant snapshots of volumes
of 1TB and larger:
a. Change "volpagemod_max_memsz" online temporarily with the following command
(notice the value is followed by a "k"):
# vxtune volpagemod_max_memsz 65536k
b. To make the change permanent across reboots, add the following entry to the
/etc/vx/vxvm_tunables file by running the following command (notice the value is NOT
followed by a "k"):
# vxvoltune volpagemod_max_memsz 65536
On AIX the tunable can also be set through smitty; on Solaris, in /kernel/drv/vxio.conf.
4. Set the read policy of a mirrored volume:
# vxvol [-g diskgroup] rdpol round volume
Available policies: round (round robin), prefer (preferred plex), select (chosen by
layout), siteread (read from the local site).
# vxvol [-g diskgroup] rdpol prefer volume preferred_plex
5. Combining mirroring with striping.
3.4.4 Recovering from serial split brain (SSB)
Method 1:
Run vxsplitlines to see the conflicting configuration copies, then import with:
# vxdg -o selectcp=<disk_id> import xxdg
Method 2:
Pick the configuration copy whose volume metadata is correct and import with it:
# vxdg -o selectcp=<disk_id> import xxdg
The disk_id is the "disk id" shown by vxdisk list; configuration copy details can be
checked with vxdg list xxdg, and with vxprivutil scan/list/dumpconfig against the disk.
Method 3:
Turn off SSB detection on the disk group:
# vxdg -g xxdg set ssb=off
Method 4:
Set the ssbid directly with vxprivutil:
# vxprivutil set /dev/rdsk/c1t12d0s2 ssbid=0.2
The current ssbid can be read with:
# /etc/vx/diag.d/vxprivutil dumpconfig /dev/vx/dmp/Disk_2s2
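A hypothetical method-2 walk-through (the device names are the ones used above; the disk ID is taken from this document's vxdg list output as an example):

```shell
# Inspect the private region of each half's disks and note the disk IDs.
vxdisk -s list c1t12d0                          # shows the "disk id" field
/etc/vx/diag.d/vxprivutil dumpconfig /dev/rdsk/c1t12d0s2

# Import using the configuration copy on the disk judged correct.
vxdg -o selectcp=1237977903.46.rp84db1 import xxdg
```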
To set the siteread read policy on a volume:
# vxvol [-g diskgroup] rdpol siteread volume
An allsite volume keeps a plex at every site; site-consistent volumes require a DCO.
3.5.4 Site operations
1. Detach a site:
# vxdg -g diskgroup [-f] detachsite sitename
2. Import the site's disk group.