Terms
Logical Interchange Format (LIF): HP storage format used for interchange of files between HP systems.
Related terms: Logical Volumes, Mirroring, Physical Extent (PE), Physical Partitions, Physical Volume, Striping, Volume, Volume Group.
Commands
lvcreate: Command used to create logical volumes.
lvdisplay -v /dev/<VG>/<LV>: Command used to display information about logical volumes.
pvchange: Changes the attributes of existing physical volumes.
pvcreate /dev/rdsk/cxtxdx: Creates a physical volume by creating the LVM data structures (physical extents) on the physical disk.
pvcreate -f /dev/rdsk/cxtxdx: The -f option will force the creation of the PV, potentially destroying old PV information.
pvdisplay /dev/dsk/cxtxdx: Command used to display information about physical volumes.
pvdisplay -v /dev/dsk/cxtxdx: Verbose display of physical volume information.
vgchange: Command used to change information about the volume group, such as to activate an inactive VG.
vgcreate: Command used to create the volume groups.
vgdisplay [-v] [VG_NAME]: Command used to display information about volume groups; -v is the verbose option, which also lists the LVs and PVs.
vgexport: Removes a volume group definition from the system (and from /etc/lvmtab), optionally writing a map file so the VG can be imported on another system.
vgimport: Adds a previously exported volume group to a system, using the map file written by vgexport.
Files
/dev/dsk/cxtxdx: Block device for a disk.
/dev/rdsk/cxtxdx: Character (raw) device for a disk.
/dev/<volumegroup>/group: Character device file (created with the mknod command) that allows the LVM kernel and the LVM commands to communicate.
/etc/lvmrc: Script that starts each volume group based upon the contents of /etc/lvmtab.
/etc/lvmtab: Holds the device file associated with each disk in a volume group. It is a binary file; read it with the strings command.
/etc/vpath.cfg: If IBM SDD is being used, this file contains the mapping of each vpath to the corresponding disk-target-lun. Read it with the strings command.
6. Test
   1. Do a "bdf <mount point>". This command will show whether the file system is mounted, and its size.
   2. Do an "ls -ld <mount point>" to verify that the proper permissions are in place.
   3. Have the user test the filesystem and report back with any problems.

Extending a VxFS Filesystem (Online JFS)
1. Verify that there is enough space in the volume group.
   Check the volume group for free space:
      vgdisplay <VG Name>
   Check the logical volume to determine if mirroring or striping is in effect:
      lvdisplay /dev/vg_name/lv_name
   If there is not enough space, the volume group must be extended (see procedure below).
2. lvextend the logical volume. Two options:
   1. lvextend -L <new size in MB> /dev/VG_NAME/LV_NAME
   2. lvextend -l <new size in # of LEs> /dev/VG_NAME/LV_NAME
3. fsadm -F vxfs -b <new size of filesystem in KB> <MOUNT POINT>
   NOTE: extendfs will NOT work with Online JFS; you must use fsadm.
   NOTE: new size of the filesystem = <# of LEs> * <LE size in MB> * 1024
   NOTE: If lvextend -L was used, simply multiply the size in MB by 1024.
   NOTE: If this FAILS with errno 28, the filesystem is 100% full and must be reduced to less than 100%.
   NOTE: If this FAILS with "write failure at block XXXXXXXX : No such device or address", the new size may be too big; use a smaller number.

Extending a VxFS Filesystem (No Online JFS)
1. Verify that there is enough space in the volume group:
      vgdisplay <VG Name>
   If there is not enough space, the volume group must be extended (see procedure below).
2. lvextend the logical volume. Two options:
   1. lvextend -L <new size in MB> /dev/VG_NAME/LV_NAME
   2. lvextend -l <new size in # of LEs> /dev/VG_NAME/LV_NAME
3. Unmount the filesystem.
4. extendfs -F vxfs /dev/<VG_Name>/r<LV_Name>
   Note: extendfs uses the raw logical volume name (e.g. rlvol1).
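The size arithmetic in the fsadm step of the Online JFS procedure above can be sketched in shell. The LE count and sizes below are hypothetical example values, not taken from any real volume group:

```shell
# New filesystem size in KB for fsadm = <# of LEs> * <LE size in MB> * 1024.
# Hypothetical example: the LV was extended to 500 LEs of 16 MB each.
LE_COUNT=500
LE_SIZE_MB=16
NEW_SIZE_KB=$((LE_COUNT * LE_SIZE_MB * 1024))
echo "fsadm size argument: ${NEW_SIZE_KB}K"

# If lvextend -L <MB> was used instead, multiply the MB figure by 1024:
NEW_SIZE_MB=8000
echo "fsadm size argument: $((NEW_SIZE_MB * 1024))K"

# The actual resize command (HP-UX with Online JFS only, shown as a comment):
#   fsadm -F vxfs -b ${NEW_SIZE_KB} <mount point>
```

Both calculations give the same number here because 500 LEs of 16 MB each is 8000 MB.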
How to determine if you have Online JFS:
   swlist -l fileset | grep -i advanced
How to determine the filesystem type:
   grep <mount point> /etc/fstab
   The result should show the filesystem type (vxfs or hfs) in the returned string.
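The fstab lookup above can be scripted; the entry below is a fabricated sample, and the third field is assumed to follow the standard /etc/fstab column layout:

```shell
# Extract the filesystem type (3rd field) from an fstab entry.
# Fabricated sample entry; on a real system: grep '<mount point>' /etc/fstab
FSTAB_LINE='/dev/vg01/lvol1 /data vxfs delaylog 0 2'
FS_TYPE=$(echo "$FSTAB_LINE" | awk '{ print $3 }')
echo "filesystem type: ${FS_TYPE}"
```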
Creating a Logical Volume
1. Find a volume group with enough spare capacity (or determine whether one must be created):
   1. vgdisplay <-- lists the existing volume groups
      1. Spare capacity is indicated by the number of free PEs.
      2. Multiply "Free PE" by "PE Size (MBytes)" to get the spare capacity of the volume group in MB.
      Note: If mirroring or striping is required, there must be enough space available on multiple disks.
   2. Is there a volume group with enough spare capacity?
      1. If yes, continue to "Create the logical volume" below.
      2. If no, the volume group must be extended or a new volume group must be created (see procedures below).
2. Create the logical volume. Two options:
      lvcreate -l <size in LEs> -n <LV Name> -r N <VG directory path>
   OR
      lvcreate -L <size in MBs> -n <LV Name> -r N <VG directory path>
   1. -l is the size in logical extents (an LE is the same size as a PE)
   2. -L is the size in MB
   3. -n is the name of the logical volume (the default name is lvolx)
   4. -r N disables bad block relocation
   Examples:
   1. lvcreate -L 100 /dev/vg01
      The logical volume will be 100 MB in size, with the default name (lvolx).
   2. lvcreate -L 100 -n oracle /dev/vg01
      This creates a 100 MB logical volume called oracle.
3. Verify the LV was created properly: "vgdisplay -v <volume group>"

Removing a Logical Volume
1. Unmount the filesystem: umount /<path to mount point>
2. Remove the logical volume: lvremove /dev/<VG_NAME>/<LV_NAME>
3. Clean up:
   Delete the mount point.
   Remove the mount from /etc/fstab.

Troubleshooting (just an idea, not sure if this really works??)
ONCTRA01# lvchange -a n /dev/vg_wte_test/lv_wte_test
Logical volume "/dev/vg_wte_test/lv_wte_test" has been successfully changed.
Volume Group configuration for /dev/vg_wte_test has been saved in /etc/lvmconf/vg_wte_test.conf

Unmirroring a Logical Volume
lvreduce -m 0 /dev/<VG_NAME>/<LV_NAME>
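The spare-capacity arithmetic in the logical volume creation steps above (multiply "Free PE" by "PE Size") can be sketched against sample vgdisplay output. The vgdisplay text below is fabricated for illustration, and the field positions are assumed to match the real report layout; on a live system you would pipe the output of `vgdisplay <VG>` instead:

```shell
# Fabricated vgdisplay output for illustration only.
sample_vgdisplay() {
cat <<'EOF'
--- Volume groups ---
VG Name                     /dev/vg01
PE Size (Mbytes)            16
Total PE                    1023
Alloc PE                    767
Free PE                     256
EOF
}

# Pull the free-PE count and PE size, then compute spare capacity in MB.
FREE_PE=$(sample_vgdisplay | awk '/^Free PE/ { print $3 }')
PE_SIZE_MB=$(sample_vgdisplay | awk '/^PE Size/ { print $4 }')
SPARE_MB=$((FREE_PE * PE_SIZE_MB))
echo "spare capacity: ${SPARE_MB} MB"
```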
1. Determine the minor number to use:
   1. ls -l /dev/*/group | sort +5
   2. Choose the next unused number.
2. mknod /dev/vol_group_directory/group c Major_number Minor_number
   1. Example: mknod /dev/vg01/group c 64 0x010000
   2. The group file is used to allow communication between the LVM commands and the LVM kernel.
5. Create the volume group:
   1. vgcreate -l 255 -p 64 -s 16 /dev/<vg id> /dev/dsk/<disk id> <-- creates the volume group with its initial disk devices
      1. -l sets the maximum number of logical volumes for the volume group
      2. -p sets the maximum number of physical volumes for the volume group
      3. -s sets the physical extent size (in MB)
      4. /dev/<vg id> is the volume group directory
      5. /dev/dsk/<disk id> are the physical volumes to be added to the volume group
      6. Example: vgcreate /dev/vg01 /dev/dsk/c1t0d0 /dev/dsk/c1t1d0
   2. vgextend /dev/vgxx /dev/dsk/diskid <-- adds additional disk devices to the volume group

Extending a Volume Group
A volume group is extended by adding additional PVs to it with vgextend:
   vgextend /dev/vgxx /dev/dsk/diskid
Note: If there are no available PVs, then disk space (LUNs) must be added to the system and PVs must be created (see procedures below).

Activating a Volume Group
1. vgchange -a y /dev/<vg name>
2. mount -a ==> This may indicate that a filesystem check (fsck) needs to be run.
3. fsck -m /dev/<vg name>/<lv name> ==> A very quick sanity check; it may indicate that a complete fsck needs to be run.
4. fsck /dev/<vg name>/<lv name> ==> The full file system check.

Deactivating a Volume Group
1. umount all of the logical volumes in the volume group.
   Determine all of the mounted logical volumes with "bdf | grep vg_name".
2. vgchange -a n /dev/<vg name>

Adding mirroring to the root vg (vg00)
1. pvcreate -B /dev/rdsk/c2t2d0
   Creates the PV on the second disk so that LVM can manage it; -B makes it a bootable volume.
2. mkboot /dev/rdsk/c2t2d0
   Installs the boot files on the second disk.
3. mkboot -a "hpux -lq (;0)/stand/vmunix" /dev/rdsk/c2t2d0
   Creates an autoboot file on the disk, with information on how to boot.
4. vgextend /dev/vg00 /dev/dsk/c2t2d0
   Extends the volume group to include the second disk.
5. vgdisplay -v vg00
   Verifies that the second disk is now part of the volume group.
6. lvlnboot -v
   Verifies that the system considers this disk bootable.
7. lvextend -m 1 /dev/vg00/lvol1 /dev/dsk/c2t2d0
http://networktechnologist.com/tips-hpux-lvm.html[3/26/2012 6:38:16 PM]
This extends the logical volume to the mirrored disk, effectively creating the mirrored copy. This command needs to be run for each logical volume in the volume group.

Removing a Volume Group
To remove a volume group using vgreduce and vgremove:
1. Remove all logical volumes (see procedure above).
2. Remove all disks except one using: vgreduce <VG_NAME> </dev/dsk/DISK>
   This must be done for each disk in the volume group, except for the last disk.
3. Remove the final disk in the volume group with: vgremove <VG_NAME>

To remove a volume group using vgexport:
1. Deactivate the volume group.
2. vgexport -m <mapfile> -v -f <devicefile> <VG_NAME>
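The note that the lvextend must be repeated for every logical volume in vg00 can be sketched as a loop. This is a dry run that only prints the commands it would issue, since lvextend exists only on HP-UX; the lvol list and disk name are example values:

```shell
# Dry run: print the mirror command for each logical volume in vg00.
# Remove the leading echo to execute for real on an HP-UX system.
MIRROR_DISK=/dev/dsk/c2t2d0                               # example second boot disk
LVOLS="/dev/vg00/lvol1 /dev/vg00/lvol2 /dev/vg00/lvol3"   # e.g. from: ls /dev/vg00/lvol*

for lv in $LVOLS; do
    echo lvextend -m 1 "$lv" "$MIRROR_DISK"
done
```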
PV Procedures:
How to recognize a newly added LUN:
1. Check for the new hardware:
   ioscan -fnC disk | more <-- the new devices will have a hardware path, but no device file associated with them
2. Create the device files for the new hardware paths:
   insf
3. If using vpaths, create the vpath association:
   /opt/IBMdpo/bin/cfgvpath
   Note: /opt/IBMdpo may not be the path of the SDD software; "whereis cfgvpath" may need to be run to find it if it is not in the PATH.
4. Verify the new devices/vpaths are there:
   ioscan -fnC disk
   strings /etc/vpath.cfg
   /opt/IBMdpo/bin/showvpath
5. Create the PV:
   For each disk device (not vpath) issue a "pvcreate /dev/rdsk/cxtxdx".
   Note: for vpaths this information can be found in /etc/vpath.cfg.

How to create Physical Volumes:
Three ways of using pvcreate (all use the "r" (raw) version of the device):
1. pvcreate /dev/rdsk/cxtxdx
2. pvcreate -f /dev/rdsk/cxtxdx
   Note: the -f option will overwrite existing PV data on the disk device.
3. pvcreate -B /dev/rdsk/cxtxdx
   Makes the PV boot capable.

How to determine available disks to be used in a Volume Group:
1. "ioscan -funC disk" will list all of the disk devices.
   1. Some of these devices will be allocated, some will not.
2. "vgdisplay -v" will list all of the PVs and their devices for all of the existing volume groups.
   1. This is a list of the devices that are in use.
3. Any devices that appear in the ioscan output but NOT in the vgdisplay output are available for use.

Possible strategy to automate this process using sed, awk and grep:
1. Create a file that has all of the disks that can be used (in this example, HITACHI disks):
   cat wte_disk_ioscan | sed 1,2d | grep -v -e TOSHIBA -e c2t0d0 | xargs -n 10 | grep HITACHI | grep -vi subsystem > wte_hitachi_disk
   1. cat outputs the file that contains the disk ioscan (ioscan -fnC disk).
   2. sed deletes the first two header lines of the file.
   3. The first grep prints any lines that DO NOT include TOSHIBA or c2t0d0.
   4. xargs groups the output into groups of 10 words.
   5. The next grep finds all of the lines with HITACHI in them.
   6. The final grep drops subsystem lines, and the result is saved to a file.
2. Refine the ioscan of HITACHI disks to just the disk devices, sorted (a list of all HITACHI disks on the system):
   awk '{ print $9 }' wte_hitachi_disk | sort -u > wte_hitachi_sorted_u
   1. awk prints just the 9th field of each line.
   2. sort -u sorts the file and suppresses any duplicates.
   3. The result is saved to a sorted file.
3. Print a list of all the disks that are currently being used (a list of PVs):
   vgdisplay -v | grep "PV Name" > wte_pvdisk_used
   1. vgdisplay -v prints a verbose listing of all volume groups.
   2. grep prints only the lines that contain "PV Name".
   3. The list of PVs is saved to a file.
4. Refine the list of disks that are being used:
   awk '{ print $3 }' wte_pvdisk_used | sort -u > wte_pvdisk_sorted_u
   1. awk prints only the 3rd field (the disk device).
   2. sort -u sorts the list, suppressing any duplicate entries.
   3. The results are saved to a file.
5. Compare the two files: the list of all HITACHI disks on the system against the list of all disks being used:
   diff wte_hitachi_sorted_u wte_pvdisk_sorted_u
   diff compares the two files and prints any differences. A difference is a disk that the system sees, but that is not being used by LVM.

How to remove PVs:
1. Identify the hardware path of the disk to remove:
   ioscan -fnC disk
2. Remove the special device file:
   rmsf -H <HW path from ioscan>

How to perform a non-destructive test of a disk:
   dd if=/dev/rdsk/cxxxxxx of=/dev/null bs=1024k
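The five-step disk comparison described under "How to determine available disks" earlier can be condensed into one runnable sketch. The disk lists below are fabricated samples (real input would come from the ioscan and vgdisplay pipelines described there), and comm -23 is used here in place of reading diff output by eye:

```shell
# Fabricated samples; on a real system these files would be built with the
# ioscan/sed/awk and vgdisplay/grep/awk pipelines described earlier.
TMP=${TMPDIR:-/tmp}/disk_compare.$$
mkdir -p "$TMP"

# All candidate disks the system sees (sorted, as comm requires).
printf '%s\n' /dev/dsk/c4t0d0 /dev/dsk/c4t0d1 /dev/dsk/c4t0d2 | sort -u > "$TMP/all_sorted"
# Disks already used as PVs in some volume group.
printf '%s\n' /dev/dsk/c4t0d0 /dev/dsk/c4t0d2 | sort -u > "$TMP/used_sorted"

# comm -23 prints lines unique to the first file: disks the system sees
# that no volume group is using.
AVAILABLE=$(comm -23 "$TMP/all_sorted" "$TMP/used_sorted")
echo "available disks: ${AVAILABLE}"

rm -rf "$TMP"
```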