P/N 300-000-616
REV A25
EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com
Contents
Preface............................................................................................................................ 13
PART 1
Symmetrix Connectivity
Chapter 1
Tru64 UNIX/Symmetrix Environment
Chapter 2
Virtual Provisioning
Virtual Provisioning on Symmetrix ............................................... 36
Terminology ................................................................................ 37
Thin device .................................................................................. 38
Implementation considerations ...................................................... 41
Over-subscribed thin pools....................................................... 42
Thin-hostile environments ........................................................ 42
Pre-provisioning with thin devices in a thin-hostile environment ...... 43
Host boot/root/swap/dump devices positioned on Symmetrix VP (tdev) devices ...... 44
Cluster configurations ............................................................... 45
Symmetrix Virtual Provisioning in a Tru64 UNIX environment ...................... 46
Tru64 UNIX Virtual Provisioning support............................. 47
Precaution considerations ......................................................... 47
Unbound thin devices................................................................ 48
Chapter 3
Chapter 4
Tru64 UNIX and Symmetrix over SCSI
Chapter 5
TruCluster Servers
TruCluster V1.6 overview ................................................................ 76
Available Server ..........................................................................76
Production Server .......................................................................76
TruCluster V1.6 services ............................................................76
asemgr...........................................................................................77
TruCluster V1.6 daemons and error logs ................................78
TruCluster V1.6 with Symmetrix .................................................... 79
Symmetrix connectivity .............................................................79
Symmetrix configuration ...........................................................81
Additional documentation ........................................................82
TruCluster V5.x overview ................................................................ 83
Connection manager...................................................................83
Device request dispatcher..........................................................84
Cluster File System .....................................................................85
Cluster Application Availability...............................................86
TruCluster V5.x with Symmetrix .................................................... 87
Symmetrix connectivity .............................................................87
TruCluster V5.x system disk requirements.............................89
Symmetrix configuration ...........................................................89
Direct-access device and DRD barrier configuration ............90
Persistent reservations................................................................94
Additional documentation ........................................................95
PART 2
Chapter 6
PART 3
Appendix A
Preface
Related documentation
Conventions used in this guide
IMPORTANT
An important notice contains information essential to operation of
the software.
CAUTION
A caution contains information essential to avoid damage to the
system or equipment. The caution may apply to hardware or
software.
Typographical conventions
EMC uses the following type style conventions in this guide:
Courier: Used for system output (such as an error message or script), and for URLs, complete paths, filenames, prompts, and syntax when shown outside of running text.
Courier bold: Used for specific user input (such as commands).
PART 1
Symmetrix Connectivity
Part 1 includes:
Chapter 1, "Tru64 UNIX/Symmetrix Environment"
Overview ............................................................................................. 20
Enginuity minimum requirements ................................................... 21
Tru64 UNIX commands and utilities ............................................... 24
Tru64 UNIX devices ........................................................................... 25
Using file systems .............................................................................. 27
Logical storage manager ................................................................... 31
System and error messages ............................................................... 35
Overview
When using an EMC Symmetrix system in the Tru64 UNIX environment, note the Enginuity minimum requirements and configuration guidelines in this chapter.
Enginuity minimum requirements

Table 1  Enginuity minimum requirements

Symmetrix model      Tru64 UNIX versions        Minimum Enginuity
                     V5.1B-0 through V5.1B-6    5876.82.57
Symmetrix VMAX 20K   V5.1B-0 through V5.1B-6    5876.82.57
Symmetrix VMAX       V5.1B-0 through V5.1B-6    5874.121.102, 5875.135.91, 5876.82.57
Symmetrix DMX-4      V5.1B-0 through V5.1B-6    5772.83.75, 5773.79.58
Symmetrix DMX-3      V5.1B-0 through V5.1B-6    5771.68.75, 5772.55.51, 5773.79.58
Symmetrix DMX-2      V5.1B-0 through V5.1B-5    5671.58.64
Symmetrix DMX        V5.1B-0 through V5.1B-5    5670.23.25, 5671.31.35
Symmetrix 8000       V5.1B-0 through V5.1B-5    5568.34.14
Tru64 UNIX commands and utilities

Command or utility   Definition
disklabel            Displays and partitions a disk device. Some useful parameters for this command are:
                     -r    read label from disk
                     -rw   write label to disk
                     -re   use your default editor to change the default partition sizes
scu                  SCSI CAM utility program for maintaining and diagnosing SCSI peripheral devices.
newfs                Creates a new UFS file system.
mkfdmn, mkfset       Creates a new Advanced File System (AdvFS). AdvFS provides rapid crash
                     recovery, high performance, and the ability to manage the file system while it
                     is on line.
scsimgr              Creates device special files for newly attached disk and tape devices. This
                     V4.0x operating system utility is automatically invoked at system boot time.
hwmgr                Manages hardware components on Tru64 UNIX V5.x systems.
dsfmgr               Manages device special files on Tru64 UNIX V5.x systems.

Tru64 UNIX devices

V4.0x
In the 4.0x versions of the Tru64 UNIX operating system, device names are based on the bus-target-LUN location of the device and have the following format:
rz[lun][unit][partition]
where:
[lun] is a letter ranging from b to h, corresponding to LUNs 1 through 7. For LUN 0, the LUN letter is omitted.
[unit] is bus number * 8 + target ID number.
[partition] is the letter of the disk partition, from a to h.
Example:
/dev/rrzb16c is the raw device name for partition c of the disk at bus 2 target 0 LUN 1.
/dev/rz28h is the block device name for partition h of the disk at bus 3 target 4 LUN 0.
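The unit arithmetic above can be sketched as a small shell function. The name rz_name is illustrative only (it is not a Tru64 utility); the function simply applies the naming rules from this section.

```shell
# rz_name: illustrative helper (not a Tru64 command) that applies the
# V4.0x naming rules above: unit = bus * 8 + target; LUNs 1-7 map to
# the letters b-h; LUN 0 has no letter.
rz_name() {
    bus=$1 target=$2 lun=$3 partition=$4
    unit=$((bus * 8 + target))
    lun_letter=""
    if [ "$lun" -gt 0 ]; then
        # 98 is the ASCII code for 'b', so LUN 1 -> b, LUN 2 -> c, ...
        lun_letter=$(printf "\\$(printf '%03o' $((97 + lun)))")
    fi
    echo "rz${lun_letter}${unit}${partition}"
}

rz_name 2 0 1 c    # rzb16c, matching /dev/rrzb16c above
rz_name 3 4 0 h    # rz28h, matching /dev/rz28h above
```

Prefixing the result with /dev/ gives the block device name, and /dev/r gives the raw device name.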
V5.x
In the 5.x versions of the Tru64 UNIX operating system, device names
(device special files) are only created for device LUNs that report
unique device identifiers, and not for every bus-target-LUN instance
visible to the system. If the same device identifier (WWID) is reported
from multiple bus-target-LUN instances, Tru64 will only create one
device name for what it considers to be one unique device. Tru64 V5.x
can support multipath configurations and provide path failover and
load balancing to devices. The bus-target-LUN paths that reported
the same WWID are grouped together as the available paths of a
device. The device names in Tru64 UNIX V5.x have the following
format:
dsk[unit][partition]
where:
[unit] is a number assigned sequentially to new devices (with
unique WWIDs) when they are discovered and configured by the
operating system.
2. Create a directory:
mkdir /symm
Use the df command to show all mounted file systems and the
available free space.
AdvFS
Understanding the following concepts prepares you for planning, creating, and maintaining an Advanced File System (AdvFS).
Volumes: A volume is any mechanism that behaves like a UNIX block device, such as a disk, a disk partition, or an LSM volume.
File domain: A file domain is a named set of one or more volumes that provides the physical storage for its filesets.
Filesets: A fileset is both the logical file structure that the user recognizes and a unit that you can mount. Whereas you typically mount a whole UNIX file system, with the AdvFS you mount the individual filesets of a file domain.
An AdvFS consists of a file domain with at least one fileset that you create using the mkfset command.
3. Create a directory:
mkdir /symm1
Use the df command to show all mounted file systems and the available free space.
LUN expansion
The AdvFS and UFS file systems on Tru64 UNIX can support
expanded LUNs. AdvFS file systems can be extended on hosts with
Tru64 UNIX V5.1B or later installed. UFS file systems can be extended
on hosts with Tru64 UNIX V5.1 or later installed.
The disk label of an expanded LUN must be updated before the new
capacity can be used by file systems. Disk partition sizes can be
increased to the new capacity, but the disk offsets of in-use disk
partitions must not be changed. The disk label updates should only
be done by experienced system administrators. Partitioning and
sizing errors in disk label updates can cause data loss. A data backup
is recommended before expanding a LUN.
5. Rewrite or edit the existing disk label to reflect the new LUN
capacity. Increase the size of the disk partition containing the file
system to be extended. Do not change the offsets of any disk
partitions that are used or open:
disklabel -w <dsk_name>
disklabel -re <dsk_name>
These three disks are added to your setup as disk01, disk02, and
disk03.
2. To add more disks:
For an entire disk, type a command similar to the following:
voldiskadd rzb16
sd  s1-sd  rz16,0,500m
sd  s2-sd  rz32,0,500m
sd  s3-sd  rz40,0,500m
sd  s4-sd  rz48,0,500m
To check SCSI errors, use either the dia command or the uerf
command. Before you can use the dia command, the DECevent
software subset must be installed. The subset can be found on the
Associated Products Volume 2 CD.
Virtual Provisioning
Terminology
This section provides common terminology and definitions for
Symmetrix and thin provisioning.
Symmetrix
Thin provisioning
Device Capacity
Device Extent
Internal Device
Storage Pool
Data device
Thin pool
Bind
Pre-provisioning
Thin device
Symmetrix Virtual Provisioning introduces a new type of
host-accessible device called a thin device that can be used in many of
the same ways that regular host-accessible Symmetrix devices have
traditionally been used. Unlike regular Symmetrix devices, thin
devices do not need to have physical storage completely allocated at
the time the devices are created and presented to a host. The physical
storage that is used to supply disk space for a thin device comes from
a shared thin storage pool that has been associated with the thin
device.
Implementation considerations
When implementing Virtual Provisioning, it is important that realistic
utilization objectives are set. Generally, organizations should target
no higher than 60 percent to 80 percent capacity utilization per pool.
A buffer should be provided for unexpected growth or a runaway
application that consumes more physical capacity than was originally
planned for. The storage pool should retain free space at least equal to
the capacity of the largest unallocated thin device.
Organizations also should balance growth against storage acquisition
and installation timeframes. It is recommended that the storage pool
be expanded before the last 20 percent of the storage pool is utilized
to allow for adequate striping across the existing data devices and the
newly added data devices in the storage pool.
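The utilization guideline above can be expressed as simple arithmetic. The figures below are hypothetical; on a live system the pool's enabled and allocated capacities would come from Symmetrix management tools, not from this sketch.

```shell
# Sketch of the utilization guideline above, using hypothetical thin
# pool figures (assumptions for illustration only).
pool_enabled_gb=1000      # enabled capacity of the thin pool
pool_allocated_gb=850     # capacity already allocated to bound thin devices
ceiling_pct=80            # upper end of the 60 to 80 percent target

util_pct=$((pool_allocated_gb * 100 / pool_enabled_gb))
if [ "$util_pct" -ge "$ceiling_pct" ]; then
    echo "WARNING: pool is ${util_pct}% utilized; expand before the last 20% is consumed"
else
    echo "pool is ${util_pct}% utilized; within the 60-80% target"
fi
```

In this example the pool is 85 percent utilized, so the check reports that the pool should be expanded.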
Thin devices can be deleted once they are unbound from the thin
storage pool. When thin devices are unbound, the space consumed
by those thin devices on the associated data devices is reclaimed.
Note: Users should first replicate the data elsewhere to ensure it remains
available for use.
Thin-hostile environments
There are a variety of factors that can contribute to making a given
application environment thin-hostile.
Again, the host administrator may want to fully allocate the thin
devices underlying these volumes before assigning them to the
thin-hostile application.
Cluster configurations
When high availability is required in a cluster configuration, the expectation is that no single point of failure exists within the cluster, and that the failure of any one component will not result in data unavailability, data loss, or the loss of any significant application within the cluster. Virtual Provisioning devices (thin devices) are supported in cluster configurations; however, over-subscription of thin devices can constitute a single point of failure if an out-of-space condition is encountered. To avoid this, take appropriate steps to keep over-subscribed thin devices out of high-availability cluster configurations.
Performance considerations
Data written to thin devices is striped across the data devices of the thin pool (or pools) to which the thin devices are bound. This striping can alleviate back-end contention or complement other methods of alleviating contention, such as host-based striping.
Precaution considerations
EMC Virtual Provisioning and the industry's thin provisioning are new technologies, and relevant industry specifications have not yet been drafted. Virtual Provisioning, like thin provisioning in general, has the potential to introduce events into the environment that would not otherwise occur. In the absence of relevant industry standards, host-based handling of these events varies, and undesirable consequences are possible when they occur. With the proper precautions, however, these exposures can be minimized or eliminated.
Thin pool out-of-space event
Insufficient monitoring of the thin pool can result in all of the thin pool's enabled capacity being allocated to thin devices bound to the pool. If over-subscription is implemented, a thin pool out-of-space event can result in a non-recoverable error being returned for a write request sent to a thin device area that has no capacity allocated from the thin pool. Simple precautions can prevent this from occurring, including the following:
Plan for thin pool enabled capacity utilization not to exceed 60% to 80%.
Hardware connectivity
Refer to the EMC Support Matrix or contact your EMC representative
for the latest information on qualified hosts, host bus adapters, and
connectivity equipment.
Logical devices
LUNs are supported as follows:

OS version       LUN support (a)
Tru64 V4.0F/G    Up to 8 LUNs (000-007) per Fibre Channel target
Tru64 V5.x       Up to 255 LUNs per Fibre Channel target

a. Each Symmetrix Fibre Channel director port is a single Fibre Channel target.
Symmetrix configuration
Symmetrix configuration is done by an EMC Customer Engineer (CE)
through the Symmetrix service processor.
Note: Refer to the following paragraphs and to the EMC Support Matrix for
required bit settings on Symmetrix Fibre Channel directors.
Configuring the Symmetrix for Tru64 UNIX V5.x
Set the following director bits for each port attached to Tru64
UNIX and TruCluster V5.x Fibre Channel environments:
OVMS
P2P (Point-to-point)
UWN (Unique worldwide name)
All Fibre Channel director ports configured for Tru64 V5.x hosts must have a LUN 000 device mapped. Not mapping a LUN 000 device can cause conflicting duplicate WWIDs and possible bootup problems on the Tru64 host.

Table 3

Symmetrix model      Microcode
                     5876
                     5876
Symmetrix VMAX       5876, 5875, 5874
                     5773
                     5671
                     5670, 5671
Symmetrix 8000       55xx
Change the device label of the VCMDB device from SYM to VCM,
especially if the VCMDB device is logical device 000. For
example, change the label of the VCMDB device 000 from
SYM000 to VCM000.
Port sharing
Tru64 UNIX V5.x hosts require special director bit settings different
from Tru64 UNIX V4.0x hosts. If a Symmetrix Fibre Channel director
port will be shared by Tru64 UNIX hosts, the configuration options
are as follows:
Set the OVMS director bit on the port as required for Tru64 UNIX
V5.x hosts. If the port is on a Symmetrix 8000 system, LUN 000
will not be usable. A Tru64 UNIX V4.0x host can normally
support up to 8 devices per port, but only 7 devices (LUN
addresses 001 through 007) will be usable by the Tru64 UNIX V4.0x host
when the OVMS director bit is set. On Symmetrix DMX systems,
the maximum 8 devices (LUN addresses 000 through 007) will be usable
by Tru64 UNIX V4.0x hosts even when the OVMS director bit is set.
Check that the adapters are set up properly for fabric topology by
using the WWIDMGR utility (wwidmgr -show adapter). Format the
NVRAM (wwidmgr -set adapter -item 9999 -topo fabric) for fabric
support if necessary.
If the variable was not set, look for the appropriate device in the
show dev output and set it manually. (The Fibre Channel device
has the format dg[N][UDID value].)
Example:
set bootdef_dev dga32.1001.0.3.1
V4.0F/V4.0G notes
The Tru64 UNIX emx Fibre Channel driver provides persistent
binding functionality. Each World Wide Name (WWN) found is
mapped to a target ID. This mapping persists across reboots and
configuration changes; however, only the initial seven WWN/target
ID mappings are available to the CAM SCSI subsystem.
Refer to the emx and emx_data.c operating system man pages for
information on modifying the target ID mappings in the /etc/emx.db
database.
If a V4.0F/V4.0G host has had multiple Fibre Channel configuration
changes or was connected to an unzoned switch, all seven valid
target IDs may have already been assigned. When a valid target ID
must be freed, or a specific WWN must be mapped to a specific target
ID (such as for TruCluster 1.x), the following example shows the
procedure for modifying the database:
1. View and copy the existing configuration from the file
/etc/emx.info:
emx?   tgtid   FC Port Name                      FC Node Name
{ 0,   0,      0x0650, 0x8204, 0x60bc, 0x4fed,  0x0650, 0x8204, 0x60bc, 0x4fed },
{ 0,   1,      0x0650, 0x8104, 0xdaa7, 0xd7f1,  0x0650, 0x8104, 0xdaa7, 0xd7f1 },
{ 0,   2,      0x0650, 0x8204, 0x61bc, 0x4f06,  0x0650, 0x8204, 0x61bc, 0x4f06 },
{ 0,   3,      0x0650, 0x8204, 0x31c0, 0x4e7c,  0x0650, 0x8204, 0x31c0, 0x4e7c },
{ 0,   4,      0x0650, 0x8204, 0x31c0, 0x5e7c,  0x0650, 0x8204, 0x31c0, 0x5e7c },
{ 0,   5,      0x0010, 0x0000, 0x21c9, 0xd378,  0x0010, 0x0000, 0x21c9, 0xd378 },
{ 0,   6,      0x0010, 0x0000, 0x20c9, 0xb6cf,  0x0010, 0x0000, 0x20c9, 0xb6cf },
{ 0,   7,      0x0010, 0x0000, 0x20c9, 0x82d0,  0x0010, 0x0000, 0x20c9, 0x82d0 },
{ 0,   8,      0x0010, 0x0000, 0x21c9, 0x827f,  0x0010, 0x0000, 0x21c9, 0x827f },
EMX_FCPID_RECORD emx_fcpid_records[] = {
/* Insert records below here */
/* emx?  tgtid  FC Port Name                      FC Node Name */
{ 0,  -1,  0x0650, 0x8204, 0x60bc, 0x4fed,  0x0650, 0x8204, 0x60bc, 0x4fed },
{ 0,  -1,  0x0650, 0x8104, 0xdaa7, 0xd7f1,  0x0650, 0x8104, 0xdaa7, 0xd7f1 },
{ 0,  -1,  0x0650, 0x8204, 0x61bc, 0x4f06,  0x0650, 0x8204, 0x61bc, 0x4f06 },
{ 0,   0,  0x0650, 0x8204, 0x31c0, 0x4e7c,  0x0650, 0x8204, 0x31c0, 0x4e7c },
{ 0,   2,  0x0650, 0x8204, 0x31c0, 0x5e7c,  0x0650, 0x8204, 0x31c0, 0x5e7c },
{ 0,  -1,  0x0010, 0x0000, 0x21c9, 0xd378,  0x0010, 0x0000, 0x21c9, 0xd378 },
{ 0,  -1,  0x0010, 0x0000, 0x20c9, 0xb6cf,  0x0010, 0x0000, 0x20c9, 0xb6cf },
{ 0,  -1,  0x0010, 0x0000, 0x20c9, 0x82d0,  0x0010, 0x0000, 0x20c9, 0x82d0 },
{ 0,  -1,  0x0010, 0x0000, 0x21c9, 0x827f,  0x0010, 0x0000, 0x21c9, 0x827f },
V5.x notes
The V5.x OS uses a new device naming scheme that is based on the
unique WWID of a device, and not its physical location
(bus-target-LUN). A device that is moved, or removed and re-added,
keeps its original device name.
In V5.x, it is not necessary to configure Fibre Channel persistent
binding, since the device naming scheme is not dependent on device
location and 255 valid SCSI targets are available on each bus.
emxmgr
The emxmgr utility also displays the link status, topology, and
N_Port detail of each adapter:
emxmgr -t emx<instance#>
hwmgr
The command hwmgr -show fibre (in Tru64 UNIX V5.1B and later)
displays Fibre Channel adapter information that is similar to the
emxmgr commands.
The command hwmgr -view devices displays all the devices on the
host system.
To view detailed information about the configured devices, use the
command hwmgr -show scsi -full.
Example:

SCSI                           DEVICE  DRIVER  NUM   DEVICE  FIRST
HWID: DEVICEID HOSTNAME  TYPE  SUBTYPE OWNER   PATH  FILE    VALID PATH
-----------------------------------------------------------------------
673:  18       losaz205  disk  none    0       3     dsk281  [4/0/3]

      WWID:01000010:6006-0480-0000-0000-3220-5359-4d30-4632

      BUS  TARGET  LUN  PATH STATE
      ----------------------------
      4    0       3    valid
      6    0       3    valid
      6    1       3    valid
Parameter                            Bit   Description                                                        Default
Disk Array                                                                                                    Enabled
Volume Set                                                                                                    Enabled
Use Hard Addressing                                                                                           Enabled
Hard Addressing Non-participating    NP    If enabled and the H-bit is set, the director uses only the hard
                                           address. If it cannot get this address, it re-initializes and
                                           changes its state to non-participating. If the NP bit is not set,
                                           the director accepts the soft-assigned address.                    Disabled
Loop ID                              ---   Valid only if the H-bit is set; a 1-byte address (00 through 7D).  00
Third-party Logout across the Port   TP                                                                       Disabled
Fabric addressing
Each port on a device attached to a fabric is assigned a unique 64-bit
identifier called a Worldwide Port Name (WWPN). These names are
factory-set on the HBAs in the hosts, and are generated on the Fibre
Channel directors in the Symmetrix.
Note: For comparison to Ethernet terminology, an HBA is analogous to a NIC,
and a WWPN to a MAC address.
Note: The ANSI standard also defines a WWNN, but this name has been
inconsistently defined by the industry.
All three methods use the first two bytes (0 and 1) of the eight-byte
LUN addressing structure. The remaining six bytes are set to 0s.
For logical unit and volume set addressing, the Symmetrix port identifies itself as an array controller in response to a host's Inquiry command sent to LUN 00. This identification is done by returning the byte 0x0C in the Peripheral Device Type field of the returned Inquiry data. If the Symmetrix returns the byte 0x00 in the first byte of the returned Inquiry data, the Symmetrix is identified as a direct-access device.
Upon identifying the Symmetrix as an array controller device, the host should issue a SCSI-3 Report LUNS command (0xA0) in order to discover the LUNs.
The three addressing modes, contrasted in Table 5, differ in the addressing scheme (target ID, LUN, and virtual bus) and in the number of addressable devices.

Table 5

Addressing mode     Code  Response to Inquiry      LUN discovery method             Possible addresses  Maximum logical devices
Peripheral Device   00    0x00 (Direct Access)     Access LUNs directly             16,384              256
Logical Unit        10    0x0C (Array Controller)  Host issues Report LUNS command  2,048               128
Volume Set          01    0x0C (Array Controller)  Host issues Report LUNS command  16,384              512
4
Tru64 UNIX and Symmetrix over SCSI
Symmetrix configuration
In the Tru64 UNIX host environment, you can configure the
Symmetrix disk devices into logical volumes.
The EMC Customer Engineer should contact the EMC Configuration
Specialist for updated online information. This information is
necessary to configure the Symmetrix system to support the
customer's host environment.
Table 5 shows SCSI device support.

Table 5

OS version    Target IDs (a)  LUNs per target
Tru64 V4.0x   8 (0-7)         8 (0-7)
Tru64 V5.x    16 (0-F)        8 (0-7)

a. Most SCSI HBAs have a default target ID of 7. Unless the default target ID of the adapter is different or changed, do not use target ID 7.
Host configuration
This section describes the tasks required to install one or more
Compaq host bus adapters into the AlphaServer host and configure
the Tru64 UNIX environment.
Firmware
To display the host IDs of the SCSI HBAs on the system, use one of
the following commands at the AlphaServer console prompt:
>>> show pk*
or
>>> show isp*
ReadyTimeSeconds    = 45
InquiryLength       = 160
RequestSenseLength  = 160
PwrMgmt_Capable     = false
Device management
This section describes how to add and manage devices online.

Version  Command or utility       Description
4.0x     cd /dev                  To create device files for new devices, first calculate the new
         ./MAKEDEV <deviceName>   device name from its bus-target-LUN location. For example, if a
                                  device at bus 3, target 3, LUN 3 is newly added, create the rzd27
                                  device file. <deviceName> is the filename of the device.
         scsimgr
Table 6

Version  Command or utility  Description
5.x      dsfmgr

Example:

SCSI                            DEVICE  DRIVER  NUM   DEVICE   FIRST
HWID: DEVICEID HOSTNAME   TYPE  SUBTYPE OWNER   PATH  FILE     VALID PATH
--------------------------------------------------------------------------
1664: 138      losaz215   disk  none    2       2     dsk1366  [3/1/0]

      WWID:04100024:"EMC SYMMETRIX 600025068000"

      BUS  TARGET  LUN  PATH STATE
      ----------------------------
      3    1       0    valid
      4    1       0    valid
5
TruCluster Servers

TruCluster V1.6 overview ................................................................ 76
TruCluster V1.6 with Symmetrix .................................................... 79
TruCluster V5.x overview ................................................................ 83
TruCluster V5.x with Symmetrix .................................................... 87
TruCluster V1.6 overview
Available Server
An Available Server Environment (ASE) is a cluster of up to four
member systems with Available Server software and shared bus
connections to common external storage devices. An ASE cluster
provides high availability by relocating access to the shared devices
and restarting applications in the event of a failure on a member
system.
Production Server
A Production Server cluster is made up of two to eight member
systems connected by high-speed Memory Channel hardware
interconnects. In addition to Available Server functionality, a
Production Server cluster also provides Distributed Raw Disk (DRD)
services and a Distributed Lock Manager (DLM) to support
concurrent access to shared storage devices for applications such as
Oracle Parallel Server.
TruCluster V1.6 services
TruCluster V1.6 provides the following service types:
Disk services
DRD services
User-defined services
NFS services
Tape services
asemgr
The asemgr utility is used to configure and manage the member
systems and services of a TruCluster. If no command line options are
specified, asemgr defaults to an interactive menu interface. The
following are some commonly used asemgr command line options:
To start a service:
asemgr -s <service_name> <member_name>
To relocate a service:
asemgr -m <service_name> <member_name>
To stop a service:
asemgr -x <service_name>
Symmetrix connectivity
Although direct single-initiator SCSI bus connections are usually
recommended for highly available Symmetrix shared device
configurations, TruCluster V1.6 requires the use of multi-initiator
shared buses to connect multiple member systems to shared devices.
The TruCluster software monitors the status of member systems by
using the HBA connections between members for a host-to-host SCSI
ping.
Shared SCSI buses with Y-cables
The shared SCSI bus must be properly terminated, with only two
points of termination. The internal termination on Compaq SCSI
HBAs can be disabled by removing the small yellow resistor
packs on the adapter board. Symmetrix SCSI ports have small
toggle switches to enable/disable termination.
The total length of the shared SCSI bus from start to end
(termination to termination) must be less than 25 meters.
Each HBA on the shared SCSI bus must use a different SCSI ID.
The HBA SCSI IDs should not conflict with the target IDs of
Symmetrix devices on the shared bus. The SCSI IDs of HBAs can
be checked by running show pk* or show isp* at the AlphaServer
console prompt and changed with a set <pk*0_host_id> <value>
command.
Symmetrix configuration
Refer to the EMC Support Matrix for correct Symmetrix director port
settings. Tru64 UNIX V4.0x can support 8 target IDs (0-7) and 8 LUNs
(0-7). Symmetrix Fibre Channel LUN assignments above 07 are not
usable by Tru64 V4.0x systems. On Symmetrix SCSI director ports,
avoid using target IDs that would conflict with the SCSI IDs of
connected HBAs.
Additional documentation
TruCluster Version 1.6 release notes, hardware configuration,
software installation and administration manuals are available online
at:
http://h30097.www3.hp.com/docs/cluster_doc/cluster_16/TCR16_DOC.HTM
TruCluster V5.x overview
Connection manager
The connection manager is a kernel component that manages the
formation and operation of a cluster. It monitors cluster member
communication, calculates cluster quorum votes, controls
membership in the cluster, and maintains cluster integrity when
members join or leave.
The clu_quorum command is used to display or configure cluster
quorum disks and votes. The clu_get_info command displays
information about the cluster and its members.
clu_host1# clu_get_info -full
Cluster information for cluster truclu
Number of members configured in this cluster = 2
memberid for this member = 1
Cluster incarnation = 0x90880
Cluster expected votes = 3
Current votes = 3
Votes required for quorum = 2
Quorum disk = dsk258h
Quorum disk votes = 1
Information on each cluster member
Cluster memberid = 1
Hostname = clu_host1
Cluster interconnect IP name = clu_host1-ics0
Cluster interconnect IP address = 10.0.0.1
Member state = UP
Member base O/S version = Compaq Tru64 UNIX V5.1A (Rev. 1885)
Member cluster version = TruCluster Server V5.1A (Rev. 1312)
Member running version = INSTALLED
Member name = clu_host1
Member votes = 1
csid = 0x10002
83
TruCluster Servers
Cluster memberid = 2
Hostname = clu_host2
Cluster interconnect IP name = clu_host2-ics0
Cluster interconnect IP address = 10.0.0.2
Member state = UP
Member base O/S version = Compaq Tru64 UNIX V5.1A (Rev. 1885)
Member cluster version = TruCluster Server V5.1A (Rev. 1312)
Member running version = INSTALLED
Member name = clu_host2
Member votes = 1
csid = 0x20001
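The vote arithmetic in the output above can be checked by hand. The formula below reproduces the figures shown for this two-member cluster with a quorum disk; treat it as an illustration consistent with the sample output rather than the connection manager's authoritative algorithm.

```shell
# Quorum arithmetic for the cluster shown above: two members with one
# vote each plus one quorum disk vote. The floor(expected/2)+1 formula
# is an assumption consistent with the clu_get_info output; consult the
# TruCluster documentation for the authoritative rules.
member_votes=2          # 1 vote per member, 2 members
quorum_disk_votes=1
expected_votes=$((member_votes + quorum_disk_votes))
required=$((expected_votes / 2 + 1))
echo "Cluster expected votes = $expected_votes"
echo "Votes required for quorum = $required"
```

With expected votes of 3, two votes are required for quorum, so the cluster can survive the loss of any single member or of the quorum disk.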
Symmetrix connectivity
TruCluster V5.x can support multipath configurations to Symmetrix
Fibre Channel and SCSI devices (Figure 5 on page 88). A minimum of
two device paths per cluster member is recommended for basic load
balancing and path failover. Connecting shared Symmetrix devices to
multiple cluster members provides continued availability in case of
member failure. Additional device paths can be configured for higher
availability.
Figure 5  (SYM-000210dd) TruCluster V5.x Server Member 1 and Member 2, joined by a cluster interconnect and connected through two Fibre Channel switches to a Symmetrix system

TruCluster V5.x system disk requirements
Tru64 system disk: This is the source disk from which files and
directories are copied to create the initial cluster member and file
systems. This source disk does not need to be shared and does not
need to be a Symmetrix device. After the initial TruCluster
member is built, the source system disk has no cluster-related
function except as a potential emergency boot disk for resolving
cluster problems.
Member boot disks: Each cluster member must have its own
dedicated boot disk. A two-member cluster would require two
Symmetrix logical devices. The devices assigned as member boot
disks should be shared and connected to all cluster members.
Symmetrix configuration
Refer to the EMC Support Matrix for correct Symmetrix director port
settings.
Tru64 UNIX V5.x can support 16 SCSI device target IDs (0-F). Do not
use the same ID that is used by the HBA (usually SCSI ID 7). Eight
LUNs (0-7) are supported per SCSI target ID.
Fibre Channel support is up to 255 LUNs per Fibre Channel target.
Each Symmetrix Fibre director port is a single Fibre Channel target.
Persistent reservations
Procedure A
SCSIDEVICE
#
# Entry for Symmetrix SCSI devices
#
Type = disk
Name = "EMC" "SYMMETRIX"
PARAMETERS:
TypeSubClass = hard_disk, raid
BlockSize = 512
BadBlockRecovery = disabled
DynamicGeometry = true
LongTimeoutRetry = enabled
DisperseQueue = false
TagQueueDepth = 20
ReadyTimeSeconds = 45
InquiryLength = 160
RequestSenseLength = 160
PwrMgmt_Capable = false
ATTRIBUTE:
# ubyte[0] = 8  Disable AWRE/ARRE only, PR enabled
# ubyte[0] = 25 Disable PR & AWRE/ARRE, Enable I/O Barrier Patch
AttributeName = "DSBLflags"
Length = 4
ubyte[0] = 8
SCSIDEVICE
#
# Entry for Symmetrix Fibre Channel devices
#
Type = disk
Stype = 2
Name = "EMC" "SYMMETRIX"
PARAMETERS:
TypeSubClass = hard_disk, raid
BlockSize = 512
BadBlockRecovery = disabled
DynamicGeometry = true
LongTimeoutRetry = enabled
DisperseQueue = false
TagQueueDepth = 20
ReadyTimeSeconds = 45
InquiryLength = 160
RequestSenseLength = 160
PwrMgmt_Capable = false
ATTRIBUTE:
AttributeName = "DSBLflags"
Length = 4
ubyte[0] = 8
Verifying a Symmetrix entry
Persistent reservations
TruCluster V5.x will establish SCSI-3 persistent reservations on all
visible disk devices in the cluster. These device reservations prevent
write access by other hosts. If devices in a TruCluster configuration
are moved or reassigned to a host that is not a member of the
TruCluster, the cluster's persistent reservations must be cleared
before the devices can be written to by the non-cluster host. The
/usr/sbin/cleanPR script can be used to clear persistent reservations
from devices previously reserved by TruCluster. Modify the cleanPR
script to include EMC devices, or contact your EMC representative or
customer support to obtain a special script for clearing persistent
reservations from EMC devices.
Additional documentation
TruCluster Version 5.x technical overview, release notes, hardware
configuration, cluster installation, and cluster administration
manuals are available online at:
http://www.tru64unix.compaq.com/docs/pub_page/cluster_list.html
PART 2
VNX Series and CLARiiON
Connectivity
Host connectivity
Refer to the EMC Support Matrix or contact your EMC representative for the latest information on qualified hosts, HBAs, and connectivity equipment.
Logical devices
VNX series and CLARiiON support for Tru64 UNIX requires EMC AccessLogix. Hosts can be connected to only one storage group per VNX series and CLARiiON system. The maximum number of LUNs per storage group is 256, but Tru64 UNIX V5.x supports only up to 255 LUNs per Fibre Channel target ID, so 255 is the maximum number of LUNs per VNX series and CLARiiON system that each Tru64 UNIX host connected to an array can use.
The logical devices presented by VNX series and CLARiiON are the
same on each storage processor (SP). A logical unit (LU) reports itself
Device Ready on one SP and Device Not Ready on the other SP.
Use the WWIDMGR utility to verify that the HBAs are configured
for the fabric topology. You may have to perform an init from the
console prior to running WWIDMGR.
If the adapters are not set for fabric, set them with this command:
wwidmgr -set adapter -item 9999 -topo fabric
Note: This command sets all adapters to fabric topology.
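To confirm the topology setting afterward, the adapter state can be displayed from the same console. This is a sketch; wwidmgr -show adapter is a standard SRM console command, and another init may be required before booting:

```
P00>>> wwidmgr -show adapter
P00>>> init
```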
#
TypeSubClass = hard_disk, raid
BlockSize = 512
BadBlockRecovery = disabled
DynamicGeometry = true
DisperseQueue = false
TagQueueDepth = 32
PwrMgmt_Capable = false
LongTimeoutRetry = enabled
ReadyTimeSeconds = 90
InquiryLength = 120
RequestSenseLength = 120
#
ATTRIBUTE:
#
# Disable PR/AWRE/ARRE and enable I/O Barrier
#
AttributeName = "DSBLflags"
Length = 4
ubyte[0] = 25
#
ATTRIBUTE:
#
# Disable Start Unit with every VPD Inquiry
#
AttributeName = "VPDinfo"
Length = 16
ubyte[6] = 4
#
ATTRIBUTE:
#
# Report UDID value in hwmgr view devices
#
AttributeName = "rpt_chgbl_dev_ident"
Length = 4
4. Reboot the operating system. (This can be done at any time prior
to establishing a connection with the first LUN.)
5. After rebooting the system, issue the following command to
verify that the ddr.db file has been updated:
/sbin/ddr_config -s disk DGC '' '' 2
Note: The product name and version have been left blank because the /etc/ddr.dbase entry covers all devices with the vendor name DGC. The 2 at the end of the command is the device type value for a Fibre Channel device.
Disconnects           = enabled
TaggedQueuing         = enabled
CmdReordering         = enabled
LongTimeoutRetry      = enabled
DisperseQueue         = false
WideTransfers         = enabled
WCE_Capable           = true
PwrMgmt_Capable       = false
Additional_Flags      = 0x0
TagQueueDepth         = 0x20
ReadyTimeSeconds      = 0x5a
CMD_PreventAllow      = notsupported
CMD_ExtReserveRelease = notsupported
CMD_WriteVerify       = notsupported
Additional_Cmds       = 0x0
InquiryLength         = 0x78
RequestSenseLength    = 0x78
ATTRIBUTE:
AttributeName = rpt_chgbl_dev_ident
Length = 4
Ubyte[ 0] = 0x00
Ubyte[ 1] = 0x00
Ubyte[ 2] = 0x00
Ubyte[ 3] = 0x00
ATTRIBUTE:
AttributeName = VPDinfo
Length = 16
Ubyte[ 0] = 0x00
Ubyte[ 1] = 0x00
Ubyte[ 2] = 0x00
Ubyte[ 3] = 0x00
Ubyte[ 4] = 0x00
Ubyte[ 5] = 0x00
Ubyte[ 6] = 0x04
Ubyte[ 7] = 0x00
Ubyte[ 8] = 0x00
Ubyte[ 9] = 0x00
Ubyte[ 10] = 0x00
Ubyte[ 11] = 0x00
Ubyte[ 12] = 0x00
Ubyte[ 13] = 0x00
Ubyte[ 14] = 0x00
Ubyte[ 15] = 0x00
ATTRIBUTE:
AttributeName = DSBLflags
Length = 4
Ubyte[ 0] = 0x19
Ubyte[ 1] = 0x00
Ubyte[ 2] = 0x00
Ubyte[ 3] = 0x00
Note: You can safely ignore the warning message indicating that the product name is blank; the name is left blank so that all devices with a vendor name of DGC match this entry.
Follow the instructions that are included with this kit. Part of the
process is to rebuild the kernel.
What next?
If Fibre Channel HBA support is not built into the kernel as part of
the patch process, follow the steps under Rebuilding the Tru64
UNIX kernel on page 107.
Figure 6
Figure 7
If the zones are activated and your host system has been booted, you should see the initiator WWNs of your Tru64 UNIX HBAs with a Fibre Logged In status of Yes. If your system has booted and the zones have been established but no Fibre Logged In entries appear as Yes, you may have to wait for Unisphere/Navisphere Manager to update its initiator table from the storage system.
If, after 60 seconds, the storage system still does not show a Yes in the
Fibre Logged In column, examine the zoning configuration and
status using the fabric switch configuration tool supplied by the
switch manufacturer.
Registering the connection
When you have an established a connection between the SP and the
HBA, you must register the connection. The registration process
assigns a specific initiator type to the connection between the SP Port
and the HBA; this initiator type determines how the SP responds to
SCSI commands received on the port.
To register the connection:
1. Start the Register Initiator window by clicking the connection to
be registered.
The Register button darkens.
2. Click Register.
Figure 8
Figure 9
Figure 10
Figure 11
Preparatory steps
The steps of this installation process are described in earlier sections
of this document:
1. Install the HBA. Follow the procedure Installing the HBA on page 102.
2. Set the UDID. Follow the procedure Setting the UDID on page 108.
Note: It is recommended to set the Base UUID to something other than
zero. You must set a different Base UUID on each VNX series and
CLARiiON storage system to ensure that UUID values do not conflict.
Duplicate UUIDs can cause problems on the Tru64 UNIX host.
IMPORTANT
Ensure that the default SP of the boot LUN is the same SP used in
the initial single path configured earlier.
Add the newly created boot LUN and Tru64 UNIX host into a Storage
Group. LUN 0 must be assigned and present in the storage group.
Figure 12
5. When the INIT completes, verify that the LUN is visible with the
SHOW DEVICE command, which lists the LUN among the disk
devices.
Completing zoning
Additional zones and paths to both SPs can now be configured after
the /etc/ddr.dbase entry for VNX series and CLARiiON has been
added.
Setting BOOTDEF_DEV
Automated booting requires providing the correct contents for the
SRM console BOOTDEF_DEV environment variable. In the case of a
boot device that is accessible through multiple paths, it is crucial that
all paths are included for the BOOTDEF_DEV variable. This ensures
that the system can boot, as long as one path is valid.
Use the SHOW DEVICE console command to display the accessible
devices and identify the listed devices that refer to the boot device. Be
aware that there may be more than one access path for the device. For
example, if the boot device has a UDID of 788 and is on the second
KGPSA, it should be listed as $1$DGB788.
Each instance of $1$DGB788 listed in the display represents a path to
the boot device. You may see device lines, such as the following:
dgb788.1001.0.4.6    $1$DGB788    RAID 10    0845
dgc788.1001.0.8.6    $1$DGB788    RAID 10    0845
In this case, both entries represent a path to the boot device and must
be included in the definition of the BOOTDEF_DEV variable. The
proper set command is:
set bootdef_dev "dgb788.1001.0.4.6,dgc788.1001.0.8.6"
With this definition, the SRM console first attempts to boot using
dgb788.1001.0.4.6. If that times out, the console attempts
dgc788.1001.0.8.6. Include all possible paths in the definition to
ensure that the boot succeeds.
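The console steps above can be sketched end to end. This is a sketch using the device names from the example; the show command simply echoes the variable so the setting can be confirmed before booting:

```
P00>>> set bootdef_dev "dgb788.1001.0.4.6,dgc788.1001.0.8.6"
P00>>> show bootdef_dev
P00>>> boot
```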
ReadyTimeSeconds = 90
InquiryLength = 120
RequestSenseLength = 120
#
ATTRIBUTE:
#
# Disable AWRE/ARRE and enable PR
#
AttributeName = "DSBLflags"
Length = 4
ubyte[0] = 8
#
ATTRIBUTE:
#
# Disable Start Unit with every VPD Inquiry
#
AttributeName = "VPDinfo"
Length = 16
ubyte[6] = 4
#
ATTRIBUTE:
#
# Report UDID value in hwmgr -view devices
#
AttributeName = "rpt_chgbl_dev_ident"
Length = 4
Disconnects           = enabled
TaggedQueuing         = enabled
CmdReordering         = enabled
LongTimeoutRetry      = enabled
DisperseQueue         = false
WideTransfers         = enabled
WCE_Capable           = true
PwrMgmt_Capable       = false
Additional_Flags      = 0x0
TagQueueDepth         = 0x20
ReadyTimeSeconds      = 0x5a
CMD_PreventAllow      = notsupported
CMD_ExtReserveRelease = notsupported
CMD_WriteVerify       = notsupported
Additional_Cmds       = 0x0
InquiryLength         = 0x78
RequestSenseLength    = 0x78
ATTRIBUTE:
AttributeName = rpt_chgbl_dev_ident
Length = 4
Ubyte[ 0] = 0x00
Ubyte[ 1] = 0x00
Ubyte[ 2] = 0x00
Ubyte[ 3] = 0x00
ATTRIBUTE:
AttributeName = VPDinfo
Length
= 16
Ubyte[ 0] = 0x00
Ubyte[ 1] = 0x00
Ubyte[ 2] = 0x00
Ubyte[ 3] = 0x00
Ubyte[ 4] = 0x00
Ubyte[ 5] = 0x00
Ubyte[ 6] = 0x04
Ubyte[ 7] = 0x00
Ubyte[ 8] = 0x00
Ubyte[ 9] = 0x00
Ubyte[ 10] = 0x00
Ubyte[ 11] = 0x00
Ubyte[ 12] = 0x00
Ubyte[ 13] = 0x00
Ubyte[ 14] = 0x00
Ubyte[ 15] = 0x00
ATTRIBUTE:
AttributeName = DSBLflags
Length
= 4
Ubyte[ 0] = 0x08
Ubyte[ 1] = 0x00
Ubyte[ 2] = 0x00
Ubyte[ 3] = 0x00
Note: The product name and version have been left blank, as the
/etc/ddr.dbase entry covers all devices of vendor name DGC. Refer to
Creating an entry in ddr.dbase on page 103 for an example of the
output from this command. The DSBLflags entries must show 08 for
persistent reservations enabled, rather than 19.
3. After booting the initial TruCluster member, add the VNX series
and CLARiiON entry to both /etc/ddr.dbase and
/etc/.proto.ddr.dbase on the TruCluster member.
4. Run clu_add_member to add a new TruCluster member.
5. Before booting the new TruCluster member, ensure that only one
VNX series and CLARiiON SP is visible to it.
6. Boot and initialize the new TruCluster member, then add or verify
the VNX series and CLARiiON entry in the /etc/ddr.dbase file of
the new member.
7. Additional zones and paths to both SPs can be configured after
the VNX series and CLARiiON entry has been added to the new
member.
HBA management
The emxmgr utility can be used to display and manage the Fibre
Channel adapters on the host system.
To list all Fibre Channel HBAs, use this command:
emxmgr -d
Device naming
Tru64 V5.x uses dsk device names (departing from the traditional rz
device names used in previous versions of the operating system). The
device special files are stored in different directories, so the full
format of the device name depends on the device type:
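Based on the device special files used elsewhere in this document, the naming can be illustrated as follows (dsk2 is a hypothetical device name):

```
/dev/disk/dsk2c      block device special file for disk dsk2, partition c
/dev/rdisk/dsk2c     character (raw) device special file for the same partition
```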
Adding devices
After LUNs have been added to the storage group, it is time to add
the devices to your Tru64 host. There are two methods to add devices
to the Tru64 system:
The scan command instructs the system to rescan the SCSI buses so that the new devices can be found. This command executes asynchronously and, depending on your CPU and system configuration, can take several minutes to complete; the prompt returns before the command finishes.
Delay the show command until the scan command has had time to complete. If after a show you do not see all the devices you expect, wait and repeat the show until the devices are visible.
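A sketch of the sequence described above, using the hwmgr commands shown in this section (the scan returns immediately, so allow time before checking):

```
# hwmgr -scan scsi
# sleep 60
# hwmgr -show scsi
```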
The hwmgr -show scsi command displays all SCSI devices visible to
the system. The output is similar to the following:
        SCSI                 DEVICE  DEVICE  DRIVER NUM  DEVICE  FIRST
 HWID:  DEVICEID  HOSTNAME   TYPE    SUBTYPE OWNER  PATH FILE    VALID PATH
---------------------------------------------------------------------------
  271:  144       l82ba209   disk    none    0      2    dsk203  [1/0/0]
  273:  146       l82ba209   disk    none    0      1    (null)
  275:  3         l82ba209   disk    none    0      2    dsk207  [1/0/1]
  276:  9         l82ba209   disk    none    0      2    dsk208  [1/0/26]
   65:  0         l82ba209   disk    none    0      1    dsk0    [0/0/0]
   66:  1         l82ba209   disk    none    0      1    dsk1    [0/1/0]
   67:  2         l82ba209   disk    none    0      1    dsk2    [0/2/0]
   69:  4         l82ba209   cdrom   none    0      1    cdrom0  [0/5/0]
  129:  64        l82ba209   disk    none    2      2    dsk62   [1/0/17]
  130:  65        l82ba209   disk    none    0      1    (null)
  131:  66        l82ba209   disk    none    0      2    dsk64   [1/0/2]
  132:  67        l82ba209   disk    none    0      2    dsk65   [1/0/3]
There is no direct correlation between the device file name and the logical unit number, as there was in previous releases of the Tru64 operating system. Also, a value greater than one in the NUM PATH column indicates that multiple paths exist to the device, as seen for the devices on SCSI bus 1.
Multipath configurations
If the storage system has been configured for high availability,
multiple paths are presented to Tru64 for the same LUN. In these
configurations Tru64 can make use of its multipathing ability to
improve both reliability and performance.
Tru64 UNIX recognizes multiple paths to VNX series and CLARiiON
storage systems automatically as it sees identical WWIDs for the
multiple paths to the same LUN. The primary indication for this is
the number of paths listed in the hwmgr -show scsi output, as shown
earlier. Additional detail is provided in the hwmgr -show scsi -full
output.
For example, this command generates the listing that follows it:
hwmgr -show scsi -did 144 -full
        SCSI                 DEVICE  DEVICE  DRIVER NUM  DEVICE  FIRST
 HWID:  DEVICEID  HOSTNAME   TYPE    SUBTYPE OWNER  PATH FILE    VALID PATH
---------------------------------------------------------------------------
  271:  144       l82ba209   disk    none    0      2    dsk203  [1/0/0]

      WWID:01000010:6006-0170-845f-0000-e6a7-6303-2d10-d611

      BUS   TARGET  LUN   PATH STATE
      ------------------------------
      1     0       0     valid
      4     0       0     valid
LUN expansion
Unisphere/Navisphere offers two methods for expanding VNX
series and CLARiiON LUN capacity: RAID group and metaLUN
expansion. The AdvFS and UFS file systems on Tru64 UNIX can
support expanded LUNs. AdvFS file systems can be extended on
hosts with Tru64 UNIX V5.1B or later installed. UFS file systems can
be extended on hosts with Tru64 UNIX V5.1 or later installed.
The disk label of an expanded LUN must be updated before the new
capacity can be used by file systems. Disk partition sizes can be
increased to the new capacity, but the disk offsets of in-use disk
partitions must not be changed. The disk label updates should only
be done by experienced system administrators. Partitioning and
sizing errors in disk label updates can cause data loss. A data backup
is recommended before expanding a LUN.
The steps for file system LUN expansion are:
1. Back up data on the LUN to be expanded.
2. Save a copy of the existing disk label:
disklabel -r <dsk_name> > disklabel.orig.out
5. Rewrite or edit the existing disk label to reflect the new LUN
capacity. Increase the size of the disk partition containing the file
system to be extended. Do not change the offsets of any disk
partitions that are used or open:
disklabel -w <dsk_name>
disklabel -re <dsk_name>
PART 3
Appendix
Part 3 includes:
Appendix A, Methods of Data Migration
Device naming
Tru64 UNIX V5 configures device database entries and device names
based on the identifier (WWID) reported by disk devices. New dsk
device names and device special files are created only for disk
devices that report new and unique identifiers. The dsk device
names are independent of bus-target-LUN address and location. A
LUN that reports the same identifier as an existing device in the
device database is configured as an additional path to the existing
device. Since all Symmetrix and VNX series and CLARiiON systems
will report unique identifiers, a new dsk device name will be created
if an existing Symmetrix or VNX series and CLARiiON device is
removed and replaced by a new device at the same bus-target-LUN
address. System administrators can use the dsfmgr command to
manage device special files and rename devices. The hwmgr
commands can be used to view devices, dsk names, and WWID
information.
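For example, the commands named above can be used together (a sketch; dsk10 and dsk3 are hypothetical device names for a dsfmgr -e name exchange):

```
# hwmgr -view devices
# hwmgr -show scsi -full
# dsfmgr -e dsk10 dsk3
```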
Disk labels
The label of a disk device contains information such as the type,
geometry, and partition table of the disk. The disk label is usually
located on block 0 (zero) of the disk. The disklabel command can be
used to view or configure the partitions and boot block of a disk
device. The fstype values in the partition table of a disk label indicate
whether a disk partition is available for use or marked in use for
Logical Storage Manager (LSM), a file system, or swap space:
Devices that are not being used for file systems or LSM volumes
will usually be unlabeled or have unused in the fstype field of all
partitions.
Device partitions used for UFS will have 4.2BSD in the fstype
field.
Migration procedure and example
AdvFS domains can have more than one fileset. When using
vdump or vrestore, each fileset in the same domain must be
migrated individually.
Use a new disk device (or LSM volume) to create a new target
UFS file system or AdvFS domain and fileset. Optionally, if the
original source AdvFS domain has more than one fileset, create a
fileset in the new target domain for each original fileset to be
migrated. For example:
disklabel -r <new_device>
disklabel -z <new_device>
disklabel -rw <new_device> default
showfsets <source_domain_name> (AdvFS only)
mkfdmn <new_device> <new_domain_name> (AdvFS only)
mkfset <new_domain_name> <source_fileset_name> (AdvFS only)
newfs <new_device> (UFS only)
If the source and new file systems both use equivalent disk
partitions, an alternative is to swap the dsk device name files
of the old and new disk devices. The /etc/fstab file and
/etc/fdmns domain subdirectory device links do not need to
be updated if the new file system devices will reuse the
original dsk device name files.
For example:
dsfmgr -e <new_device> <old_device>
Migration procedure and example
2. Verify that all filesets in the AdvFS domain are mounted. View
the filesets in the domain with the showfsets command, and
mount any filesets that have not already been mounted:
showfsets <domain_name>
3. Configure new disk devices (or LSM volumes) that are large
enough to replace the old devices. Label each new disk device. If
migrating to a LSM volume, create the LSM volume with the new
devices. For example:
disklabel -r <new_device>
disklabel -z <new_device>
disklabel -rw <new_device> default
4. Add the new device (or LSM volume) to the AdvFS domain. If
necessary, more than one new device can be added to the domain.
For example:
addvol <new_device> <domain_name>
showfdmn data02_dmn
showfsets data02_dmn
disklabel -z dsk202
disklabel -rw dsk202 default
addvol /dev/disk/dsk202c data02_dmn
rmvol /dev/disk/dsk102c data02_dmn
showfdmn data02_dmn
umount /fs02_mnt
/sbin/advfs/verify data02_dmn
mount -t advfs data02_dmn#fs02 /fs02_mnt
Migration procedure and example
The LSM volume remains active and usable during the mirroring
and synchronization process.
2. Select new disk devices or disk partitions that are large enough to
replace the old storage devices. Label each disk device. For
example:
disklabel -r <new_device>
disklabel -z <new_device>
disklabel -rw <new_device> default
3. Initialize the new disk devices or disk partitions for LSM. Add the
new disk devices or disk partitions to the disk group of the LSM
volume to be migrated. For example:
voldisksetup -i <new_device>
voldg -g <disk_group_name> adddisk <new_device>
5. When the LSM volume mirror plexes are fully synchronized, the
old storage device plex can be disassociated by typing the
following:
volprint -htg <disk_group_name> <volume_name>
volplex -g <disk_group_name> dis <old_plex_name>
volprint -ht
disklabel -z dsk203
disklabel -rw dsk203 default
voldisksetup -i dsk203
voldg -g data03_dg adddisk dsk203
volassist -g data03_dg mirror data03_vol dsk203 &
volprint -htg data03_dg data03_vol
volplex -g data03_dg dis data03_vol-01
voledit -g data03_dg -r rm data03_vol-01
voldisk rm dsk103
The dsk device names of the new devices will be different than
the original device names. Reassigning the original device name
files to the new devices is a simple way to transition over to the
new devices with minimal reconfiguration. Type the following:
dsfmgr -e <new_dsk_name> <original_dsk_name>
If the original disk devices are LSM devices, the LSM disk group
that has been copied must be restored with saved LSM
configuration information. The disk group cannot be simply
imported because LSM will reject cloned devices. To prevent
problems that may occur if the original LSM devices and cloned
devices are both accessible at the same time, the original devices
should be disconnected before the new device copies are made
available to the host.
Migration procedure for LSM devices
3. Stop activity, unmount any file systems, and deport the LSM disk
group:
voldg deport <disk_group_name>
7. Restore the LSM disk group and enable the volumes. For
example, type:
voldisk rm <original_dsk_names>
volrestore -f -g <disk_group_name> -d <lsm_save_dir>
volume start <volume_names>
Swap space
A host may use one or more devices as swap space. Swap devices
can be configured with the swapon command and defined in the
/etc/sysconfigtab file.
In order to boot and operate properly from new system and boot
devices, the following configuration files and references may need to
be updated:
/etc/fstab file
The /etc/fstab entries specify the file systems that will be
mounted when the host is booted.
/etc/fdmns directory
If AdvFS file systems are used, the /etc/fdmns subdirectories
will contain device links for root domain, usr domain, and var
domain.
/etc/sysconfigtab file
The swapdevice attribute specifies the swap devices.
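As an illustration, the swapdevice attribute appears in the vm stanza of /etc/sysconfigtab. This is a minimal sketch, assuming two hypothetical swap partitions dsk3b and dsk4b:

```
vm:
        swapdevice=/dev/disk/dsk3b,/dev/disk/dsk4b
```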
You can migrate the system and boot device data of non-clustered
Tru64 UNIX V5 hosts using the methods described earlier in this
paper. However, note the following:
Migration procedure and example
You can use the AdvFS addvol and rmvol procedure only for the
/usr and /var file system domains. You cannot use the addvol
command to add devices to root domain.
2. Migrate the root (/), /usr, and /var file systems to the new
devices.
3. Depending on the method used to migrate the system and boot
devices, the new root file system may need to be updated to
reference the new devices. If necessary, mount the new root file
system and modify the /etc/fstab file, /etc/sysconfigtab file, or
/etc/fdmns subdirectories on the new root file system.
4. Shut down the Tru64 host and disconnect/take offline the old
system and boot devices. Type:
shutdown -h now
5. Set the new boot device at the system console and boot from the
new device. If the boot fails, update the configuration files on the
new root file system by booting to single-user mode (boot fl s) or
by booting an alternate boot device from which the new root file
system can be temporarily mounted. For example:
wwidmgr -clear all
wwidmgr -show wwid | more
wwidmgr -quickset -udid <boot_device_udid>
init
show dev
set bootdef_dev <boot_device_console_name>
boot
Cluster quorum
Some clusters may use a cluster quorum disk. A quorum disk is
optional but recommended. The quorum disk device is
configured with the clu_quorum command and referenced in
each member's /etc/sysconfigtab file. The quorum disk device
can be very small, so a Symmetrix gatekeeper device may be used
as the quorum disk. The quorum disk device should not be used
for any other data or purposes. The device must be shared and
connected to all cluster members.
In order to boot and operate properly from new system and boot
devices, the following configuration files and references may need to
be updated:
/etc/fstab file
The /etc/fstab entries specify the file systems that will be
mounted when the host is booted.
/etc/fdmns directory
The AdvFS domain subdirectories contain device links for the
cluster_root, cluster_usr, cluster_var, and member-specific root
domains.
You can use the clu_quorum command to set the quorum device and quorum vote attributes in /etc/sysconfigtab. You can determine the major and minor numbers of the device cnx partitions (h partitions) using the file or ls -l command.
You can migrate the TruCluster V5 system and boot device data using
the methods described earlier in this paper. However, note the
following:
You can use the AdvFS addvol and rmvol procedure only on the
clusterwide file system domains. You cannot use the addvol
command on the member boot device and member-specific root
domain. The quorum and member swap devices do not have
AdvFS file systems.
You can use the LSM mirroring procedure for the cluster_usr and
cluster_var domains. LSM can be used for the cluster_root
domain and member swap devices only with TruCluster V5.1A
and later. You cannot use LSM for the member boot and quorum
devices.
c. In the new target root domain fileset, edit the sysconfigtab file
and specify the new target device attribute values. For
example:
vi /target_member1_mnt/etc/sysconfigtab
swapdevice=/dev/disk/<new_member1_device_swap>
cluster_seqdisk_major=<new_member1_device_major>
cluster_seqdisk_minor=<new_member1_device_minor>
cd /target_root_mnt/etc/fdmns
mv cluster_root old_cluster_root
mv new_cluster_root cluster_root
mv cluster_usr old_cluster_usr
mv new_cluster_usr cluster_usr
mv cluster_var old_cluster_var
mv new_cluster_var cluster_var
mv root1_domain old_root1_domain
mv root<new_member1_id>_domain root1_domain
mv root2_domain old_root2_domain
mv root<new_member2_id>_domain root2_domain
8. Shut down the cluster. All members must be shut down. Type the
command:
shutdown -c now
9. Set bootdef_dev to the new member boot device and boot each
cluster member. For example:
wwidmgr -clear all
wwidmgr -show wwid | more
wwidmgr -quickset -udid <boot_device_udid>
init
show dev
set bootdef_dev <boot_device_console_name>
boot
11. Verify that the new storage devices are referenced in all
TruCluster system and boot device configuration files and
attributes. Remove the old storage devices. Optionally, reboot to
ensure that each cluster member can boot independently from the
new system and boot devices without any problem.
In the following example, the TruCluster V5 system and boot devices
are migrated with the vdump/vrestore procedure.
Identify the original TruCluster system and boot device configuration
# clu_get_info
Cluster information for cluster truclu100
Number of members configured in this cluster = 2
memberid for this member = 1
Quorum disk = dsk1286h
Quorum disk votes = 1
Information on each cluster member
Cluster memberid = 1
Hostname = truclu101
Cluster interconnect IP name = truclu101-mc0
Member state = UP
Cluster memberid = 2
Hostname = truclu102
Cluster interconnect IP name = truclu102-mc0
Member state = UP
# ls /etc/fdmns/cluster*
/etc/fdmns/cluster_root:
dsk1307c
/etc/fdmns/cluster_usr:
dsk1308c
/etc/fdmns/cluster_var:
dsk1310c
# ls /etc/fdmns/root*
/etc/fdmns/root1_domain:
dsk1311a
/etc/fdmns/root2_domain:
dsk1312a
Get the major/minor numbers of the target device partitions
# file /dev/disk/dsk1547h
/dev/disk/dsk1547h:     block special (19/27618)
# ls -l /dev/disk/dsk1547h
brw-------   1 root     system   19,27618 May 30 08:49 /dev/disk/dsk1547h
# file /dev/disk/dsk1593h
/dev/disk/dsk1593h:     block special (19/28354)
# file /dev/disk/dsk1594h
/dev/disk/dsk1594h:     block special (19/28370)
Migrate member1 boot disk
# disklabel -z dsk1593
# disklabel -rw -t advfs dsk1593 default
# clu_bdmgr -c dsk1593 3
*** Error ***
Bad disk label.
Creating AdvFS domains:
Creating AdvFS domain 'root3_domain#root' on partition '/dev/disk/dsk1593a'.
# mkdir /target_member1_mnt
# mount root3_domain#root /target_member1_mnt
# vdump -0f - /cluster/members/member1/boot_partition | vrestore -xf - -D
/target_member1_mnt
path     : /cluster/members/member1/boot_partition
dev/fset : root1_domain#root
type     : advfs
advfs id : 0x31b6d762.000c6ac6.1
vdump: Date of last level 0 dump: the start of the epoch
vdump: Dumping directories
vdump: Dumping 49785118 bytes, 3 directories, 22 files
vdump: Dumping regular files
vrestore: Date of the vdump save-set: Sat Jun 8 07:29:55 1996
vrestore: Save-set source directory : /cluster/members/member1/boot_partition
vrestore: warning: vdump/vrestore of quotas not supported for non local
filesystems.
vdump:
vdump:
vdump:
vdump:
vdump:
# disklabel -r dsk1588
Disk is unlabeled or, /dev/rdisk/dsk1588c is not in block 0 of the disk
# disklabel -rw dsk1588 default
# mkfdmn /dev/disk/dsk1588c new_cluster_root
# mkfset new_cluster_root root
# mkdir /target_root_mnt
# mount new_cluster_root#root /target_root_mnt
# vdump -0f - / | vrestore -xf - -D /target_root_mnt
path     : /
dev/fset : cluster_root#root
type     : advfs
advfs id : 0x31b6d784.0007a953.1
vdump: Date of last level 0 dump: the start of the epoch
vdump: Dumping directories
vrestore: Date of the vdump save-set: Sat Jun 8 08:42:12 1996
vrestore: Save-set source directory : /
vdump: Dumping 154048065 bytes, 236 directories, 22525 files
vdump: Dumping regular files
vrestore: warning: vdump/vrestore of quotas not supported for non local
filesystems.
vdump:
vdump:
vdump:
vdump:
vdump:
cd /target_root_mnt/etc/fdmns
mv cluster_root old_cluster_root
mv cluster_usr old_cluster_usr
mv cluster_var old_cluster_var
mv new_cluster_root cluster_root
mv new_cluster_usr cluster_usr
mv new_cluster_var cluster_var
mv root1_domain old_root1_domain
mv root2_domain old_root2_domain
mv root3_domain root1_domain
mv root4_domain root2_domain
# ls /target_root_mnt/etc/fdmns/cluster*
/target_root_mnt/etc/fdmns/cluster_root:
dsk1588c
/target_root_mnt/etc/fdmns/cluster_usr:
dsk1589c
/target_root_mnt/etc/fdmns/cluster_var:
dsk1590c
# ls /target_root_mnt/etc/fdmns/root*
/target_root_mnt/etc/fdmns/root1_domain:
dsk1593a
/target_root_mnt/etc/fdmns/root2_domain:
dsk1594a
(Configure a new quorum device after shutting down the cluster and rebooting from
the new system
and boot devices)
# shutdown -c now
2. Stop all VNX series and CLARiiON I/O activity (or shut down the entire cluster), and copy the TruCluster system and boot device data to the new storage devices with TimeFinder or SRDF mirror and split operations for Symmetrix arrays. For VNX series and CLARiiON systems, use MirrorView to establish the mirrors between arrays, allowing the synchronization to complete prior to fracturing the mirror and promoting the secondary image on the remote array.
3. After migrating the data and shutting down the cluster members,
disconnect or take offline the old system and boot devices.
4. Set the new boot device for the first cluster member. For example:
wwidmgr -clear all
wwidmgr -show wwid | more
wwidmgr -quickset -udid <boot_device_udid>
init
show dev
set bootdef_dev <boot_device_console_name>
6. Mount the cluster root file system with write permission, and
modify the /etc/fdmns subdirectory device links to reference the
new devices. (An alternative is to use dsfmgr to rename the new
devices.) Type:
mount -u /
cd /etc/fdmns
umount /mnt
mount root3_domain#root /mnt
vi /mnt/etc/sysconfigtab
umount /mnt
10. Boot the remaining cluster members from their new member boot
devices. For example:
wwidmgr -clear all
wwidmgr -show wwid | more
wwidmgr -quickset -udid <boot_device_udid>
init
show dev
set bootdef_dev <boot_device_console_name>
boot
6006-0480-0001-8460-0032-5359-4d30-3941
                     via adapter:    via fc nport:          connected:
dga2202.1001.0.7.9   pga0.0.0.7.9    5006-0482-c031-782d    Yes
dga2202.1002.0.7.9   pga0.0.0.7.9    5006-0482-c031-782e    Yes
Compaq Tru64 UNIX V5.1 (Rev. 732); Thu Jun 6 12:59:06 EDT 2001
.
INIT: SINGLE-USER MODE
Modify the /etc/fdmns subdirectory device links
# mount -u /
msfs_mount: The mount device does not match the linked device.
Check linked device in /etc/fdmns/domain
msfs_mount: Setting root device name to root_device RW
# cd /etc/fdmns
# ls clu*
cluster_root:
dsk1198c
cluster_usr:
dsk1199c
cluster_var:
dsk1200c
# ls root*
root1_domain:
dsk1203a
root2_domain:
dsk1204a
#
# mv cluster_root old_cluster_root
# mv cluster_usr old_cluster_usr
# mv cluster_var old_cluster_var
# mv root1_domain old_root1_domain
# mv root2_domain old_root2_domain
# mkdir cluster_root
# ln -s /dev/disk/dsk1588c /etc/fdmns/cluster_root
# mkdir cluster_usr
# ln -s /dev/disk/dsk1589c /etc/fdmns/cluster_usr
# mkdir cluster_var
# ln -s /dev/disk/dsk1590c /etc/fdmns/cluster_var
# mkdir root1_domain
# ln -s /dev/disk/dsk1593a /etc/fdmns/root1_domain
# mkdir root2_domain
# ln -s /dev/disk/dsk1594a /etc/fdmns/root2_domain
# ls clu*
cluster_root:
dsk1588c
cluster_usr:
dsk1589c
cluster_var:
dsk1590c
# ls root*
root1_domain:
dsk1593a
root2_domain:
dsk1594a
Update the member /etc/sysconfigtab files
# bcheckrc
Checking device naming:
Passed.
dsfmgr: NOTE: updating kernel basenames for system at /
.
Mounting local filesystems
exec: /sbin/mount_advfs -F 0x14000 cluster_root#root /
cluster_root#root on / type advfs (rw)
exec: /sbin/mount_advfs -F 0x4000 cluster_usr#usr /usr
cluster_usr#usr on /usr type advfs (rw)
exec: /sbin/mount_advfs -F 0x4000 cluster_var#var /var
cluster_var#var on /var type advfs (rw)
/proc on /proc type procfs (rw)
# vi /etc/sysconfigtab
vm:
swapdevice=/dev/disk/dsk1594b
clubase:
cluster_seqdisk_major=19
cluster_seqdisk_minor=28370
# mount root1_domain#root /mnt
# vi /mnt/etc/sysconfigtab
vm:
swapdevice=/dev/disk/dsk1593b
clubase:
cluster_seqdisk_major=19
cluster_seqdisk_minor=28354
# umount /mnt
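Before committing the cluster_seqdisk_major and cluster_seqdisk_minor values, the device's actual major and minor numbers can be read from ls -l, which prints "major, minor" in place of the size field for device special files. This helper is an illustrative assumption, not a documented command:

```shell
#!/bin/sh
# Sketch only: print "major minor" for a device special file by parsing
# ls -l output, for comparison with the cluster_seqdisk_major/minor
# values recorded in /etc/sysconfigtab.
dev_numbers() {
    ls -l "$1" | awk '{ sub(",", "", $5); print $5, $6 }'
}
```

For example, `dev_numbers /dev/disk/dsk1594b` would report the numbers to enter for member 2 in this example configuration.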
Configure the new quorum device, adjust expected votes, and reboot
# exit
# clu_quorum -f -d remove
Collecting quorum data for Member(s): 1 2
CNX MGR: Delete quorum disk operation completed with quorum.
Quorum disk successfully removed.
# clu_quorum -f -d add dsk1547 1
Collecting quorum data for Member(s): 1 2
Related documentation
Compaq/HP documentation:
http://www.tru64unix.compaq.com/docs/pub_page/doc_list.html
http://www.tru64unix.compaq.com/docs/base_doc/DOCUMENTATION/V51A_HTML/REF_LIB.HTM
http://www.tru64unix.compaq.com/docs/pub_page/V51A_DOCS/ADM_DOCS.HTM
http://www.tru64unix.compaq.com/docs/pub_page/cluster_list.html
http://www.tru64unix.compaq.com/docs/cluster_doc/cluster_51A/TCR51A_DOC.HTM
Index
A
addressing
fabric 64
FC-AL 63
logical units 65
peripheral devices 65
SCSI-3 65
volume sets 65
addressing Symmetrix devices 63
Advanced File System. See AdvFS
AdvFS
creating and mounting 27
described 26
AdvFS domain, reconstructing 28
AlphaServer host 69
asemgr 77
Available Server 76
Available Server Environment (ASE) 76
B
boot device, configuring
Symmetrix/Fibre Channel 57
Symmetrix/SCSI 69
VNX/CLARiiON 115
boot device, Symmetrix 57
boot LUN, VNX/CLARiiON, binding 116
BOOTDEF_DEV, setting 120
C
CAA 86
Command Descriptor Blocks (CDB) 65
commands
emxmgr 128
hwmgr 62
Comments 16
Compaq HBA, installing
for Symmetrix/Fibre Channel 56
for Symmetrix/SCSI 69
for VNX/CLARiiON 102
connection properties, setting for
VNX/CLARiiON 109
D
device entry, Symmetrix, adding to host file 59
device masking, Symmetrix 54
device naming
Symmetrix 24
VNX/CLARiiON 128
devices, Symmetrix 62
addressing 63
labeling 25
partitioning 25
devices, VNX/CLARiiON, adding 129
direct access device 66
director ports, Fibre Channel 53
documentation, related 14
documentation, Tru64 UNIX 14, 20
driver, Fibre Channel, upgrading
for Symmetrix 58
for VNX/CLARiiON 107
dsfmgr 23
E
emxmgr 128
error messages, Tru64 UNIX 34
EMC Host Connectivity Guide for Tru64 UNIX
H
HBA, Compaq, installing
for Symmetrix/Fibre Channel 56
for Symmetrix/SCSI 69
for VNX/CLARiiON 102
high availability 76
host configuration
Symmetrix/Fibre Channel 56
Symmetrix/SCSI 69
VNX/CLARiiON 102
hwmgr command 23, 62
I
iostat 23
K
kernel, rebuilding
for Symmetrix/Fibre Channel 58
for VNX/CLARiiON 107
L
labeling Symmetrix devices 25
LIP 63
Logical Storage Manager. See LSM
logical unit addressing 65
logical volumes 68
loop initialization process 63
LSM 30
LSM examples
creating a 4-way striped volume 32
creating a mirrored (2-way) volume 31
setting up 30
LUN expansion 28
LUN support
Symmetrix/Fibre Channel 52
LUN trespassing and path failover, VNX/CLARiiON 130
LUN, boot, VNX/CLARiiON, binding 116
M
management station, defined 101
mkdmn 23
mkfset 23
multipath configurations, for VNX/CLARiiON 130
N
naming devices
Symmetrix 24
VNX/CLARiiON 128
newfs 23
P
partitioning Symmetrix devices 25
patches, Tru64 UNIX 14, 20
peripheral device addressing 65
persistent reservations 54
enabling for VNX/CLARiiON 122
Production Server 76
S
SCSI-3 FCP 63
scsimgr 23
serial number, Symmetrix 62
SRM console boot device, preparing 117
storage system components 100
Symmetrix
device 62
device masking 54
disk devices 68
OVMS 53
serial number 62
Symmetrix configuration
Fibre Channel 53
logical-to-physical volume splits 68
Symmetrix devices, addressing 63
system messages, Tru64 UNIX 34
T
Tru64 Fibre Channel driver 58
Tru64 UNIX
V
VCMDB device masking guidelines 54
VNX/CLARiiON configuration 101
volume set addressing 65
W
Worldwide Port Name (WWPN) 64
Z
zone, preliminary, establishing (VNX/CLARiiON) 116
zoning, planning
for Symmetrix 56
for VNX/CLARiiON 108