Abstract
This implementation guide provides information for establishing communications between an HP 3PAR Storage System and a
Solaris 8, 9, 10, or 11 host running on the SPARC, x64, and x86 platforms. General information is also provided on the basic
steps required to allocate storage on the HP 3PAR Storage System that can then be accessed by the Solaris host.
Contents
1 Introduction...............................................................................................6
Supported Configurations..........................................................................................................6
InForm OS Upgrade Considerations............................................................................................6
Audience.................................................................................................................................6
Related Documentation..............................................................................................................7
Typographical Conventions........................................................................................................7
Advisories................................................................................................................................8
A Configuration Examples............................................................................76
Example of Discovering a VLUN Using qlc/emlx Drivers with SSTM...............................................76
Example of Discovering a VLUN Using an Emulex Driver and VxVM..............................................76
Example of Discovering a VLUN Using a QLogic Driver with VxVM...............................................77
Example of UFS/ZFS File System Creation..................................................................................77
Examples of Growing a Volume................................................................................................78
Growing an SSTM Volume..................................................................................................78
Growing a VxVM Volume...................................................................................................80
VxDMP Command Examples....................................................................................................82
Displaying I/O Statistics for Paths........................................................................................82
Managing Enclosures.........................................................................................................82
Changing Policies..............................................................................................................83
Accessing VxDMP Path Information......................................................................................83
Listing Controllers..........................................................................................................83
Displaying Paths............................................................................................................83
B Patch/Package Information........................................................................85
Minimum Patch Requirements for Solaris Versions........................................................................85
Patch Listings for Each SAN Version Bundle................................................................................87
HBA Driver/DMP Combinations...............................................................................................89
Minimum Requirements for a Valid QLogic qlc + VxDMP Stack................................................89
Minimum Requirements for a Valid Emulex emlxs + VxDMP Stack.............................................90
Default MU level Leadville Driver Table.................................................................................90
C FCoE-to-FC Connectivity............................................................................92
1 Introduction
This implementation guide provides information for establishing communications between an
HP 3PAR Storage System and a Solaris 8, 9, 10, or 11 host running on the SPARC, x64, and x86
platforms. General information is also provided on the basic steps required to allocate storage on
the HP 3PAR Storage System that can then be accessed by the Solaris host.
The information contained in this implementation guide is the outcome of careful testing of the
HP 3PAR Storage System with as many representative hardware and software configurations as
possible.
Required
For predictable performance and results with your HP 3PAR Storage System, the information in
this guide must be used in concert with the documentation set provided by HP for the HP 3PAR
Storage System and the documentation provided by the vendor for their respective products.
Required
All installation steps should be performed in the order described in this implementation guide.
Supported Configurations
The following types of host connections are supported between the HP 3PAR Storage System and
hosts running a Solaris OS:
Fibre Channel
iSCSI
For information about supported hardware and software platforms, see the HP Single Point of
Connectivity Knowledge (SPOCK) website:
http://www.hp.com/storage/spock
Audience
This implementation guide is intended for system and storage administrators who monitor and
direct system configurations and resource allocation for the HP 3PAR Storage System.
The tasks described in this guide assume that the administrator is familiar with Sun Solaris and the
InForm OS.
Although this guide attempts to provide the basic information that is required to establish
communications between the HP 3PAR Storage System and the Sun Solaris host, and to allocate
the required storage for a given configuration, the appropriate HP documentation must be consulted
in conjunction with the Solaris host and host bus adapter (HBA) vendor documentation for specific
details and procedures.
NOTE: This implementation guide is not intended to reproduce any third-party product
documentation. For details about devices such as host servers, HBAs, fabric switches, and
non-HP 3PAR software management tools, consult the appropriate third-party documentation.
Related Documentation
The following documents also provide information related to the HP 3PAR Storage System and the
InForm OS:
For information about... | Read the...
Typographical Conventions
This guide uses the following typographical conventions:
Typeface      Meaning                                            Example
ABCDabcd      Commands, file names, and literal user input       # cd /opt/3par/console
              and system output
<ABCDabcd>    A variable that you replace with an actual value
[ABCDabcd]    An optional element
Advisories
To avoid injury to people or damage to data and equipment, be sure to observe the cautions and
warnings in this guide. Always be careful when handling any electrical equipment.
NOTE: Notes are reminders, tips, or suggestions that supplement the procedures included in this guide.
Required
Requirements signify procedures that must be followed as directed in order to achieve a functional
and supported implementation based on testing at HP.
WARNING! Warnings alert you to actions that can cause injury to people or irreversible damage
to data or the operating system.
CAUTION: Cautions alert you to actions that can cause damage to equipment, software, or data.
Required
If you are setting up a fabric along with your installation of the HP 3PAR Storage System, see
Setting Up and Zoning the Fabric (page 14) before configuring or connecting your HP 3PAR
Storage System.
Required
The following setup must be completed before connecting the HP 3PAR Storage System port to a
device.
1. To determine if a port has already been configured as a host port in fabric mode, issue
showport -par on the HP 3PAR Storage System.
# showport -par
N:S:P Connmode ConnType CfgRate MaxRate Class2 UniqNodeWwn VCN IntCoal
0:0:1 disk loop auto 2Gbps disabled disabled disabled enabled
0:0:2 disk loop auto 2Gbps disabled disabled disabled enabled
0:0:3 disk loop auto 2Gbps disabled disabled disabled enabled
0:0:4 disk loop auto 2Gbps disabled disabled disabled enabled
0:4:1 host point auto 4Gbps disabled disabled disabled enabled
0:4:2 host point auto 4Gbps disabled disabled disabled enabled
0:5:1 host point auto 2Gbps disabled disabled disabled enabled
0:5:2 host loop auto 2Gbps disabled disabled disabled enabled
0:5:3 host point auto 2Gbps disabled disabled disabled enabled
0:5:4 host loop auto 2Gbps disabled disabled disabled enabled
1:0:1 disk loop auto 2Gbps disabled disabled disabled enabled
1:0:2 disk loop auto 2Gbps disabled disabled disabled enabled
1:0:3 disk loop auto 2Gbps disabled disabled disabled enabled
1:0:4 disk loop auto 2Gbps disabled disabled disabled enabled
1:2:1 host point auto 2Gbps disabled disabled disabled enabled
1:2:2 host loop auto 2Gbps disabled disabled disabled enabled
1:4:1 host point auto 2Gbps disabled disabled disabled enabled
1:4:2 host point auto 2Gbps disabled disabled disabled enabled
2. If the port has not been configured, take the port offline before configuring it for connection
to a host server. To take the port offline, issue the InForm OS CLI command controlport
offline <node:slot:port>.
# controlport offline 1:5:1
3. To configure the port for the host server, issue controlport config host -ct point
<node:slot:port>, where -ct point specifies a fabric (point-to-point) connection type.
For example:
# controlport config host -ct point 1:5:1
4.
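The check-and-configure sequence in the steps above can also be scripted rather than run port by port. The sketch below is a minimal POSIX sh example, not an HP-provided tool: it parses captured showport -par output (the here-document stands in for a live array) and lists the ports that are not yet fabric host ports, against which an administrator would then run controlport offline and controlport config host -ct point.

```shell
#!/bin/sh
# Sketch: list ports NOT yet configured as fabric host ports
# ("host" Connmode with "point" ConnType) in saved showport -par output.
# The sample data below is illustrative; on a live system, pipe the real
# command output instead of using the here-document.
showport_par() {
cat <<'EOF'
N:S:P Connmode ConnType CfgRate MaxRate Class2 UniqNodeWwn VCN IntCoal
0:0:1 disk loop auto 2Gbps disabled disabled disabled enabled
0:4:1 host point auto 4Gbps disabled disabled disabled enabled
0:5:2 host loop auto 2Gbps disabled disabled disabled enabled
EOF
}

# Print every N:S:P that is not already a fabric (point) host port.
showport_par | awk 'NR > 1 && !($2 == "host" && $3 == "point") { print $1 }'
```

Each printed port is a candidate for the controlport offline / controlport config host -ct point sequence described above.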
2. To verify that the host has been created, issue the showhost command.
# showhost
Id Name Persona -WWN/iSCSI_Name- Port
6 solarishost Generic 1122334455667788 --
                      1122334455667799 ---
[Table 1 (page 11): recommended host personas for SSTM/MPxIO and VxDMP on Solaris 8, 9, 10, and 11]
NOTE: Host persona 6 is automatically assigned following a rolling upgrade from InForm
OS 2.2.x.
If appropriate, you can change host persona 6 after an upgrade to the appropriate value as
shown in Table 1 (page 11).
Host personas 1 and 2 enable two functional features:
UARepLun, which notifies the host of newly exported VLUNs and triggers a LUN discovery
request on the host, making the VLUN automatically available in format.
SESLun, which presents a SCSI enclosure services LUN (LUN 254) to the host and is required
by HP 3PAR Host Explorer.
Required
The following setup must be completed before connecting the HP 3PAR Storage System port to a
device.
Verify port personality 4, connection type loop, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN -----------Persona------------ IntCoal
1:3:2 loop auto 4Gbps disable disabled (4) emx, g_hba, g_os, 0, DC enabled
Verify port personality 7, connection type point, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN -----------Persona------------ IntCoal
0:5:1 point auto 4Gbps disable enabled (7) g_ven, g_hba, g_os, 0, FA enabled
Verify port personality 1, connection type loop, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN -----------Persona------------ IntCoal
1:4:2 loop auto 2Gbps disable enabled (1) g_ven, g_hba, g_os, 0, DC enabled
Verify port personality 7, connection type point, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN -----------Persona------------ IntCoal
0:5:1 point auto 4Gbps disable enabled (7) g_ven, g_hba, g_os, 0, FA enabled
Verify port personality 1, connection type loop, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN -----------Persona------------ IntCoal
1:4:2 loop auto 2Gbps disable enabled (1) g_ven, g_hba, g_os, 0, DC enabled
Verify port personality 9, connection type point, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN -----------Persona------------ IntCoal
0:5:1 point auto 4Gbps disable enabled (9) g_ven,g_hba, g_os, 0, FA enabled
Verify port personality 3, connection type loop, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN ----------Persona------------ IntCoal
1:4:2 loop auto 2Gbps disable disabled (3) jni, g_hba, g_os, 0, DC enabled
Verify port personality 7, connection type point, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN ----------Persona----------- IntCoal
0:5:1 point auto 4Gbps disable disabled *(7) g_ven,g_hba, g_os, 0, FA enabled
WARNING! The controlport offline command for the HP 3PAR Storage System LSI 929
HBA requires firmware versions greater than 02.00.21 when connected to a JNI Tachyon host
HBA.
Verify port personality 3, connection type loop, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN ----------Persona------------ IntCoal
1:4:2 loop auto 2Gbps disable disabled (3) jni, g_hba, g_os, 0, DC enabled
Verify port personality 7, connection type point, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN -----------Persona------------ IntCoal
0:4:1 point auto 4Gbps disable enabled (7) g_ven, g_hba, g_os, 0, FA enabled
2. To verify that the host has been created, issue the showhost command.
# showhost
Id Name -WWN/iSCSI_Name- Port
0 sqa-solaris 1122334455667788 --
              1122334455667799 ---
Setting Up and Zoning the Fabric
You can set up fabric zoning by associating the device World Wide Names (WWNs) or the switch
ports with specified zones in the fabric. Although you can use either the WWN method or the port
zoning method with the HP 3PAR Storage System, the WWN zoning method is recommended
because the zone then survives switch port changes when cables are moved within the fabric.
Required
Employ fabric zoning, using the methods provided by the switch vendor, to create relationships
between host server HBA ports and storage server ports before connecting the host server HBA
ports or HP 3PAR Storage System ports to the fabric(s).
Fibre Channel switch vendors support the zoning of the fabric end-devices in different zoning
configurations. There are advantages and disadvantages with each zoning configuration. Choose
a zoning configuration based on your needs.
The HP 3PAR arrays support the following zoning configurations:
One initiator to multiple targets per zone (zoning by HBA). This zoning configuration is
recommended for the HP 3PAR Storage System. Zoning by HBA is required for coexistence
with other HP Storage arrays.
NOTE: The storage targets in the zone can be from the same HP 3PAR Storage System,
multiple HP 3PAR Storage Systems, or a mixture of HP 3PAR and other HP storage systems.
For more information about using one initiator to multiple targets per zone, see Zoning by HBA
in the Best Practices chapter of the HP SAN Design Reference Guide. This document is available
on the HP BSC website:
http://www.hp.com/go/3par/
If you use an unsupported zoning configuration and an issue occurs, HP may require that you
implement one of the supported zoning configurations as part of the troubleshooting or corrective
action.
After configuring zoning and connecting each host server HBA port and HP 3PAR Storage System
port to the fabric(s), verify the switch and zone configurations using the InForm OS CLI showhost
command, to ensure that each initiator is zoned with the correct target(s).
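The zoning-by-HBA configuration recommended above can be staged by generating the switch commands from the WWNs before touching the fabric. The sketch below is a minimal POSIX sh generator: the WWNs, zone name, and configuration name are hypothetical placeholders, and the emitted zonecreate/cfgadd/cfgsave/cfgenable lines follow Brocade Fabric OS syntax; adapt them for other switch vendors.

```shell
#!/bin/sh
# Sketch: emit one-initiator-to-multiple-targets (zoning-by-HBA) commands
# for a Brocade switch. All WWNs and the zone/config names below are
# hypothetical placeholders; substitute your real host HBA WWN and
# HP 3PAR port WWNs before use.
HOST_HBA_WWN="10:00:00:00:c9:aa:bb:cc"
TARGET_WWNS="20:41:00:02:ac:00:00:3e 21:41:00:02:ac:00:00:3e"
ZONE="solarishost_hba0"

# Build the semicolon-separated member list: initiator first, then targets.
members="$HOST_HBA_WWN"
for t in $TARGET_WWNS; do
  members="$members;$t"
done

# Print (do not run) the Brocade CLI commands to create and enable the zone.
echo "zonecreate \"$ZONE\", \"$members\""
echo "cfgadd \"my_cfg\", \"$ZONE\""
echo "cfgsave"
echo "cfgenable \"my_cfg\""
```

Review the emitted lines, then paste them into the switch's telnet session; one such zone per host HBA port implements the recommended configuration.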
HP 3PAR Coexistence
The HP 3PAR Storage System array can coexist with other HP array families.
For supported HP array combinations and rules, see the HP SAN Design Reference Guide, available
on the HP BSC website:
http://www.hp.com/go/3par/
Brocade switch ports that connect to a host server HBA port or to an HP 3PAR Storage System
port should be set to their default mode. On Brocade 3xxx switches running Brocade firmware
3.0.2 or later, verify that each switch port is in the correct mode using the Brocade telnet
interface and the portcfgshow command, as follows:
brocade2_1:admin> portcfgshow
Ports            0  1  2  3   4  5  6  7
-----------------+--+--+--+--+--+--+--+--
Speed            AN AN AN AN  AN AN AN AN
Trunk Port       ON ON ON ON  ON ON ON ON
Locked L_Port    .. .. .. ..  .. .. .. ..
Locked G_Port    .. .. .. ..  .. .. .. ..
Disabled E_Port  .. .. .. ..  .. .. .. ..
where AN:AutoNegotiate, ..:OFF, ??:INVALID.
The following fill-word modes are supported on a Brocade 8 Gb/s switch running FOS firmware
6.3.1a and later:
admin> portcfgfillword
Usage: portCfgFillWord PortNumber Mode [Passive]
Mode: 0/-idle-idle   - IDLE in Link Init, IDLE as fill word (default)
      1/-arbff-arbff - ARBFF in Link Init, ARBFF as fill word
      2/-idle-arbff  - IDLE in Link Init, ARBFF as fill word (SW)
      3/-aa-then-ia  - If ARBFF/ARBFF failed, then do IDLE/ARBFF
HP recommends setting the fill word to mode 3 (aa-then-ia), the preferred mode, using the
portcfgfillword command. If the fill word is not set correctly, the er_bad_os (invalid
ordered set) counters reported by the portstatsshow command will increase on ports
connected to 8 Gb/s HBA ports, which require the ARBFF-ARBFF fill word. Mode 3 also works
correctly for lower-speed HBAs, such as 4 Gb/s and 2 Gb/s HBAs. For more information, see
the Fabric OS Command Reference Manual supporting FOS 6.3.1a and the FOS release
notes.
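Setting mode 3 on every affected port can be tedious on a large switch. The sketch below simply generates the portcfgfillword commands for a list of ports; the port list is an assumption for illustration, and the emitted lines are meant to be pasted into the Brocade telnet session after review.

```shell
#!/bin/sh
# Sketch: generate "portcfgfillword <port> 3" for each switch port that
# connects to an 8 Gb/s HBA. The port list below is an illustrative
# assumption; replace it with your switch's actual port numbers.
PORTS="0 1 2 3"
for p in $PORTS; do
  echo "portcfgfillword $p 3"
done
```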
In addition, some HP switches, such as the HP SN8000B 8-slot SAN backbone director switch,
the HP SN8000B 4-slot SAN director switch, the HP SN6000B 16 Gb FC switch, or the HP
SN3000B 16 Gb FC switch automatically select the proper fill-word mode 3 as the default
setting.
McDATA switch or director ports should be in their default modes as type GX-Port with a
speed setting of Negotiate.
Cisco switch ports that connect to HP 3PAR Storage System ports or host HBA ports should
be set to AdminMode = FX and AdminSpeed = auto, so that the port speed is auto-negotiated.
Maximum of 64 host server ports per HP 3PAR Storage System port, with a maximum total of
1,024 host server ports per HP 3PAR Storage System.
The I/O queue depth varies by HP 3PAR Storage System HBA model.
The I/O queues are shared among the connected host server HBA ports on a first-come,
first-served basis.
When all queues are in use and a host HBA port tries to initiate I/O, it receives a target queue
full response from the HP 3PAR Storage System port. This condition can result in erratic I/O
performance on each host server. If this condition occurs, each host server should be throttled
so that it cannot overrun the HP 3PAR Storage System port's queues when all host servers are
delivering their maximum number of I/O requests.
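As a sketch of the throttling arithmetic described above: one common Solaris-side control is the sd_max_throttle tunable in /etc/system (using that tunable here is our assumption, not a procedure stated by this guide), sized so that all hosts together cannot exceed the array port's queue depth. The queue depth, host count, and LUN count below are illustrative values only.

```shell
#!/bin/sh
# Sketch: size a per-host sd_max_throttle so that all connected hosts
# together cannot overrun one HP 3PAR port's I/O queue. All three input
# values are illustrative assumptions; use the real queue depth for your
# array port's HBA model and your actual host and LUN counts.
PORT_QUEUE_DEPTH=959   # assumed I/O queue depth of the array port
HOSTS=4                # host HBA ports sharing this array port
LUNS=8                 # LUNs exported per host on this port

# Divide the port queue evenly across every host/LUN pair.
throttle=$((PORT_QUEUE_DEPTH / (HOSTS * LUNS)))
echo "set sd:sd_max_throttle=$throttle   # candidate line for /etc/system"
```

A change to /etc/system takes effect only after a reboot; treat the computed value as a conservative starting point and validate it under load.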
NOTE: When host server ports can access multiple targets on fabric zones, the target number
assigned by the host driver for each discovered target can change when the host server is
booted and some targets are not present in the zone. This situation may change the device
node access point for devices during a host server reboot. This issue can occur with any
fabric-connected storage, and is not specific to the HP 3PAR Storage System.
Each HP 3PAR Storage System iSCSI target port that will be connected to an iSCSI initiator must
be set up appropriately for your configuration, as described in the following steps.
The following example shows the default HP 3PAR Storage System 1 Gb iSCSI port settings, before
configuration:
# showport -iscsi
N:S:P State     IPAddr  Netmask Gateway TPGT MTU  Rate
0:3:1 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0  31  1500 n/a
0:3:2 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0  32  1500 n/a
1:3:1 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0 131  1500 n/a
1:3:2 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0 132  1500 n/a
NOTE: 10 Gb iSCSI ports (only) require a one-time configuration using the controlport command.
Use the showport and showport -i commands to verify the configuration setting. Example:
# showport
N:S:P Mode      State       ----Node_WWN---- -Port_WWN/HW_Addr- Type Protocol
0:3:1 suspended config_wait -                -                  cna
0:3:2 suspended config_wait -                -                  cna
# showport -i
N:S:P Brand  Model   Rev Firmware Serial         HWType
0:3:1 QLOGIC QLE8242 58  0.0.0.0  PCGLT0ARC1K3SK CNA
0:3:2 QLOGIC QLE8242 58  0.0.0.0  PCGLT0ARC1K3SK CNA
1. Set up the IP and netmask address on the iSCSI target port using the InForm OS CLI
controliscsiport command. Here is an example:
# controliscsiport addr 10.1.0.110 255.0.0.0 -f 0:3:1
# controliscsiport addr 11.1.0.110 255.0.0.0 -f 1:3:1
2. To verify the iSCSI target port configuration, issue the InForm OS CLI showport -iscsi
command.
# showport -iscsi
N:S:P State IPAddr Netmask Gateway TPGT MTU Rate DHCP iSNS_Prim iSNS_Sec iSNS_Port
0:3:1 ready 10.1.0.110 255.0.0.0 0.0.0.0 31 1500 1Gbps 0 0.0.0.0 0.0.0.0 3205
0:3:2 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0 32 1500 n/a 0 0.0.0.0 0.0.0.0 3205
1:3:1 ready 11.1.0.110 255.0.0.0 0.0.0.0 131 1500 1Gbps 0 0.0.0.0 0.0.0.0 3205
1:3:2 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0 132 1500 n/a 0 0.0.0.0 0.0.0.0 3205
NOTE: Make sure the IP switch ports where the HP 3PAR Storage System iSCSI target ports
and the iSCSI initiator hosts are connected can communicate with each other. If the host
is already connected to the IP fabric or switch and its Ethernet interface has been configured,
you can use the ping command on the Solaris host for this purpose.
Configuring the HP 3PAR Storage System iSCSI Ports
3. If the Solaris host uses the Internet Storage Name Service (iSNS) to discover the target port,
configure the iSNS server IP address on the target port by issuing the InForm OS CLI
controliscsiport command with the isns parameter.
# controliscsiport isns 11.0.0.200 -f 1:3:1
# showport -iscsi
N:S:P State IPAddr Netmask Gateway TPGT MTU Rate DHCP iSNS_Prim iSNS_Sec iSNS_Port
1:3:1 ready 11.1.0.110 255.0.0.0 0.0.0.0 131 1500 1Gbps 0 11.0.0.200 0.0.0.0 3205
NOTE: The Solaris OS does not have its own iSNS server, so a Windows server that has
been installed with the iSNS feature must be used to provide the iSNS server functions instead.
4. Each HP 3PAR Storage System iSCSI port has a unique name, port location, and serial number
as part of its iqn iSCSI name. Use the InForm OS CLI showport command with the
-iscsiname parameter to get the iSCSI name.
# showport -iscsiname
N:S:P IPAddr ---------------iSCSI_Name---------------
0:3:1 10.1.0.110 iqn.2000-05.com.3pardata:20310002ac00003e
0:3:2 0.0.0.0 iqn.2000-05.com.3pardata:20320002ac00003e
1:3:1 11.1.0.110 iqn.2000-05.com.3pardata:21310002ac00003e
1:3:2 0.0.0.0 iqn.2000-05.com.3pardata:21320002ac00003e
5. Use the ping command on the Solaris host to verify that the HP 3PAR Storage System target
is reachable, and use the route get <IP> command to check that the configured network
interface is used for the destination route.
Example: After the host and HP 3PAR Storage System ports have been configured, 11.1.0.110
is the HP 3PAR Storage System target IP address, 11.1.0.40 is the host IP address, and the
host uses the ce2 network interface to route traffic to the destination.
# ping 11.1.0.110
11.1.0.110 is alive
# route get 11.1.0.110
route to: 11.1.0.110
destination: 11.0.0.0
mask: 255.0.0.0
interface: ce2
flags: <UP,DONE>
As an alternative, you can use controliscsiport to ping the host from the HP 3PAR
Storage System ports.
# controliscsiport ping [<count>] <ipaddr> <node:slot:port>
# controliscsiport ping 1 11.1.0.40 1:3:1
Ping succeeded
For information on setting up target discovery on the Solaris host, see Section (page 47).
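Steps 4 and 5 above can be combined: given captured showport -iscsiname output, emit a reachability check for every target port that has an IP address configured. The sketch below is a minimal POSIX sh example using the sample values from this section; it only prints the ping and route get commands for review, it does not run them.

```shell
#!/bin/sh
# Sketch: read saved "showport -iscsiname" output and emit the step-5
# reachability checks for each target port with a configured IP address.
# The sample data mirrors the example above; on a live system, pipe the
# real command output instead of using the here-document.
showport_iscsiname() {
cat <<'EOF'
N:S:P IPAddr ---------------iSCSI_Name---------------
0:3:1 10.1.0.110 iqn.2000-05.com.3pardata:20310002ac00003e
0:3:2 0.0.0.0 iqn.2000-05.com.3pardata:20320002ac00003e
1:3:1 11.1.0.110 iqn.2000-05.com.3pardata:21310002ac00003e
1:3:2 0.0.0.0 iqn.2000-05.com.3pardata:21320002ac00003e
EOF
}

# Emit (but do not run) a ping and a route check per configured port.
emit_checks() {
  showport_iscsiname | awk 'NR > 1 && $2 != "0.0.0.0" {
    print "ping " $2
    print "route get " $2
  }'
}

emit_checks
```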
The following steps show how to create the host definition for an iSCSI connection.
1. You can verify that the iSCSI Initiator is connected to the iSCSI target port by using the InForm
OS CLI showhost command.
# showhost
Id Name
--
2. Create an iSCSI host definition entry by issuing the InForm OS CLI createhost -iscsi
<hostname> <host iSCSI name> command.
# createhost -iscsi solaris-host-01 iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940
Setting default host persona 1 (Generic)
# showport -iscsi
N:S:P State IPAddr Netmask Gateway TPGT MTU Rate DHCP iSNS_Prim iSNS_Sec iSNS_Port
0:3:1 ready 10.100.0.101 255.0.0.0 0.0.0.0 31 1500 1Gbps 0 0.0.0.0 0.0.0.0 3205
1:3:1 ready 10.101.0.201 255.0.0.0 0.0.0.0 131 1500 1Gbps 0 0.0.0.0 0.0.0.0 3205
CAUTION: If /usr/local is a symbolic link when Host Explorer is installed, the link will be
removed and replaced by a directory, which may affect some applications. To prevent this,
reply No when asked Do you want to install these conflicting files?
during installation. Host Explorer will then install normally.
Creating an iSCSI Host Definition on an HP 3PAR Storage System Running InForm OS 3.1.x or 2.3.x
NOTE: HP recommends host persona 2 for Solaris 11 and host persona 1 for Solaris 8, 9,
and 10 (all supported MU levels). Host persona 1 for Solaris 10 is required to enable Host
Explorer functionality. However, host persona 6 is automatically assigned following a rolling
upgrade from 2.2.x. If appropriate, you can change host persona 6 after an upgrade to host
persona 2 for Solaris 11 or host persona 1 for Solaris 10. Host persona 1 enables Host
Explorer, which requires the SESLun element of Host persona 1. Newly exported VLUNs can
be seen in format by issuing devfsadm -i iscsi. To register the data VLUN 254 on
Solaris format, a host reboot is required.
NOTE: You must configure the HP 3PAR Storage System iSCSI target port(s) and establish
an iSCSI Initiator connection/session with the iSCSI target port from the host to be able to
create a host definition entry. For details, see Configuring the Host for an iSCSI Connection
(page 44).
3. Verify the host definition and the iSCSI sessions by issuing the showhost and
showiscsisession commands.
# showhost
Id Name Persona ---------------WWN/iSCSI_Name--------------- Port
        Generic iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 0:3:1
        Generic iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 1:3:1
# showiscsisession
N:S:P --IPAddr--- TPGT TSIH Conns -----------------iSCSI_Name----------------- -------StartTime-------
0:3:1 10.105.3.10   31 11351 1    iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 2010-02-25 07:47:38 PST
1:3:1 10.105.4.10  131 11351 1    iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 2010-02-25 07:47:37 PST
You will need the Host iqn name/names to create the iSCSI host definition on the HP 3PAR
Storage System. The following steps show how to create the host definition for an iSCSI connection.
1. You can verify that the iSCSI Initiator is connected to the iSCSI target port by using the InForm
OS CLI showhost command.
# showhost
Id Name -----------WWN/iSCSI_Name------------ Port
        iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 1:3:1
        iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 0:3:1
2. Create an iSCSI host definition entry by issuing the InForm OS CLI createhost -iscsi
command.
# createhost -iscsi solarisiscsi iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d
3.
Unidirectional or Host CHAP authentication is used when the HP 3PAR Storage System iSCSI
target port authenticates the iSCSI Host initiator when it tries to connect.
Bidirectional (Mutual) CHAP authentication adds a second level of security where both the
iSCSI target and host authenticate each other when the host tries to connect to the target.
b.
c. Enable CHAP as the authentication method after the secret key is set.
# iscsiadm modify initiator-node --authentication CHAP
d.
e.
NOTE: In the example above, the default target CHAP Name is the target port iSCSI name
(iqn.2000-05.com.3pardata:21310002ac00003e) and host CHAP Name is the initiator
port iSCSI name (iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d).
f.
g. Invoke devfsadm to discover the devices after the host is verified by the target.
# devfsadm -i iscsi
NOTE: The Target CHAP name is set by default to the HP 3PAR Storage System name. Use
the InForm OS CLI showsys command to determine the HP 3PAR Storage System name.
2.
3. Enable the Host CHAP authentication after the secret key is set.
# iscsiadm modify initiator-node --authentication CHAP
4.
5. Enter the Target CHAP secret key (for example, target_secret0) for each connected target.
# iscsiadm modify target-param --CHAP-secret
iqn.2000-05.com.3pardata:21310002ac00003e
<prompts for secret key>
# iscsiadm modify target-param --CHAP-secret
iqn.2000-05.com.3pardata:20310002ac00003e
<prompts for secret key>
6.
7. Set the CHAP name for the HP 3PAR Storage System for the iSCSI targets. (Use the InForm
OS CLI showsys command to determine the HP 3PAR Storage System name.)
# iscsiadm modify target-param --CHAP-name s062
iqn.2000-05.com.3pardata:21310002ac00003e
# iscsiadm modify target-param --CHAP-name s062
iqn.2000-05.com.3pardata:20310002ac00003e
8.
9. Remove and create a new iSCSI session, and invoke devfsadm -i iscsi to discover the
targets and all the LUNs.
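Steps 5 through 7 above repeat the same iscsiadm command for each connected target iqn, which lends itself to a loop. The sketch below is a minimal POSIX sh example using the system name and target iqns from this section; echo is left in front of iscsiadm so the sketch only prints the commands it would run (the CHAP secrets themselves still have to be entered interactively).

```shell
#!/bin/sh
# Sketch: apply the step-7 CHAP-name setting to every connected HP 3PAR
# target iqn in one loop. The system name (s062) and the iqn list are the
# illustrative values used in this section; remove the leading "echo" to
# actually run iscsiadm on a Solaris host.
SYS_NAME="s062"
TARGETS="iqn.2000-05.com.3pardata:21310002ac00003e
iqn.2000-05.com.3pardata:20310002ac00003e"

for t in $TARGETS; do
  echo iscsiadm modify target-param --CHAP-name "$SYS_NAME" "$t"
done
```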
Configuring CHAP Authentication (Optional)
NOTE: CHAP authentication will not be in effect for the most recently added devices until
the current connection is removed and a new connection session is enabled. To enable
authentication for all the devices, stop all associated I/O activity and unmount any file systems
before creating the new connection session. This procedure is required each time a change
is made to the CHAP configuration.
2. On the host, disable and remove the target CHAP authentication on each target.
# iscsiadm list target
# iscsiadm modify target-param -B disable iqn.2000-05.com.3pardata:21310002ac00003e
# iscsiadm modify target-param -B disable iqn.2000-05.com.3pardata:20310002ac00003e
# iscsiadm modify target-param --authentication NONE
iqn.2000-05.com.3pardata:21310002ac00003e
# iscsiadm modify target-param --authentication NONE
iqn.2000-05.com.3pardata:20310002ac00003e
# iscsiadm modify initiator-node --authentication NONE
3.
Solaris 8/9
Install the appropriate Sun SAN software package for Solaris 8 or 9 hosts available on the following
website:
http://www.oracle.com/us/products/servers-storage/storage/storage-networking/index.htm
Consult the Solaris OS minimum patch listings in Chapter 6 (page 53).
Emulex LPFC driver package(s) and driver installation instructions are available at:
http://www.emulex.com/
QLogic QLA (qla2300) driver package(s) and driver installation instructions are available at:
http://www.qlogic.com/
NOTE: The SAN package may include an updated release of the emlxs/qlc drivers (also known
as the Leadville drivers).
For JNI HBAs, install the JNIfcaPCI (FCI-1063) or JNIfcaw (FC64-1063) driver package for the
Solaris OS. The driver install package files fca-pci.pkg and fcaw.pkg contain the JNIfcaPCI,
JNIfcaw and JNIsnia drivers.
NOTE: The JNI HBA drivers are currently only supported for Solaris OS versions 8 and 9 in
InForm OS 2.3.x. Refer to the InForm OS support matrices for updated information and support
for Solaris 10.
For more details, consult the appropriate driver installation notes in this section for the type of HBA
being installed. You can also consult the HP SPOCK website to determine which drivers are
appropriate for a given HBA or version of the Solaris OS:
www.hp.com/storage/spock
Direct Connect
Configured by editing /kernel/drv/lpfc.conf and then running the update_drv utility. On
versions of Solaris earlier than version 9, you must manually reboot the host server to update
the host with the modified driver configuration settings.
Fabric Connect
Configured by editing /kernel/drv/lpfc.conf and then running the update_drv utility. On
versions of Solaris earlier than version 9, you must manually reboot the Solaris host to update
it with the modified driver configuration settings. The sd.conf file is read by the sd driver at
boot time, so supporting entries for new LUNs must be in place before the server is rebooted.
Add entries to the /kernel/drv/sd.conf file between the boundary comments generated by
the Emulex driver package during installation.
# Start lpfc auto-generated configuration -- do NOT alter or delete this line
name="sd" parent="lpfc" target=0 lun=0;
name="sd" parent="lpfc" target=0 lun=1;
...
name="sd" parent="lpfc" target=0 lun=255;
# End lpfc auto-generated configuration -- do NOT alter or delete this line
A line is required for each LUN number (pre 6.20 driver requirement). For fabric configurations,
entries must be made for all target LUNs that will be exported from the HP 3PAR Storage System
to the Solaris host. These entries can be restricted to the Emulex lpfc driver only, so a useful strategy
is to add entries for all possible LUNs (0 to 255) on target 0. Testing at HP did not reveal any
noticeable increase in server boot time due to the probing of non-existent LUNs.
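The 256-entry block shown above is tedious to maintain by hand. As a sketch (not part of the Emulex driver documentation), a short shell loop can generate the per-LUN lines; review the output and paste it between the lpfc boundary comments in /kernel/drv/sd.conf:

```shell
# Generate sd.conf entries for LUNs 0-255 on target 0 (sketch only; paste the
# output between the lpfc auto-generated boundary comments in sd.conf)
i=0
while [ $i -le 255 ]; do
    echo "name=\"sd\" parent=\"lpfc\" target=0 lun=$i;"
    i=`expr $i + 1`
done
```

The same loop works for the JNI fcaw/fca-pci entries described later in this chapter by substituting the parent driver name.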
WARNING! The installation of version 6.21g of the lpfc driver for Solaris may differ significantly
from previous releases. Follow the driver instructions precisely during initial installation.
Failure to follow the proper installation steps could render your system inoperable.
NOTE: Emulex lpfc drivers 6.20 and above do not require LUN and Target entries in the
/kernel/drv/sd.conf file. The lpfc driver can support up to 256 targets, with a maximum of
256 LUNs per target; additional LUNs will not be visible on the host. Solaris 8/9 LUN discovery
for driver 6.21g requires the following command:
/opt/HBAnyware/hbacmd RescanLuns <hba WWPN> <target WWPN>
HBAnyware software is available from the Emulex lpfc driver download site: http://
www.emulex.com/
NOTE: When adding specific entries in the sd.conf file for each LUN number that is expected
to be exported from the HP 3PAR Storage System ports, new entries have to be added each time
additional VLUNs are exported with new LUNs. Unless the host port will be communicating with
more than one HP 3PAR Storage System port, Target=0 entries are sufficient. If a host port is
communicating with more than a single HP 3PAR Storage System, then specific entries are required
for the other targets (pre 6.20 driver requirement).
WARNING! Any changes to the driver configuration file must be tested before going into a
production environment.
WARNING! DO NOT LOWER the qla2300.conf variable hba0-link-down-timeout
below 30 seconds for Solaris 9 hosts.
Installing the HBA Drivers
Direct Connect
Configured by editing the /kernel/drv/fca-pci.conf or /kernel/drv/fcaw.conf files:
fca_nport = 0; # initialize on a loop
public_loop = 0; # initialize according to what fca_nport is set to
Fabric Connect
Configured by editing the /kernel/drv/fca-pci.conf or /kernel/drv/fcaw.conf files:
fca_nport = 1; # initialize as N_Port
public_loop = 0; # initialize according to what fca_nport is set to
The JNIsnia package is included with the driver installation but is optional and is not required to
access the HP 3PAR Storage System from the Solaris host. The driver packages and driver installation
instructions are available at: http://www.amcc.com
The fca-pci.conf and fcaw.conf files will be installed in the /kernel/drv directory when
the driver package is installed.
In both direct connect and fabric configurations, (where each host HBA port logically connects to
only one HP 3PAR Storage System port), each initiator (host server HBA port) can only discover
one target (HP 3PAR Storage System port).
For these configurations, persistent target binding in the HBA driver, although possible, is not
required since there will only be one target found by each host HBA driver instance. When the
binding parameters in the /kernel/drv/fca-pci.conf or /kernel/drv/fcaw.conf files
are left at their default settings, each instance of the JNI driver will automatically discover one
HP 3PAR Storage System port and assign it a target value of 0 each time the Solaris host is booted.
The following example shows the default fca-pci.conf settings:
def_hba_binding = "fca-pci*";
def_wwpn_binding = "$xxxxxxxxxxxxxxxx";
def_wwnn_binding = "$xxxxxxxxxxxxxxxx";
def_port_binding = "xxxxxx";
Default fcaw.conf settings:
def_hba_binding = "fcaw*";
def_wwpn_binding = "$xxxxxxxxxxxxxxxx";
def_wwnn_binding = "$xxxxxxxxxxxxxxxx";
def_port_binding = "xxxxxx";
If changes in the mapping of a device to its device node (/dev/rdsk/cxtxdx) cannot be tolerated
for your configuration, you can assign and lock target IDs based on the HP 3PAR Storage System
port's World Wide Port Name by adding specific target binding statements in the
/kernel/drv/fca-pci.conf or /kernel/drv/fcaw.conf file. Refer to the fca-pci or fcaw driver
documentation and the /opt/JNIfcaPCI/technotes or /opt/JNIfcaw/technotes files
for more information about mapping discovered targets to specific target IDs on the host.
The Solaris sd SCSI driver will only probe for targets and LUNs that are configured in the
/kernel/drv/sd.conf file. For fabric configurations, entries must exist for all target/LUN
combinations that are exported from the HP 3PAR Storage System to the Solaris host. The sd.conf
file is read by the sd driver at boot time, so supporting entries for new LUNs must be in place
before the server is rebooted. These entries can be restricted to the JNI fca-pci or fcaw driver
only, so a useful strategy is to add entries for all possible LUNs (0 to 255) on target "0".
For instance, add the following entries to the sd.conf file:
JNI fcaw driver:
name="sd" parent="fcaw" target=0 lun=0;
...
name="sd" parent="fcaw" target=0 lun=255;
Testing at HP did not reveal any noticeable increase in server boot time due to the probing of
non-existent LUNs.
For some installations, you may want to place specific entries for the actual LUN numbers exported
from the HP 3PAR Storage System ports in the sd.conf file. However, this approach requires
additional entries and a reboot of the Solaris host when new VLUNs are later exported with new
LUN numbers.
NOTE: Each target/LUN entry in sd.conf for a non-existent LUN (a LUN that has not yet been
exported from the HP 3PAR Storage System) will result in probe fail messages from the fca-pci or
fcaw driver in the /var/adm/messages file and on the server console each time the driver scans
for devices in response to the Solaris devfsadm command. These messages can be minimized or
eliminated by populating /kernel/drv/sd.conf with fewer entries. HP recommends that the
sd.conf file be populated with all possible target/LUN combinations that may be exported from
the HP 3PAR Storage System, despite the probe fail messages, to avoid having to reboot the host
server to register newly exported HP 3PAR Storage System LUNs.
NOTE: Target=0 entries are sufficient unless a host port will detect more than one HP 3PAR
Storage System port, or the one that is detected has been persistently bound to a different target
number. In this case, entries will be required for other targets.
The optional EZFibre GUI utility is available at http://www.amcc.com/. This utility provides a view
of each JNI HBA port on the server and the targets and LUNs each has acquired. This utility can
also be used to statically bind discovered targets and LUNs if that is a requirement of your
specific configuration.
Direct Connect
Configured by editing the /kernel/drv/jnic146x.conf file:
FcLoopEnabled = 1;
FcFabricEnabled = 0;
automap = 2;
Fabric Connect
Configured by editing the /kernel/drv/jnic146x.conf file:
FcLoopEnabled = 0;
FcFabricEnabled = 1;
automap = 1;
Install the JNI driver package version 5.3.1.3. The driver install package file JNIC146x.pkg
contains the JNIC146x and JNIsnia packages. The JNIsnia package is optional and is not
required to access the HP 3PAR Storage System from the Solaris host. The driver packages and
driver installation instructions are available at http://www.amcc.com. The jnic146x.conf file
will be installed in the /kernel/drv directory as the driver package is installed. Edit the
/kernel/drv/jnic146x.conf file by adding the following entries.
Direct Connect
FcLoopEnabled=1;    # enable loop mode
FcFabricEnabled=0;  # disable fabric mode
automap=2;          # automap targets/LUNs
For Fabric Connect:
FcLoopEnabled=0;    # disable loop mode
FcFabricEnabled=1;  # enable fabric mode
automap=1;          # automap targets/LUNs
Unload and reload the jnic146x driver so that the edits to /kernel/drv/jnic146x.conf take
effect.
# /opt/JNIC146x/jnic146x_unload
# /opt/JNIC146x/jnic146x_load
Verify that each JNI HBA is loaded with FCode firmware version 3.9.1. There will be messages for
each HBA port in the /var/adm/messages file.
NOTE: If the HBAs are not using FCode firmware version 3.9.1 or later, upgrade the FCode
firmware. FCode firmware and installation instructions are available as install packages (specific
to each HBA model) from:
http://www.amcc.com/
JNIC146x driver versions 5.3 and greater do not require LUN and target entries in
the /kernel/drv/sd.conf file.
The optional EZFibre GUI utility is available at http://www.amcc.com/. This utility gives a view
of each JNI HBA port in the server and the targets and LUNs each has acquired. This utility can
also be used to statically target bind discovered targets and LUNs if that is a requirement of your
specific configuration.
Perform a reconfigure reboot of the host server (reboot -- -r) or create the /reconfigure
file so that the next server boot will be a reconfiguration boot.
# touch /reconfigure
Fabric Connect
FcLoopEnabled = 0; # use for fabric connections
FcFabricEnabled = 1; # use for fabric connections
automap = 1; # use for fabric connections
Relevant messages are recorded in the /var/adm/messages file for each port that has an
associated driver and can be useful for verification and troubleshooting.
NOTE: The Solaris-supplied emlxs driver may bind to the Emulex HBA ports and prevent the
Emulex lpfc driver from attaching to the HBA ports. Emulex provides an emlxdrv utility as part of
the "FCA Utilities" available for download from www.emulex.com. You can use the emlxdrv utility
to adjust the driver bindings on a per HBA basis on the server between the Emulex lpfc driver and
the Sun emlxs driver. You may need to use this utility if the lpfc driver does not bind to the
Emulex-based HBAs upon reconfigure reboot. Solaris 8 requires that the emlxdrv package be
removed before installing the lpfc driver.
Install the VRTS3par package from the VRTS3par_SunOS_50 distribution package for
Veritas Volume Manager versions 5.0 and 5.0MP1.
The following setting on the enclosure is required if long failback times are a concern.
This enclosure setting can be used with 5.0GA, 5.0MP1, 5.0MP3, and 5.1GA VxDMP:
# vxdmpadm setattr enclosure <name> recoveryoption=timebound iotimeout=60
If this option is not set, I/O will eventually fail back to the recovered paths. The default value
for the enclosure is "fixed retry=5".
To return the setting to default:
# vxdmpadm setattr enclosure <name> recoveryoption=default
WARNING! Failure to claim the HP 3PAR Storage System as an HP 3PAR array will affect the
way devices are discovered by the multipathing layer.
WARNING! The minimum supported software installation version for VxDMP_5.0MP3 is
VxDMP_5.0MP3_RP1_HF3 with vxdmpadm settune dmp_fast_recovery=off. This tunable
can be left at default values with later versions VxDMP_5.0MP3_RP2_HF1 and
VxDMP_5.0MP3_RP3.
CAUTION: You may need to reboot the host if you wish to reuse VLUN numbers with the following
VxDMP versions: VxDMP_5.0MP3_RP3 or VxDMP_5.1. Veritas has enhanced data protection
code that may be triggered if a VLUN number is reused, reporting "Data Corruption Protection
Activated".
Solaris 11
Edit the /kernel/drv/fp.conf file by removing the hash from the line so that it reads as follows:
mpxio-disable="no";
Solaris 8/9/10
Edit the /kernel/drv/scsi_vhci.conf file to allow the SSTM to recognize HP 3PAR VLUNs
on the host system. An additional variable change is required for Solaris 8 and 9.
Solaris 10 and 11
Additionally, to enable the SSTM for all HBAs on Solaris 10 and 11 systems, issue the stmsboot
-e command to enable multipathing (stmsboot -d will disable multipathing).
CAUTION: When running Solaris 10 MU7, enabling SSTM on a fresh install using stmsboot
-e can corrupt the fp.conf configuration. To avoid this, issue stmsboot -d -D fp to disable
the fp mpxio. You should then be able to run stmsboot -e successfully without loss of the fp
HBA. For more information on this workaround, consult the following website:
http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=8de823511efa700410638295d36c?bug_id=6811044
InForm OS 2.2.x
device-type-scsi-options-list =
"3PARdataVV", "symmetric-option";
symmetric-option = 0x1000000;
NOTE:
After editing the configuration file, perform a reconfiguration reboot of the Solaris host.
SPARC:
Issue reboot -- -r
x64/x86:
Create the /reconfigure file so that the next server boot will be a reconfiguration boot.
# touch /reconfigure
NOTE: For detailed installation instructions, consult the Solaris Fiber Channel and Storage
Multipathing Administration Guide, located on the following website:
http://www.sun.com/storage/san/
This document includes instructions for enabling Solaris I/O multipathing on specific Sun HBA
ports, but does not apply for other HBAs.
While the HP 3PAR Storage System is running, departing and returning HP 3PAR Storage System
ports (e.g., due to an unplugged cable) are tracked by their World Wide Port Name (WWPN). The WWPN
of each HP 3PAR Storage System port is unique and constant, which ensures correct tracking of a
port and its LUNs by the host HBA driver.
However, in configurations where multiple HP 3PAR Storage System ports are available for
discovery, some specific target binding may be necessary. The following section describes
considerations for implementing persistent binding for each type of HBA that is supported by the
Solaris OS.
If a fabric zoning relationship exists such that a host HBA port has access to multiple targets (for
example, multiple ports on the HP 3PAR Storage System), the driver will assign target IDs (cxtxdx)
to each discovered target in the order that they are discovered. In this case, the target ID for a
given target can change as targets leave the fabric and return, or when the host is rebooted while
some targets are not present. If changes in the mapping of a device to its device node
(/dev/rdsk/cxtxdx) cannot be tolerated for your configuration, you can assign and lock the
target IDs based on the HP 3PAR Storage System port's World Wide Port Name by adding specific
target binding statements in the /kernel/drv/qla2300.conf file. These statements associate a
specified target ID assignment to a specified WWPN for a given instance of the qla driver (a host
HBA port).
For example, to bind HP 3PAR Storage System WWPN 20310002ac000040 to target ID 6 for
qla2300 instance "0", you would add the following statement to /kernel/drv/qla2300.conf:
hba0-SCSI-target-id-6-fibre-channel-port-name="20310002ac000040";
With this binding statement active, a target with a WWPN of 20310002ac000040 that is
discovered on the host HBA port for driver instance 0 will always receive a target ID assignment
of 6, thus yielding a device node like the one shown in the following example.
/dev/rdsk/c4t6d20s2
hba0-persistent-binding-configuration=0; # 0 = reports discovery of bound and non-bound devices to the OS
hba0-persistent-binding-by-port-ID=0; # persistent binding by FC port ID disabled
The current matching of HBA driver instances to discovered target WWPNs (for connected
devices) can be obtained from entries in the /var/adm/messages file generated during the last
server boot.
# grep fibre-channel-port /var/adm/messages
sunb1k-01 qla2300: [ID 558211 kern.info]
hba0-SCSI-target-id-0-fibre-channel-portname="20310002ac000040";
sunb1k-01 qla2300: [ID 558211 kern.info]
hba1-SCSI-target-id-0-fibre-channel-portname="21510002ac000040";
New or edited binding statement entries can be made active without rebooting the Solaris host by
issuing the following command:
# /opt/QLogic_Corporation/drvutil/qla2300/qlreconfig -d qla2300
CAUTION: This procedure should not be attempted while I/O is running through the qla driver
instances as it will briefly interrupt that I/O and may also change a discovered device's device
nodes if there have been changes made to the persistent binding statements.
While running with the persistent-binding-only option enabled, only persistently bound targets and
their LUNs will be reported to the operating system.
If the persistent binding option is disabled in /kernel/drv/qla2300.conf, changes to persistent
target binding will only take effect during the next host server reboot.
hba0-persistent-binding-configuration=0;
While running with the persistent binding option disabled, both persistently bound targets and
their LUNs and non-bound targets and their LUNs are reported to the operating system.
For information about mapping discovered targets to specific target IDs on the host, consult the
/opt/QLogic_Corporation/drvutil/qla2300/readme.txt file that is loaded with the
qla driver.
For more information on setting the persistent target binding capabilities of the QLogic HBA qla
driver, consult the QLogic documentation that is available on the following website:
http://www.qlogic.com/
or /kernel/drv/fcaw.conf file. Refer to the fca-pci or fcaw driver documentation and file
/opt/JNIfcaPCI/technotes or /opt/JNIfcaw/technotes for more information about
mapping discovered targets to specific target IDs on the host.
There is a delay of fp_offline_ticker before fp notifies fcp of the link outage (default 90
seconds). There is a further delay of fcp_offline_delay before fcp offlines LUNs (default 20
seconds). You can change these settings by making the necessary edits to the
/kernel/drv/fcp.conf and /kernel/drv/fp.conf files.
For example, you could edit the fcp.conf file with the following fcp_offline_delay setting
to change the timer to 10 seconds:
fcp_offline_delay=10;
Setting this value outside the range of 10 to 60 seconds will log a warning message to the
/var/adm/messages file.
Also edit the fp.conf file with the following fp_offline_ticker setting to change the timer
to 50 seconds:
fp_offline_ticker=50;
Setting this value outside the range of 10 to 90 seconds will log a warning message to the
/var/adm/messages file.
In the example above, the settings reduce the timers by a total of (20 - 10) + (90 - 50) = 50
seconds.
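The arithmetic above can be restated as a quick check; the sketch below simply recomputes the example values (90/20 defaults versus the tuned 50/10):

```shell
# Compare the default and tuned total LUN-offline delays (example values only)
default_total=`expr 90 + 20`   # fp_offline_ticker + fcp_offline_delay defaults
tuned_total=`expr 50 + 10`     # fp_offline_ticker=50; fcp_offline_delay=10
expr $default_total - $tuned_total   # prints 50
```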
Starting with Sun StorageTek SAN 4.4.11 and Solaris 10 U3, these parameters are tunable.
They can be tuned by modifying the respective driver.conf file. The range of allowed
values has been chosen considering the FC standards limits. Both can be tuned down, but not
below 10 seconds (the driver code will either enforce a minimum value of 10 seconds, issue a
warning at boot time, or both).
WARNING! Tuning these parameters may adversely affect the system. If you are optimizing
your storage configuration for stability, HP recommends staying with the default values for these
tunables. Any changes to these tunables are made at your own risk and could have unexpected
consequences (e.g., fatal I/O errors when attempting to perform online firmware upgrades to
attached devices, or during ISL or other SAN reconfigurations). Changes could also affect system
performance due to excessive path failover events in the presence of minor intermittent faults.
Test any changes in your standard configuration and environment to determine the best tradeoff
between a quicker failover and resilience to transient failures.
Refer to http://www.sun.com/bigadmin/features/hub_articles/tuning_sfs.pdf
for the implications of changes to your host server.
CAUTION: It is not presently possible on Solaris to lower I/O stalls on iSCSI attached array paths
due to a Solaris related bug (Bug ID: 6497777). Until a fix is available in Solaris 10 update 9,
the connection timeout is fixed at 180 seconds and cannot be modified.
Solaris 11
Solaris 10 (MU5 and later for up to 1 Gb iSCSI; MU9 and later for 10 Gb iSCSI)
Sun iSCSI Device Driver and Utilities Patch 119090-26 (SPARC) or 119091-26 (x86)
5 directories
5 executables
1005 blocks used (approx)
# modinfo | grep iscsi
104 7bee0000 2b7e8 96 1 iscsi (Sun iSCSI Initiator v20071207-0)
Connect the Solaris (iSCSI Initiator) host's CAT5/fiber cables and the HP 3PAR Storage System
iSCSI target ports' CAT5/fiber cables to the Ethernet switches.
If you are using VLANs, make sure that the switch ports (where the HP 3PAR Storage System
iSCSI target ports and iSCSI Initiators are connected) are in the same VLANs and/or that you
can route the iSCSI traffic between the iSCSI Initiators and the HP 3PAR Storage System iSCSI
target ports. Once the iSCSI Initiator and HP 3PAR Storage System iSCSI target ports are
configured and connected to the switch, you can use the ping command on the iSCSI Initiator
host to make sure that it sees the HP 3PAR Storage System iSCSI target ports.
NOTE: Ethernet switch VLAN and routing setup and configuration are beyond the scope of
this document. Consult your switch manufacturer's documentation for instructions on how to
set up VLANs and routing.
2.
Create the two interfaces required for iSCSI on the host. In the following example, the oce2
and oce3 interfaces are used. The available network devices and their states:
SPEED   DUPLEX    DEVICE    STATE
1000    full      e1000g1   up
10000   full      oce2      up
0       unknown   e1000g2   unknown
1000    full      e1000g0   up
10000   full      oce3      up
0       unknown   e1000g3   unknown
bash-3.00# ipadm create-ip net2
bash-3.00# ipadm create-ip net3
bash-3.00# ipadm create-addr -T static -a 10.100.11.3/24 net2/ipv4
bash-3.00# ipadm create-addr -T static -a 10.100.12.3/24 net3/ipv4
3.
Check that the iSCSI interfaces are created and configured correctly. For example:
bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
4.
Add the IP addresses and a symbolic name for the iSCSI interfaces to the hosts file. For
example:
::1            localhost
127.0.0.1      localhost loghost
10.112.2.174   sunx4250-01
10.100.11.3    net2
10.100.12.3    net3
5.
Identify the IP address and netmask for both iSCSI host server interfaces in the netmasks file.
For example:
# cat /etc/netmasks
#
# The netmasks file associates Internet Protocol (IP) address
# masks with IP network numbers.
#
# network-number netmask
#
# The term network-number refers to a number obtained from the Internet Network
# Information Center.
#
# Both the network-number and the netmasks are specified in
# "decimal dot" notation, e.g.:
#
10.112.0.0     255.255.192.0
10.100.11.0    255.255.255.0
10.100.12.0    255.255.255.0
2.
Check that the iSCSI interfaces are created and configured correctly.
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.10.205 netmask ffffff00 broadcast 192.168.10.255 ether
0:14:4f:b0:53:4c
bge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.105.1.10 netmask ffffff00 broadcast 10.105.1.255 ether
0:14:4f:b0:53:4d
bge2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 10.105.2.10 netmask ffffff00 broadcast 10.105.2.255 ether
0:14:4f:b0:53:4e
3.
Add the IP addresses and a symbolic name for the iSCSI interfaces to the hosts file.
::1              localhost
127.0.0.1        localhost
192.168.10.206   sqa-sunv245
10.105.1.10      bge1
10.105.2.10      bge2
4.
Create the following files for both iSCSI interfaces on the host.
/etc/hostname.bge1 with contents:
10.105.1.10 netmask 255.255.255.0
/etc/hostname.bge2 with contents:
10.105.2.10 netmask 255.255.255.0
5.
Identify the IP address and netmask for both iSCSI host server interfaces in the netmasks file.
bash-3.00# more /etc/netmasks
#
# The netmasks file associates Internet Protocol (IP) address
# masks with IP network numbers.
#
# network-number netmask
#
# The term network-number refers to a number obtained from the Internet Network
# Information Center.
#
# Both the network-number and the netmasks are specified in
# "decimal dot" notation, e.g.:
#
128.32.0.0     255.255.255.0
10.105.1.10    255.255.255.0
10.105.2.10    255.255.255.0
NOTE: The Solaris OS does not currently support advertisement of the iSNS server address
through DHCP, although support may be added in the future. The Solaris OS does not support
Service Location Protocol (SLP) discovery of the iSNS server address.
The HP 3PAR Storage System supports all of the above discovery methods. For details on iSCSI
initiator configuration, see the System Administration Guide: Devices and File Systems and refer
to the chapter Solaris iSCSI Initiators (Tasks), available on the following website:
http://docs.sun.com
CAUTION: Configuring both static and dynamic device discovery for a given target is not
recommended since it can cause problems communicating with the iSCSI target device.
2.
Define the static target address. Use the showport -iscsiname command to get the
HP 3PAR Storage System target iSCSI name.
# iscsiadm add static-config
iqn.2000-05.com.3pardata:21310002ac00003e,11.1.0.110:3260
4.
Verify that an iSNS server IP address has been configured on the target port using the
InForm OS CLI controliscsiport command.
Verify that the target is pingable.
# ping 11.1.0.110
5.
After configuring the discovery method, issue devfsadm for the first time to cause the host to
log in to the target (HP 3PAR Storage System) and discover it.
# devfsadm -i iscsi
Once the target is discovered and configured, events such as a host reboot, an HP 3PAR Storage
System node going down, or an HP 3PAR Storage System target reboot cause the host to
automatically rediscover the target without the need to issue devfsadm. However, if any change
is made to the target discovery address or method, a devfsadm command must be issued to
reconfigure the altered discovery address.
3.
The Solaris iSCSI initiator sets the Max Receive Data Segment Length target parameter
to a value of 8192 bytes; this variable determines the amount of data the HP 3PAR Storage
System can receive from or send to the Solaris host in a single iSCSI PDU. This parameter value
should be changed to 65536 bytes for better I/O throughput and the capability to handle
large I/O blocks. The following command changes the parameter and should be issued for
each individual target port.
# iscsiadm modify target-param -p maxrecvdataseglen=65536 <target iqn name>
Example:
a. List the default target settings used by the iSCSI Initiator.
# iscsiadm list target-param -v
Target: iqn.2000-05.com.3pardata:21310002ac00003e
--Login Parameters (Default/Configured):
Max Receive Data Segment Length: 8192/-
b.
Change the value from 8192 to 65536 for all target ports.
# iscsiadm modify target-param -p maxrecvdataseglen=65536
iqn.2000-05.com.3pardata:21310002ac00003e
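Because the setting is applied per target port, a loop over the exported target IQNs saves retyping. A hedged sketch (the IQNs below are examples from this guide; the loop only echoes each command for review — remove the echo to actually run iscsiadm):

```shell
# Dry-run: print the iscsiadm command for each target IQN (example IQNs;
# substitute the IQNs reported by your HP 3PAR Storage System ports)
for iqn in iqn.2000-05.com.3pardata:21310002ac00003e \
           iqn.2000-05.com.3pardata:20310002ac00003e; do
    echo iscsiadm modify target-param -p maxrecvdataseglen=65536 $iqn
done
```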
4.
Issue the iscsiadm list target -v command to list all the negotiated login parameters:
# iscsiadm list target -v
Target: iqn.2000-05.com.3pardata:21310002ac00003e
Alias:
TPGT: 1
ISID: 4000002a0000
Connections: 1
CID: 0
IP address (Local): 11.1.0.40:33672
IP address (Peer): 11.1.0.110:3260
Discovery Method: SendTargets
Login Parameters (Negotiated):
Data Sequence In Order: yes
Data PDU In Order: yes
Default Time To Retain: 20
Default Time To Wait: 2
Error Recovery Level: 0
First Burst Length: 65536
Immediate Data: no
Initial Ready To Transfer (R2T): yes
Max Burst Length: 262144
Max Outstanding R2T: 1
Max Receive Data Segment Length: 65536
Max Connections: 1
Header Digest: NONE
Data Digest: NONE
5.
(Optional) You can enable CRC32 verification on the datadigest (SCSI data) and headerdigest
(SCSI packet header) of an iSCSI PDU in addition to the default TCP/IP checksum. However,
enabling this verification will cause a small degradation in the I/O throughput.
The following example modifies the datadigest and headerdigest for the initiator:
# iscsiadm modify initiator-node -d CRC32
# iscsiadm modify initiator-node -h CRC32
# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d
Initiator node alias:
Login Parameters (Default/Configured):
Header Digest: NONE/CRC32
Data Digest: NONE/CRC32
# iscsiadm list target -v
Target: iqn.2000-05.com.3pardata:20310002ac00003e
Login Parameters (Negotiated):
Header Digest: CRC32
Data Digest: CRC32
2.
For Solaris 10 and 11, make sure that multipathing is enabled in the iSCSI configuration file
/kernel/drv/iscsi.conf; it is enabled by default and should match the following
example:
name="iscsi" parent="/" instance=0;
ddi-forceattach=1;
mpxio-disable="no";
3.
WARNING! If you are using Sun Multipath I/O (Sun StorEdge Traffic Manager), HP advises
that you not reuse a LUN number to export a different HP 3PAR Storage System volume, as
the Solaris format output preserves the disk serial number of the first device ever seen on that
LUN number since the last reboot. Any I/O performed against the older disk serial number is
driven to the new volume, which can cause user configuration and data integrity issues.
This is a general Solaris issue with Sun Multipath I/O and is not specific to HP 3PAR Storage
System targets.
Thinly Provisioned
Here is an example:
# createvv -cnt 5 TESTLUNS 5G
Consult the InForm Management Console Help and the HP 3PAR InForm OS Command Line Interface
Reference for complete details on creating volumes for the InForm OS version that is being used
on the HP 3PAR Storage System.
These documents are available on the HP BSC website:
http://www.hp.com/go/3par/
NOTE: The commands and options available for creating a virtual volume may vary for earlier
versions of the InForm OS.
Here is an example:
# createaldvv -cnt 5 TESTLUNs 5G
Consult the InForm Management Console Help and the HP 3PAR InForm OS Command Line Interface
Reference for complete details on creating volumes for InForm OS 2.2.3 and earlier.
port presents - created when only the node:slot:port is specified. The VLUN is visible
to any initiator on the specified port.
host set - created when a host set is specified. The VLUN is visible to the initiators of any host
that is a member of the set.
host sees - created when the host name is specified. The VLUN is visible to the initiators with
any of the host's WWNs.
matched set - created when both the host name and node:slot:port are specified. The VLUN
is visible to initiators with the host's WWNs only on the specified port.
You have the option of exporting the LUNs through the InForm Management Console or the InForm
OS CLI.
To create a host sees or host set VLUN template, issue the following command:
# createvlun [options] <VV_name | VV_set> <LUN> <host_name/set>
Here is an example:
# createvlun -cnt 5 TESTLUNs.0 0 hostname/hostdefinition
Consult the InForm Management Console Help and the HP 3PAR InForm OS Command Line Interface
Reference for complete details on exporting volumes and available options for the InForm OS
version that is being used on the HP 3PAR Storage System.
These documents are available on the HP BSC website:
http://www.hp.com/go/3par/
NOTE: The commands and options available for creating a virtual volume may vary for earlier
versions of the InForm OS.
Supports the creation of VLUNs with LUN numbers in the range from 0 to 16383.
Supports a theoretical quantity of 64K VLUNs (64-bit mode) or 4000 VLUNs (32-bit mode) per
Sun HBA.
Only 256 VLUNs can be exported on each interface. If you export more than 256 VLUNs,
VLUNs with LUN numbers above 255 will not appear on the host server.
NOTE: HP 3PAR Storage System VLUNs with LUN numbers other than 0 will be discovered even
when there is no VLUN exported with LUN 0. Without a LUN 0, error messages for LUN 0 may
appear in /var/adm/messages as the host server probes for LUN 0. HP recommends that a real
LUN 0 be exported to avoid these errors.
The total number of SCSI devices that Solaris SPARC and x64 servers can reliably discover varies
between operating system versions, architecture, and server configurations. It is possible to export
more VLUNs from the InForm OS (InForm OS VLUN = scsi device on host) than the server can
reliably manage. Contact Oracle for the maximum device capability of your installation. HP tested
up to 256 VVs, each exported as four VLUNs, resulting in the discovery of 1024 SCSI devices by
Solaris, without any issues being noted.
Virtual Volumes of 1 terabyte and larger will only be supported using the Sun EFI disk label and
will appear in the output of the Sun format command without cylinder/head geometry. EFI labeled
disks are not currently supported with Veritas Volume Manager 4.0 - 4.1. More information on
EFI disk labels can be found in Sun document 817-0798.
For configurations that use Veritas Volume Manager for multipathing, virtual volumes should be
exported down multiple paths to the host server simultaneously. To do this, create a host definition
on the HP 3PAR Storage System that includes the WWNs of multiple HBA ports on the host server.
NOTE: All I/O to an HP 3PAR Storage System port should be stopped before running any InForm
OS CLI controlport commands on that port. The InForm OS CLI controlport command
executes a reset on the storage server port while it runs and causes the port to log out of and back
onto the fabric. This event will be seen on the host as a "transient device missing" event for each
HP 3PAR Storage System LUN that has been exported on that port. In addition, if any of the
exported volumes are critical to the host server OS (e.g., the host server is booted from that volume),
the host server should be shut down before issuing the InForm OS CLI controlport command.
Even though the HP 3PAR Storage System supports the exportation of VLUNs with LUN numbers
in the range from 0 to 16383, only VLUN creation with a LUN in the range from 0 to 255 is
supported.
This configuration does support sparse LUNs (the skipping of LUN numbers).
Only 256 VLUNs can be exported on each interface. If you export more than 256 VLUNs,
VLUNs with LUNs above 255 will not appear on the Solaris host.
If you are using Sun Multipath I/O (Sun StorEdge Traffic Manager) you should avoid reusing
a LUN number to export a different HP 3PAR Storage System volume as the Solaris format
output preserves the disk serial number of the first device ever seen on that LUN number since
the last reboot.
CAUTION: If any I/O is performed on the old disk serial number, the I/O will be driven to the
new volume and can cause user configuration and data integrity issues. This is a general Solaris
issue with SUN multipath I/O and is not specific to the HP 3PAR Storage System target.
The following is an example:
The HP 3PAR Storage System volume demo.50, which has device serial number
50002AC01188003E, is exported to LUN 50. Because LUN number 50 is being used for the
first time to present a device, the format command output shows the correct
HP 3PAR Storage System volume serial number (VV_WWN).
# showvv -d
 Id Name     Rd Mstr  Prnt Roch Rwch PPrnt PBlkRemain -----VV_WWN----- -----CreationTime------
 10 demo.50  RW 1/2/3 ---  ---  ---  ---  -          50002AC01188003E Fri Aug 18 10:22:57 PDT 2006
 20 checkvol RW 1/2/3 ---  ---  ---  ---  -          50002AC011A8003E Fri Aug 18 10:22:57 PDT 2006
# showvlun -t
Lun VVname Host ------------Host_WWN/iSCSI_Name------------- Port Type
50 demo.50 solarisiscsi ---------------- --- host
# format
AVAILABLE DISK SELECTIONS:
10. c5t50002AC01188003Ed0 <3PARdata-VV-0000 cyl 213 alt 2 hd 8 sec 304>
/scsi_vhci/ssd@g50002ac01188003e
After removing the demo.50 volume and exporting checkvol at the same LUN number 50, the host
shows the new volume with the serial number of the earlier volume, demo.50
(50002AC01188003E), and not the new volume serial number (50002AC011A8003E).
# showvv -d
# showvlun -t
Lun VVname Host ------------Host_WWN/iSCSI_Name------------- Port Type
50 checkvol solarisiscsi ------------ --- host
# format
AVAILABLE DISK SELECTIONS:
10. c5t50002AC01188003Ed0 <3PARdata-VV-0000 cyl 213 alt 2 hd 8 sec 304>
/scsi_vhci/ssd@g50002ac01188003e   <-- incorrect device serial number displayed
CAUTION: Issue devfsadm -C to clear any dangling /dev links and reboot the host to correct
the device serial number or to reuse the LUN number.
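The cleanup described in the caution above might look like the following session sketch run on the Solaris host (-v makes devfsadm report what it removes; the reconfiguration reboot clears the stale serial number):

```shell
# Remove dangling /dev links for devices that no longer exist,
# then perform a reconfiguration reboot
devfsadm -Cv
reboot -- -r
```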
Solaris 10 or 11 can support the largest VV that can be created on an HP 3PAR Storage
System at 16 terabytes. VVs of 1 terabyte and larger are only supported using the Sun EFI
disk label and appear in the output of the Sun format command without cylinder/head
geometry.
All I/O to an HP 3PAR Storage System port should be halted before running the InForm OS
CLI controlport command on that port. The InForm OS CLI controlport command
executes a reset on the storage server port while it runs. The reset is done on a per-card basis,
so a reset on any port (for example, 0:3:1) will also reset the partner port (0:3:2) and cause both
ports to log out and return to a ready state. This event will be seen on the host as a transient device
missing event for each HP 3PAR Storage System LUN that has been exported on that port. In
addition, if any of the exported volumes are critical to the host server OS (e.g., the host server
is booted from that volume), the host server should be shut down before issuing the InForm
OS CLI controlport command.
Before they can be used, newly-discovered VLUNs need to be labeled using the Solaris format
or format -e command.
# cfgadm -c configure c8
# cfgadm -c configure c9
# cfgadm -c configure c10
# cfgadm -c configure c11
NOTE:
The HP 3PAR Storage System targets appear with their World Wide Port Names
associated with the controller number (cX) of the host HBA port they are logically connected to. The
host server port WWNs will now appear on the HP 3PAR Storage System in the output of the showhost command.
NOTE: The configuration will fail for visible targets that do not present any LUNs. At least one
VLUN must be exported from each HP 3PAR Storage System port before its associated host port
is configured. Running cfgadm with the configure option on an HP 3PAR Storage System port
that has no LUNs exported does not harm the system and will display the following error:
# cfgadm -c configure c9
cfgadm: Library error: failed to create device node: 23320002ac000040: Invalid argument
failed to create device node: 23520002ac000040: Invalid argument
failed to configure ANY device on FCA port
NOTE: If VCN on LUN export is not disabled on each HP 3PAR Storage System port that connects
to a host server port, newly exported HP 3PAR Storage System LUNs will result in target offline
and online messages being generated on the host server console and in the /var/adm/messages
file. HP 3PAR recommends disabling VCN on LUN export (as indicated in the HP 3PAR Storage
System Setup section of this document) to prevent these messages and the possible disruption of
I/O to already exported and registered LUNs.
Newly discovered VLUNs need to be labeled using the Solaris format command before they can
be used.
NOTE: The -a option scans all instances of the JNIC14x driver (all host HBA ports). The command
can be limited to specific instances with other options.
Newly discovered VLUNs need to be labeled using the Solaris format command before they can
be used.
Additional options can be used with the cfgadm command to display more information about the
HP 3PAR Storage System devices. For instance, issuing cfgadm with the -al option shows
configuration information for each device (or LUN):
# cfgadm -o show_FCP_dev -al
Ap_Id Type Receptacle Occupant Condition
c9::23320002ac000040,2 disk connected configured unknown
Issuing cfgadm with the -alv option shows configuration information and the physical device
path for each device (or LUN):
# cfgadm -o show_FCP_dev -alv
Ap_Id                  Receptacle   Occupant     Condition  Information
When         Type         Busy     Phys_Id
c9                     connected    configured   unknown
Dec 31 1969  fc-fabric    n        /devices/ssm@0,0/pci@1c,600000/pci@1/SUNW,qlc@4/fp@0,0:fc
c9::23320002ac000040,2 connected    configured   unknown
Dec 31 1969  disk         n        /devices/ssm@0,0/pci@1c,600000/pci@1/SUNW,qlc@4/
NOTE: If Sun StorEdge Traffic Manager is enabled, the device nodes for the HP 3PAR Storage
System devices contain a "t" component that matches the HP 3PAR Storage System virtual
volume WWN (as generated by the InForm OS CLI showvv -d command).
The HP 3PAR Storage System port is designed to respond to a SCSI REPORT LUNs command with
one LUN (LUN 0) when there is no real VV exported as LUN 0 and no other VVs exported on any
other LUN, in order to comply with the SCSI spec. A partial indication of LUN 0 will appear in the
output of cfgadm for an HP 3PAR Storage System port that has no VVs exported from it. A real
VV exported as LUN 0 can be distinguished from a non-real LUN 0 as follows:
# cfgadm -o show_FCP_dev -al
Ap_Id Type Receptacle Occupant Condition
c2 fc-fabric connected unconfigured unknown
c3 fc-fabric connected configured unknown
c3::20010002ac00003c,0 disk connected configured unknown
c3::21010002ac00003c,0 unavailable connected configured unusable
HP 3PAR Storage System port 0:0:1 has a real VV exported as LUN 0. HP 3PAR Storage System
port 1:0:1 has no VVs exported, which is indicated by an "unavailable" type and an "unusable"
condition. In fabric mode, new VLUNs that are exported while the host is running will not be
registered on the host (they do not appear in the output of the Solaris format command) until the
cfgadm -c configure command is run again:
# cfgadm -c configure c<host port designator>
# cfgadm -c configure c<host port designator>
NOTE: When HP 3PAR Storage System VVs are exported on multiple paths to the Solaris host,
(and Sun StorEdge Traffic Manager is in use for multipath failure and load balancing), each path
(cx) should be configured individually. The cfgadm command will accept multiple "cx" entries in
one invocation, but doing so may cause I/O errors to previously existing LUNs under I/O load. For
a configuration where the HP 3PAR Storage System connects at c4 and c5 on the host, and a new
VV has been exported on those paths, the following commands should be run serially:
# cfgadm -c configure c4
# cfgadm -c configure c5
NOTE: If Sun StorEdge Traffic Manager is enabled for multipathing and a device (HP 3PAR
Storage System VV) is only exported on one path, I/O to that device will be interrupted with an
error if cfgadm -c configure is run on its associated host port. This error will not occur if Sun
StorEdge Traffic Manager is disabled. This situation can be avoided by always preventing multiple
paths to a VV when Sun StorEdge Traffic Manager is enabled. Alternatively, the I/O can be halted
before cfgadm -c configure is run.
Newly discovered VLUNs need to be labeled using the Solaris format command before they can
be used. If the Solaris host is rebooted while the HP 3PAR Storage System is powered off or
disconnected, all device nodes for the host's VLUNs will be removed. If the host is subsequently
brought up, the device nodes will not be restored (VLUNs will not appear in the output from the
format command) until the cfgadm -c configure command is run for each host port. This
phenomenon would occur for any fabric attached storage device. To reestablish the connection
to the HP 3PAR Storage System devices, perform the following steps once the host has booted:
1. Run cfgadm -al on the Solaris host.
This allows the HP 3PAR Storage System to see the host HBA ports (showhost) and export
the VLUNs.
2.
After issuing this command, the volume can be admitted to and used by Veritas Volume Manager.
You can export new LUNs while the host is serving I/O on existing iSCSI LUNs. If a LUN is exported
to multiple paths on the host, and Solaris multipath I/O is enabled, only one device will be presented
in the format output. The output will be in the form of cXt<VV_WWN>dX, where VV_WWN is the
HP 3PAR Storage System volume ID.
Do not use both static and dynamic device discovery for a given target as it causes problems
communicating with the iSCSI target device.
Use devfsadm -vC to clear the /dev links of non-existing devices.
You can reduce the amount of time the format command takes to display a device or to label a
disk by enabling the no-check variable NOINUSE_CHECK=1. Enabling the no-checking option is
useful if you have a large number of devices being exported.
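For example, the variable can be set for a single format invocation (a sketch; this disables the in-use checking only for that run):

```shell
# Run format without in-use checking to speed up device listing and labeling
NOINUSE_CHECK=1 format
```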
All iSCSI error messages will be logged in /var/adm/messages.
The iscsiadm list target command lists all the connected target ports, target devices and LUN
numbers that are exported.
# iscsiadm list target -vS
Target: iqn.2000-05.com.3pardata:21310002ac00003e
Alias:
TPGT: 131
ISID: 4000002a0000
Connections: 1
CID: 0
IP address (Local): 11.2.0.101:33376
IP address (Peer): 11.2.0.111:3260
Discovery Method: SendTargets
Login Parameters (Negotiated):
Data Sequence In Order: yes
Data PDU In Order: yes
Default Time To Retain: 20
Default Time To Wait: 2
Error Recovery Level: 0
First Burst Length: 65536
Immediate Data: no
Initial Ready To Transfer (R2T): yes
Max Burst Length: 262144
Max Outstanding R2T: 1
Max Receive Data Segment Length: 65536
Max Connections: 1
Header Digest: NONE
Data Digest: NONE
LUN: 1
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c5t50002AC010A8003Ed0s2
LUN: 2
Vendor: 3PARdata
Product: VV
OS Device Name: /dev/rdsk/c5t50002AC010A7003Ed0s2
The iscsiadm command can be used to remove and modify targets and their parameters, as in
the following examples:
# iscsiadm remove discovery-address 10.106.2.12:3260
2.
Here is an example:
bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c5t5000C5000AF8554Bd0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/scsi_vhci/disk@g5000c5000af8554b
1. c5t5000C5000AF8642Fd0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/scsi_vhci/disk@g5000c5000af8642f
2. c5t5000C500077B2307d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/scsi_vhci/disk@g5000c500077b2307
3. c5t50002AC007F100AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
/scsi_vhci/ssd@g50002ac007f100af
4. c5t50002AC007F200AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
/scsi_vhci/ssd@g50002ac007f200af
5. c5t50002AC007F300AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
/scsi_vhci/ssd@g50002ac007f300af
6. c5t50002AC007F400AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
/scsi_vhci/ssd@g50002ac007f400af
7. c5t50002AC007F500AFd0 <drive not available>
/scsi_vhci/ssd@g50002ac007f500af
8. c5t50002AC007F600AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
/scsi_vhci/ssd@g50002ac007f600af
9. c5t50002AC007F700AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
/scsi_vhci/ssd@g50002ac007f700af
10. c5t50002AC007F800AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
/scsi_vhci/ssd@g50002ac007f800af
11. c5t50002AC007F900AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
/scsi_vhci/ssd@g50002ac007f900af
12. c5t50002AC007FA00AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
/scsi_vhci/ssd@g50002ac007fa00af
Specify disk (enter its number)
iSCSI does not support removal of the last available path to the device if any iSCSI LUNs are
in use (such as in a mounted file system or where associated I/O is being performed); the
attempt generates a logical unit in use error.
Example: There are two paths to the device having a mounted file system.
# iscsiadm list discovery-address
Discovery Address: 11.1.0.110:3260
Discovery Address: 10.1.0.110:3260
# iscsiadm remove discovery-address 11.1.0.110:3260
# iscsiadm remove discovery-address 10.1.0.110:3260
iscsiadm: logical unit in use
iscsiadm: Unable to complete operation
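A sketch of the workaround, assuming the LUN's file system is mounted at /mnt/iscsivol (a hypothetical mount point): stop I/O and unmount first, then remove the last discovery address.

```shell
# Release the LUN before removing its last path (mount point is illustrative)
umount /mnt/iscsivol
iscsiadm remove discovery-address 10.1.0.110:3260
```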
CAUTION: A reboot -r should be performed on the host to properly clean the system
after a VLUN has been removed.
Solaris 11
2. Check that the FCoE NICs are created and configured correctly.
bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.10.205 netmask ffffff00 broadcast 192.168.10.255
ether 0:14:4f:b0:53:4c
qlge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.105.1.10 netmask ffffff00 broadcast 10.105.1.255
ether 0:14:4f:b0:53:4d
qlge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 10.105.2.10 netmask ffffff00 broadcast 10.105.2.255
ether 0:14:4f:b0:53:4e
3. Add the IP addresses and a symbolic name for the FCoE NICs to the hosts file.
::1 localhost
127.0.0.1 localhost
192.168.10.206 sqa-sunv245
10.105.1.10 qlge0
10.105.2.10 qlge1
4. Create the following files for both FCoE NICs on the host.
/etc/hostname.qlge0
qlge0
/etc/hostname.qlge1
qlge1
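As a sketch, both files can be created from a shell; each file contains only the interface's symbolic name:

```shell
# Create the per-interface hostname files used at boot to plumb the NICs
echo qlge0 > /etc/hostname.qlge0
echo qlge1 > /etc/hostname.qlge1
```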
5. Add the IP address and netmask for both FCoE host server NICs in the /etc/netmasks file.
# cat /etc/netmasks
#
# The netmasks file associates Internet Protocol (IP) address
# masks with IP network numbers.
#
#
network-number netmask
#
# The term network-number refers to a number obtained from the Internet Network
# Information Center.
#
# Both the network-number and the netmasks are specified in
# "decimal dot" notation, e.g:
#
#       128.32.0.0      255.255.255.0
#
10.106.1.0      255.255.255.0
10.106.2.0      255.255.255.0
Install the latest boot code and firmware onto the HBAs using the vendor's installation utilities.
For an x86 platform, configure the HBAs in the BIOS utility tool and set the boot device order.
The following steps explain how to install the Solaris OS image onto a VLUN for subsequent
booting:
1. Connect the host server to the HP 3PAR Storage System either in a direct connect or fabric
configuration.
2. Create an appropriately sized virtual LUN on the HP 3PAR Storage System for the host server's
OS installation (see Configuring the HP 3PAR Storage System Running InForm OS 3.1.x or
OS 2.3.x (page 9)).
3. Create the host definition on the HP 3PAR Storage System, which represents the host server's
HBA port WWN.
4. Export the VLUN to the host server using any LUN number.
5. Prepare a Solaris OS install server on the same network as the host server, or use the Solaris
OS CD install media.
NOTE: For a Solaris 8 and 9 install image, the required SUN StorEdge SAN software must
also be added to the install server boot image.
6. For a SPARC host server, use the OpenBoot ok prompt to boot the host from the network or
CD:
ok boot net # if using install server
ok boot cdrom # if using CD
For an x86 host server, use the BIOS network boot option (i.e., the F12 key) to boot the host
from the network or CD.
The host server should boot from the install server or CD and enter the Solaris interactive
installation program. Enter appropriate responses for your installation until you come to the
Select Disks menu. The LUN will be listed as more than one device if multiple paths are used.
The LUN will show as zero size, or you may receive the following warning:
No disks found.
> Check to make sure disks are cabled and
powered up.
Enter F2 to exit to a command prompt.
The LUN needs to be labeled. Exit the installation process to a shell prompt.
NOTE: The No disks found message appears if the HP 3PAR Storage System volume is
the only disk attached to the host or if there are multiple disks attached to the host but none
are labeled. If there are labeled disks that will not be used to install Solaris, a list of disks will
be presented, but the unlabeled HP 3PAR Storage System VLUN will not be selectable as an
install target. In this case, exit and proceed to the next step.
7. On the host server, issue the format command to label the HP 3PAR Storage System VLUN.
# format
Searching for disks...WARNING:
/pci@8,700000/pci@1/SUNW,qlc@5/fp@0,0/ssd@w20520002ac000040,a
(ssd0):
corrupt label - wrong magic number
done
c3t20520002AC000040d10: configured with capacity of 20.00GB
AVAILABLE DISK SELECTIONS:
0. c3t20520002AC000040d10 <3PARdata-VV-0000 cyl 17244 alt 2 hd 8 sec 304>
/pci@8,700000/pci@1/SUNW,qlc@5/fp@0,0/ssd@w20520002ac000040,a
Specify disk (enter its number): 0
selecting c3t20520002AC000040d10
[disk formatted]
NOTE: If multiple paths to the LUNs have been used, the LUN appears as multiple instances
in the install program.
8. If the HP 3PAR Storage System virtual volume was exported to the host definition, it will now
be exported on both paths to the host server:
# showvlun -a
Lun VVname Host
----Host_WWN---- Port Type
10 san-boot solaris-server 210000E08B049BA2 0:5:2 host
10 san-boot solaris-server 210100E08B275AB5 1:5:2 host
4. Verify that two representations of the boot volume now appear in the Solaris format command:
# devfsadm
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c3t20520002AC000040d10 <3PARdata-VV-0000 cyl 17244 alt 2 hd 8 sec 304>
/pci@8,700000/pci@1/SUNW,qlc@5/fp@0,0/ssd@w20520002ac000040,a
1. c5t21520002AC000040d10 <3PARdata-VV-0000 cyl 17244 alt 2 hd 8 sec 304>
/pci@8,700000/SUNW,qlc@3,1/fp@0,0/ssd@w21520002ac000040,a
Specify disk (enter its number):
5.
6. Use the Solaris stmsboot command to enable multipathing for the boot device. The host
server will be rebooted when stmsboot -e is run.
# stmsboot -e
WARNING: This operation will require a reboot.
Do you want to continue ? [y/n] (default: y) y
The changes will come into effect after rebooting the system.
Reboot the system now ? [y/n] (default: y) y
7. The stmsboot command makes edits to the /etc/dumpadm.conf and /etc/vfstab files
needed to boot successfully using the new Sun I/O Multipathing single device node for the
multipathed boot device. The new single device node incorporates the HP 3PAR Storage
System VLUN WWN:
Solaris host:
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c7t50002AC000300040d0 <3PARdata-VV-0000 cyl 17244 alt 2 hd 8 sec 304>
/scsi_vhci/ssd@g50002ac000300040
Specify disk (enter its number):
8. For SPARC, the Solaris install process enters a value for boot-device in OpenBoot NVRAM
that represents the hardware path for the first path.
# eeprom
.
.
boot-device=/pci@8,700000/pci@1/SUNW,qlc@5/fp@0,0/disk@w20520002ac000040,a:a
.
.
The hardware path for the second path must be derived and passed to OpenBoot when the
host server needs to boot from the second path. The second path can be deduced and
constructed using the information from the Solaris luxadm display command:
# luxadm display /dev/rdsk/c7t50002AC000300040d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c7t50002AC000300040d0s2
.
.
.
State ONLINE
Controller /devices/pci@8,700000/SUNW,qlc@3,1/fp@0,0
Device Address 21520002ac000040,a
Host controller port WWN 210100e08b275ab5
Class primary
State ONLINE
.
9. For SPARC, create aliases for the alternative hardware paths to the boot disk. The host server
console must be taken down to the OpenBoot ok prompt:
# init 0
#
INIT: New run level: 0
The system is coming down. Please wait.
System services are now being stopped.
Print services stopped.
May 23 16:51:46 sunb1k-01 syslogd: going down on signal 15
The system is down.
syncing file systems... done
Program terminated
{1} ok
ok nvalias path1 /pci@8,700000/pci@1/SUNW,qlc@5/fp@0,0/disk@w20520002ac000040,a:a
ok nvalias path2 /pci@8,700000/SUNW,qlc@3,1/fp@0,0/disk@w21520002ac000040,a:a
SPARC
Set both paths as aliases in the PROM and set the boot-device parameter to both these aliases.
For example:
ok nvalias sanboot1 /pci@1e,600000/pci@0/pci@2/emlx@0/fp@0,0/disk@w20340002ac000120,a
ok nvalias sanboot2 /pci@1e,600000/pci@0/pci@2/emlx@0,1/fp@0,0/disk@w21540002ac000120,a
ok setenv boot-device sanboot1 sanboot2
With these settings and the host server set to auto-boot on power up, the server should boot from
the second path automatically in the event of a failure on the first path.
x86
The ability to boot from either path is configured in the BIOS by adding the paths to the boot
priority.
NOTE: The host server in use should be updated to the newest version of OpenBoot available
from Sun and tested for booting under failed path scenarios.
Return to the ok prompt and configure the PROM for emlxs drivers:
ok show-devs
If there are paths that show lpfc, e.g.
/pci@1c,600000/lpfc@1
/pci@1c,600000/lpfc@1,1
they will need to be changed to emlx:
ok setenv auto-boot? false
ok reset-all
ok " /pci@1c,600000/lpfc@1" select-dev
(Note the space after the first double-quote.)
ok set-sfs-boot
ok reset-all
ok show-devs
The lpfc@1 path should now be emlx@1.
Repeat for the other path:
ok " /pci@1c,600000/lpfc@1,1" select-dev
ok set-sfs-boot
ok reset-all
ok show-devs
ok setenv auto-boot? true
8. Create the boot aliases for the boot VLUN. The correct boot paths can be determined using
the show-devs and probe-scsi-all commands at the ok prompt.
For example:
ok show-devs
/pci@1c,600000/emlx@1/fp@0,0/disk
/pci@1e,600000/emlx@0,1/fp@0,0/disk
From probe-scsi-all there are the devices:
20340002ac000120
21540002ac000120
So the boot paths are:
/pci@1c,600000/emlx@1/fp@0,0/disk@w20340002ac000120,a
/pci@1e,600000/emlx@0,1/fp@0,0/disk@w21540002ac000120,a
9. You can now install the Solaris OS on the LUN using, for example, Jumpstart. The host should
see the LUN as multiple instances. Select one for OS install.
A Configuration Examples
This appendix provides sample configurations used successfully for HP testing purposes.
2.
3.
4.
5.
6.
on HBA instance 0
on HBA instance 1
on HBA instance 2
on HBA instance 3
The format command output now shows everything as expected, with only local disks listed.
CAUTION: Always refer to the Driver notes on the effect of issuing rescan on the driver and
already discovered VLUNs.
NOTE: If a new list of LUNs is exported to the host, only the LUNs that were discovered on
the first run are seen. All others not already read by qlreconfig on the first run are not listed in
the format output because the /dev/dsk and /dev/rdsk links are not removed.
By default, VxVM saves a backup of all disk groups to /etc/vx/cbr/bk. This directory can fill
up quickly and consume disk space. The directories inside /etc/vx/cbr/bk can be removed.
You create file systems with the newfs command. The newfs command accepts only logical raw
device names. The syntax is as follows:
# newfs [ -v ] [ mkfs-options ] raw-special-device
For example, to create a file system on the disk slice c0t3d0s4, you would use the following
command:
# newfs -v /dev/rdsk/c0t3d0s4
The -v option prints the actions in verbose mode. The newfs command calls the mkfs command
to create a file system. You can invoke the mkfs command directly by specifying a -F option
followed by the type of file system and, for UFS, the size of the file system in sectors. For example:
# mkfs -F ufs /dev/rdsk/c0t3d0s4 <size-in-sectors>
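For ZFS (available on Solaris 10 and 11), file systems are created within a storage pool rather than with newfs. A minimal sketch, reusing the illustrative device c0t3d0 (pool and file system names are placeholders):

```shell
# Create a ZFS pool on the whole disk, then a file system inside it
# (testpool and test_fs are illustrative names)
zpool create testpool c0t3d0
zfs create testpool/test_fs
```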
2. Scan the device tree on the host (other commands are required for different HBA drivers):
# cfgadm -o show_FCP_dev -al
# luxadm probe
# devfsadm
3. Run the format command on the host, set the LUN type and label it:
# format
4.
5.
6.
7.
8.
NOTE: For Solaris x86, Auto configure under the type option in format does not
resize the LUN. Resizing can be achieved by selecting other under the type option and
manually entering the new LUN parameters, such as number of cylinders, heads, sectors, etc.
9.
Summary:
WARNING!
Always refer to the Veritas release notes before attempting to grow a volume.
2. Scan the device tree on the host (other commands are required for different HBA drivers):
# cfgadm -o show_FCP_dev -al
# luxadm probe
# devfsadm
3.
4.
5. Rescan the device tree on the host as shown above. Additionally, resize the logical VxVM
object to match the larger LUN size:
# vxdisk -g <disk_group> resize <vx_diskname>
6. On the host, check that the additional space is available in the disk group and grow the volume:
# vxassist -g <disk_group> maxsize
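Once maxsize confirms that free space is available, the volume (and a VxFS file system on it) can be grown; a sketch using vxresize, where the disk group, volume name, and growth amount are placeholders:

```shell
# Grow the VxVM volume into the newly available space
# (disk group, volume name, and +10g increment are illustrative)
vxresize -g <disk_group> <volume_name> +10g
```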
Display disks:
# vxdisk list
Managing Enclosures
Display attributes of all enclosures:
# vxdmpadm listenclosure all
============================================
3PARDATA0 MinimumQ MinimumQ
Setting I/O Policies and Path Attributes
Changing Policies
To change the I/O policy for balancing the I/O load across multiple paths to a disk array or
enclosure:
# vxdmpadm setattr enclosure <enclosure name> iopolicy=policy
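For example, to switch the 3PAR enclosure shown above to round-robin balancing (round-robin is one of the standard VxDMP I/O policies):

```shell
# Set the DMP I/O policy for the 3PAR enclosure to round-robin
vxdmpadm setattr enclosure 3pardata0 iopolicy=round-robin
```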
Listing Controllers
To list the controllers on a host server, use the vxdmpadm(1m) utility with the listctlr option:
# vxdmpadm listctlr all
CTLR-NAME   ENCLR-TYPE   STATE     ENCLR-NAME
=====================================================
c3          3PARdata     ENABLED   3pardata0
c2          3PARdata     ENABLED   3pardata0
c0          Disk         ENABLED   Disk
The vxdmpadm(1m) utility also has a getctlr option to display the physical device path
associated with a controller:
# vxdmpadm getctlr c2
LNAME    PNAME
===============
c2       /pci@80,2000/lpfc@1
Displaying Paths
To list the paths on a host server, use the vxdmpadm(1m) utility with the getsubpaths option:
# vxdmpadm getsubpaths ctlr=CTLR-NAME
To display paths connected to a LUN, use the vxdmpadm(1m) utility with the getsubpaths
option:
# vxdmpadm getsubpaths dmpnodename=node_name
Here is an example:
# vxdmpadm getsubpaths dmpnodename=c2t21d36
NAME         STATE      PATH-TYPE  CTLR-NAME  ENCLR-TYPE  ENCLR-NAME
=============================================================================
c2t21d36s2   ENABLED    -          c2         3PARdata    3pardata0
c2t23d36s2   ENABLED    -          c2         3PARdata    3pardata0
c3t20d36s2   DISABLED   -          c3         3PARdata    3pardata0
c3t22d36s2   DISABLED   -          c3         3PARdata    3pardata0
To display DMP nodes, use the vxdmpadm(1m) utility with the getdmpnode option:
# vxdmpadm getdmpnode nodename=c3t2d1
Here is an example:
# vxdmpadm getdmpnode nodename=c2t21d36s2
NAME         STATE     ENCLR-TYPE   PATHS  ENBL  DSBL  ENCLR-NAME
=========================================================================
c2t21d36s2   ENABLED   3PARdata     4      2     2     3pardata0
B Patch/Package Information
This appendix provides minimum patch requirements for various versions of Solaris and other
associated drivers.
x86
118822-20
118844-19
119374-01
119131-09
119130-04
119375-05
120222-01
120223-01
118822-25
118844-26
119130-04
119131-09
118833-17
119375-13
120222-05
120223-05
118833-17
118855-14
119130-04
119131-09
120222-09
120223-09
118833-33
118855-33
119130-04
119131-09
120222-13
120223-13
118833-36
118855-36
120222-21
120223-21
125081-16
125082-16
127127-11
127128-11
118833-36
118855-36
120222-26
120223-26
127127-11
127128-11
118833-36
118855-36
x86
127127-11
127128-11
139608-02 (emlxs)
139609-02 (emlxs)
118833-36
118855-36
139606-01 (qlc)
139607-01 (qlc)
127127-11
127128-11
141876-05 (emlxs)
141877-05 (emlxs)
118833-36
118855-36
142084-02 (qlc)
142085-02 (qlc)
MU9 SPARC
MU9 x86
127127-11
127128-11
118833-36
118855-36
144188-02 (emlxs)
144189-02 (emlxs)
145098-01 (emlxs)
145097-01 (emlxs)
145096-01 (emlxs)
145099-01 (emlxs)
120224-08 (emlxs)
120225-08 (emlxs)
119130-33 (qlc)
119131-33 (qlc)
143957-03 (qlc)
143958-03 (qlc)
144486-03 (qlc)
144487-03 (qlc)
119088-11 (qlc)
119089-11 (qlc)
MU10 SPARC
MU10 x86
127127-11
127128-11
118833-36
118855-36
144188-02 (emlxs)
144189-02 (emlxs)
145953-06 (emlxs)
145954-06 (emlxs)
146586-03 (oce)
145097-03 (oce)
120224-08 (emlxs)
120225-08 (emlxs)
119130-33 (qlc)
119131-33 (qlc)
146489-05 (qlc)
146490-05 (qlc)
145648-03 (qlge)
145649-03 (qlge)
119088-11 (qlc)
119089-11 (qlc)
For the Emulex OCe10102 CNA card, the following minimum patch revisions are required (MU9):
145098-04 (emlxs)
145099-04 (emlxs)
For the Qlogic QLE8142 CNA card, the following minimum patch revisions are required (MU9):
143957-05 (qlc)
144486-05 (qlc)
143958-05 (qlc)
144487-05 (qlc)
Comment
118558-06
113277-01
114878-02
113040-06
Comment
108974-02
114877-02
111095-14
NOTE:
Features Addressed
111847-08
113039-20
113040-24
fp/fcp/fctl driver
113041-14
fcip driver
113042-18
qlc driver
113043-15
113044-07
114476-09
fcsm driver
114477-04
114478-08
114878-10
JNI driver
119914-12
emlxs driver
Features Addressed
111847-08
113039-20
113040-25
fp/fcp/fctl driver
113041-14
fcip driver
113042-19
qlc driver
113043-15
Features Addressed
113044-07
114476-09
fcsm driver
114477-04
114478-08
114878-10
JNI driver
119914-13
emlxs driver
Features Addressed
111847-08
113039-21
113040-26
fp/fcp/fctl driver
113041-14
fcip driver
113042-19
qlc driver
113043-15
113044-07
114476-09
fcsm driver
114477-04
114478-08
114878-10
JNI driver
119914-14
emlxs driver
Feature Addressed
113039-24
113040-27
113042-20
Patch ID
Feature Addressed
111847-08
111412-23
111095-32
fp/fcp/fctl driver
111096-17
fcip driver
111097-26
qlc driver
111413-20
111846-10
114475-08
fcsm driver
Feature Addressed
113766-05
113767-09
114877-10
JNI driver
119913-12
emlxs driver
Feature Addressed
111846-11
111095-33
fp/fcp/fctl driver
119913-14
emlxs driver
111412-24
111097-27
qlc driver
WARNING!
NOTE: SAN packages are installed on all combinations but they are only enabled for SSTM
combinations.
SPARC Platform
Solaris 9 SAN 4.4.x: QLC driver patch 113042-19 or later (SAN 4.4.14)
Veritas VxVM 4.1MP2_RP3 patch 124358-05 or later (for Solaris 8, 9 and 10)
x86 Platform
SPARC Platform
x86 Platform
Leadville Driver
qlc
emlxs
qlc
emlxs
qlc
emlxs
qlc
emlxs
qlc
emlxs
qlc
emlxs
qlc
emlxs
2.31p v2008.12.11.10.30
(patches:139608-02)
Leadville Driver
qlc
emlxs
2.31p v2008.12.11.10.30
(patches:139609-02)
qlc
emlxs
emlxs
qlc
emlxs
qlc
emlxs
2.30h v20080116-2.30h
(patches:120222-26)
qlc
emlxs
C FCoE-to-FC Connectivity
This appendix provides a basic diagram of FCoE-to-FC connectivity.
Figure 2 FCoE-to-FC Connectivity