
There are several SAN multipathing solutions available on Linux at the moment. Two of them are discussed in this blog. The first one is device mapper multipathing, a failover and load balancing solution with a lot of configuration options. The second one (mdadm multipathing) is a failover-only solution that requires a failed path to be re-enabled manually. The advantage of mdadm multipathing is that it is very easy to configure.

Before using a multipathing solution in a production environment on Linux, it is also important to determine whether the chosen solution is supported with the hardware in use. For example, HP does not yet support the Device Mapper Multipathing solution on their servers.

Device Mapper Multipathing


Procedure for configuring the system with DM-Multipath:

1. Install the device-mapper-multipath rpm
2. Edit the multipath.conf configuration file:
   o comment out the default blacklist
   o change any of the existing defaults as needed
3. Start the multipath daemons
4. Create the multipath device with the multipath command

Install Device Mapper Multipath


# rpm -ivh device-mapper-multipath-0.4.7-8.el5.i386.rpm
warning: device-mapper-multipath-0.4.7-8.el5.i386.rpm: Header V3 DSA signature:
Preparing...                ########################################### [100%]
   1:device-mapper-multipath ########################################### [100%]
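
A quick way to verify that the package is installed (the version string will of course depend on the package used):

# rpm -q device-mapper-multipath
device-mapper-multipath-0.4.7-8.el5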

Initial Configuration
Set user_friendly_names. The devices will be created as /dev/mapper/mpath[n]. Comment out the default blacklist.

# vim /etc/multipath.conf

#blacklist {
#        devnode "*"
#}

defaults {
        user_friendly_names yes
        path_grouping_policy multibus
}
Load the needed module and enable the startup service.

# modprobe dm-multipath
# /etc/init.d/multipathd start
# chkconfig multipathd on
Print out the multipathed device.

# multipath -v2
or
# multipath -v3
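
To see what multipath would do without actually creating the device maps, a dry run can be done first (optional; -d only prints the maps that would be created):

# multipath -v2 -d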

Configuration
Configure device type in config file.

# cat /sys/block/sda/device/vendor
HP

# cat /sys/block/sda/device/model
HSV200

# vim /etc/multipath.conf
devices {
        device {
                vendor  "HP"
                product "HSV200"
                path_grouping_policy multibus
                no_path_retry "5"
        }
}
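Whenever /etc/multipath.conf is changed (here and in the following steps), the running configuration has to be refreshed before the new settings take effect. A minimal sketch, assuming the multipathed devices are not mounted at this point:

# multipath -F                      # flush the existing, unused multipath maps
# multipath -v2                     # re-create the maps with the new settings
# /etc/init.d/multipathd restart    # restart the daemon so it picks up the new config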
Configure multipath device in config file.

# cat /var/lib/multipath/bindings

# Format:
# alias wwid
#
mpath0 3600508b400070aac0000900000080000

# vim /etc/multipath.conf
multipaths {
        multipath {
                wwid    3600508b400070aac0000900000080000
                alias   mpath0
                path_grouping_policy multibus
                path_checker readsector0
                path_selector "round-robin 0"
                failback "5"
                rr_weight priorities
                no_path_retry "5"
        }
}
Put devices that should not be multipathed on the blacklist (e.g. local RAID devices, volume groups).

# vim /etc/multipath.conf

blacklist {
        devnode "^cciss!c[0-9]d[0-9]*"
        devnode "^vg*"
}
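A rough way to check that the blacklist is effective is to look at the verbose output; blacklisted devices are reported there (the exact wording differs between versions):

# multipath -v3 | grep -i black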
Show the configured multipaths.

# dmsetup ls --target=multipath
mpath0 (253, 1)

# multipath -ll

mpath0 (3600508b400070aac0000900000080000) dm-1 HP,HSV200


[size=10G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=4][active]
\_ 0:0:0:1 sda 8:0 [active][ready]
\_ 0:0:1:1 sdb 8:16 [active][ready]
\_ 1:0:0:1 sdc 8:32 [active][ready]
\_ 1:0:1:1 sdd 8:48 [active][ready]

Format and mount Device


fdisk cannot be used with /dev/mapper/[dev_name] devices. Partition the underlying disk with fdisk instead and then execute the following command, so that device-mapper-multipath creates a /dev/mapper/mpath[n]p[n] device for the partition.
# fdisk /dev/sda

# kpartx -a /dev/mapper/mpath0

# ls /dev/mapper/*
mpath0 mpath0p1

# mkfs.ext3 /dev/mapper/mpath0p1

# mount /dev/mapper/mpath0p1 /mnt/san


After that /dev/mapper/mpath0p1 is the first partition on the multipathed device.
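
To mount the file system automatically after a reboot, an entry can be added to /etc/fstab (a sketch; /mnt/san and the ext3 options are simply the values from this example):

/dev/mapper/mpath0p1   /mnt/san   ext3   defaults   1 2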

Multipathing with mdadm on Linux


The md multipathing solution is a failover-only solution, which means that only one path is used at a time and no load balancing is performed.

Start the MD Multipathing Service

# chkconfig mdmpd on

# /etc/init.d/mdmpd start
On the first Node (if it is a shared device)

Make Label on Disk

# fdisk /dev/sdt

Disk /dev/sdt: 42.9 GB, 42949672960 bytes
64 heads, 32 sectors/track, 40960 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdt1               1       40960    41943024   fd  Linux raid autodetect

# partprobe
Bind multiple paths together

# mdadm --create /dev/md4 --level=multipath --raid-devices=4 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1
Get UUID

# mdadm --detail /dev/md4


UUID : b13031b5:64c5868f:1e68b273:cb36724e
Set md configuration in config file
# vim /etc/mdadm.conf

# Multiple Paths to RAC SAN
DEVICE /dev/sd[qrst]1
ARRAY /dev/md4 uuid=b13031b5:64c5868f:1e68b273:cb36724e

# cat /proc/mdstat
On the second Node (Copy the /etc/mdadm.conf from the first node)

# mdadm -As

# cat /proc/mdstat

Restore a failed path


# mdadm /dev/md4 -f /dev/sdt1 -r /dev/sdt1 -a /dev/sdt1
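
The one-liner above marks the path as faulty, removes it from the array and re-adds it. The same operation spelled out with the long options, using the /dev/md4 array from the example above:

# mdadm /dev/md4 --fail /dev/sdt1      # mark the failed path as faulty (if the kernel has not done so already)
# mdadm /dev/md4 --remove /dev/sdt1    # remove the failed path from the array
# mdadm /dev/md4 --add /dev/sdt1       # re-add the path once it is healthy again
# cat /proc/mdstat                     # verify that all paths are active again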

How to find the HBA and add a new LUN in Red Hat Linux
We have the following utilities in Linux to find HBA-related information.


systool
lsscsi
sg_map

Using systool we can find the WWNs of the HBAs installed in the Linux host. systool is provided by the rpm "sysfsutils-2.1.0-1.el5", lsscsi by the rpm "lsscsi-0.17-3.el5", and sg_map by the rpm "sg3_utils-1.25-5.el5".
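
To check whether these packages are already installed (package versions will vary):

[root@unixway ~]# rpm -q sysfsutils lsscsi sg3_utils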

To list the HBAs connected to the system:

[root@unixway ~]# lspci -nn | grep -i hba


91:00.0 Fibre Channel [0c04]: QLogic Corp. ISP2532-based 8Gb Fibre Channel to
PCI Express HBA [1077:2532] (rev 02)
91:00.1 Fibre Channel [0c04]: QLogic Corp. ISP2532-based 8Gb Fibre Channel to
PCI Express HBA [1077:2532] (rev 02)
d1:00.0 Fibre Channel [0c04]: QLogic Corp. ISP2532-based 8Gb Fibre Channel to
PCI Express HBA [1077:2532] (rev 02)
d1:00.1 Fibre Channel [0c04]: QLogic Corp. ISP2532-based 8Gb Fibre Channel to
PCI Express HBA [1077:2532] (rev 02)

or

[root@unixway ~]# ls -lrt /sys/class/fc_host/host*/port_name


-r--r--r-- 1 root root 4096 Nov 14 16:10 /sys/class/fc_host/host9/port_name
-r--r--r-- 1 root root 4096 Nov 14 16:10 /sys/class/fc_host/host8/port_name
-r--r--r-- 1 root root 4096 Nov 14 16:10 /sys/class/fc_host/host7/port_name
-r--r--r-- 1 root root 4096 Nov 14 16:10 /sys/class/fc_host/host10/port_name

As we can see above, there are four 8 Gigabit FC adapters available in the system. Now we need to see which HBAs are connected to the fabric.

We can check which HBA ports are actually in use. As we can see below, only host10 and host7 have their port_ids populated, so only those two are connected to the fabric.

[root@unixway ~]# systool -c fc_host -A "port_id"


Class = "fc_host"
Class Device = "host10"
port_id = "0x4bc880"
Device = "host10"
Class Device = "host7"
port_id = "0x4cc880"
Device = "host7"
Class Device = "host8"
port_id = "0x000000"
Device = "host8"
Class Device = "host9"
port_id = "0x000000"
Device = "host9"

The following command displays only the HBAs that are connected to the fabric. So the devices are coming from the targets target10:0:0 (host10) and target7:0:0 (host7), respectively.

[root@unixway ~]# systool -c fc_transport -v


Class = "fc_transport"
Class Device = "0:0"
Class Device path = "/sys/class/fc_transport/target10:0:0"
node_name = "0x50060e8006cfe852"
port_id = "0x4bbc40"
port_name = "0x50060e8006cfe852"
uevent =
"PHYSDEVPATH=/devices/pci0000:80/0000:80:09.0/0000:d1:00.1
PHYSDEVBUS=pci
PHYSDEVDRIVER=qla2xxx"
Device = "target10:0:0"
Device path =
"/sys/devices/pci0000:80/0000:80:09.0/0000:d1:00.1/host10/rport-10:0-
0/target10:0:0"
uevent =

Class Device = "0:0"


Class Device path = "/sys/class/fc_transport/target7:0:0"
node_name = "0x50060e8006cfe842"
port_id = "0x4cbc40"
port_name = "0x50060e8006cfe842"
uevent =
"PHYSDEVPATH=/devices/pci0000:80/0000:80:01.0/0000:91:00.0
PHYSDEVBUS=pci
PHYSDEVDRIVER=qla2xxx"
Device = "target7:0:0"
Device path =
"/sys/devices/pci0000:80/0000:80:01.0/0000:91:00.0/host7/rport-7:0-
0/target7:0:0"
uevent =

Now we can see the WWNs (port names) of the HBA ports for each host (host10 and host7), as shown below.

[root@unixway ~]# systool -c fc_host -A "port_name"


Class = "fc_host"
Class Device = "host10"
port_name = "0x21000024ff2d5a95"
Device = "host10"
Class Device = "host7"
port_name = "0x21000024ff2d5c8e"
Device = "host7"
Class Device = "host8"
port_name = "0x21000024ff2d5c8f"
Device = "host8"
Class Device = "host9"
port_name = "0x21000024ff2d5a94"
Device = "host9"
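
The same information is also available directly from sysfs without systool, for example with a small shell loop:

[root@unixway ~]# for h in /sys/class/fc_host/host*; do echo "$h: $(cat $h/port_name)"; done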

Also, we can see the port status using the following command.

[root@unixway ~]# systool -c fc_host -A "port_state"


Class = "fc_host"
Class Device = "host10"
port_state = "Online"
Device = "host10"
Class Device = "host7"
port_state = "Online"
Device = "host7"
Class Device = "host8"
port_state = "Online"
Device = "host8"
Class Device = "host9"
port_state = "Online"
Device = "host9"

Now (see the output below) we can see that the LUNs are coming from the targets 7:0:0 and 10:0:0. Alternatively, check using the numerical PCI IDs of the HBAs, d1:00 and 91:00 (see the previous lspci output).

[root@unixway ~]# ls -l /sys/block/*/device

lrwxrwxrwx 1 root root 0 Nov 14 06:30 /sys/block/sdaa/device -> ../../devices/pci0000:80/0000:80:09.0/0000:d1:00.1/host10/rport-10:0-0/target10:0:0/10:0:0:14
lrwxrwxrwx 1 root root 0 Nov 14 06:30 /sys/block/sdab/device -> ../../devices/pci0000:80/0000:80:01.0/0000:91:00.0/host7/rport-7:0-0/target7:0:0/7:0:0:15
lrwxrwxrwx 1 root root 0 Nov 14 06:30 /sys/block/sdac/device -> ../../devices/pci0000:80/0000:80:01.0/0000:91:00.0/host7/rport-7:0-0/target7:0:0/7:0:0:16
lrwxrwxrwx 1 root root 0 Nov 14 06:30 /sys/block/sdad/device -> ../../devices/pci0000:80/0000:80:01.0/0000:91:00.0/host7/rport-7:0-0/target7:0:0/7:0:0:17
lrwxrwxrwx 1 root root 0 Nov 14 17:10 /sys/block/sda/device -> ../../devices/pci0000:00/0000:00:09.0/0000:41:00.0/host0/target0:2:0/0:2:0:0
lrwxrwxrwx 1 root root 0 Nov 14 06:30 /sys/block/sdae/device -> ../../devices/pci0000:80/0000:80:09.0/0000:d1:00.1/host10/rport-10:0-0/target10:0:0/10:0:0:15
lrwxrwxrwx 1 root root 0 Nov 14 06:30 /sys/block/sdaf/device -> ../../devices/pci0000:80/0000:80:01.0/0000:91:00.0/host7/rport-7:0-0/target7:0:0/7:0:0:18
lrwxrwxrwx 1 root root 0 Nov 14 06:30 /sys/block/sdag/device -> ../../devices/pci0000:80/0000:80:09.0/0000:d1:00.1/host10/rport-10:0-0/target10:0:0/10:0:0:16
lrwxrwxrwx 1 root root 0 Nov 14 06:30 /sys/block/sdah/device -> ../../devices/pci0000:80/0000:80:01.0/0000:91:00.0/host7/rport-7:0-0/target7:0:0/7:0:0:19
lrwxrwxrwx 1 root root 0 Nov 14 06:30 /sys/block/sdai/device -> ../../devices/pci0000:80/0000:80:09.0/0000:d1:00.1/host10/rport-10:0-0/target10:0:0/10:0:0:17
lrwxrwxrwx 1 root root 0 Nov 14 06:30 /sys/block/sdaj/device -> ../../devices/pci0000:80/0000:80:01.0/0000:91:00.0/host7/rport-7:0-0/target7:0:0/7:0:0:20

Output truncated since there are many devices.

Another way to list the devices under target 10:0:0.

[root@unixway ~]# systool -b scsi -v | grep -i target10:0:0|more


Device path = "/sys/devices/pci0000:80/0000:80:09.0/0000:d1:00.1/host10/rport-10:0-0/target10:0:0/10:0:0:0"
Device path = "/sys/devices/pci0000:80/0000:80:09.0/0000:d1:00.1/host10/rport-10:0-0/target10:0:0/10:0:0:1"
Device path = "/sys/devices/pci0000:80/0000:80:09.0/0000:d1:00.1/host10/rport-10:0-0/target10:0:0/10:0:0:10"
Device path = "/sys/devices/pci0000:80/0000:80:09.0/0000:d1:00.1/host10/rport-10:0-0/target10:0:0/10:0:0:100"
Another way to list the devices under target 10:0:0 (in a detailed manner)

[root@unixway ~]# systool -c scsi_disk -v | grep target10:0:0 |more


uevent = "PHYSDEVPATH=/devices/pci0000:80/0000:80:09.0/0000:d1:00.1/host10/rport-10:0-0/target10:0:0/10:0:0:0
Device path = "/sys/devices/pci0000:80/0000:80:09.0/0000:d1:00.1/host10/rport-10:0-0/target10:0:0/10:0:0:0"
uevent = "PHYSDEVPATH=/devices/pci0000:80/0000:80:09.0/0000:d1:00.1/host10/rport-10:0-0/target10:0:0/10:0:0:1
Device path = "/sys/devices/pci0000:80/0000:80:09.0/0000:d1:00.1/host10/rport-10:0-0/target10:0:0/10:0:0:1"
uevent = "PHYSDEVPATH=/devices/pci0000:80/0000:80:09.0/0000:d1:00.1/host10/rport-10:0-0/target10:0:0/10:0:0:10
Device path = "/sys/devices/pci0000:80/0000:80:09.0/0000:d1:00.1/host10/rport-10:0-0/target10:0:0/10:0:0:10"

Another way of displaying the disks and their targets is lsscsi, which will display the disks and the targets they are coming from.

[root@unixway ~]# lsscsi


[0:2:0:0] disk LSI MR9261-8i 2.90 /dev/sda
[7:0:0:0] disk HITACHI OPEN-V 7005 /dev/sdb
[7:0:0:1] disk HITACHI OPEN-V 7005 /dev/sdd
[7:0:0:2] disk HITACHI OPEN-V 7005 /dev/sdf
[7:0:0:5] disk HITACHI OPEN-V 7005 /dev/sdh
[7:0:0:6] disk HITACHI OPEN-V 7005 /dev/sdj
[7:0:0:7] disk HITACHI OPEN-V 7005 /dev/sdl
[7:0:0:8] disk HITACHI OPEN-V 7005 /dev/sdn
[7:0:0:9] disk HITACHI OPEN-V 7005 /dev/sdp
[10:0:0:100] disk HITACHI OPEN-V 7005 /dev/sdcx
[10:0:0:101] disk HITACHI OPEN-V 7005 /dev/sdcz
[10:0:0:102] disk HITACHI OPEN-V 7005 /dev/sddb
[10:0:0:103] disk HITACHI OPEN-V 7005 /dev/sddd
[10:0:0:104] disk HITACHI OPEN-V 7005 /dev/sddf
[10:0:0:105] disk HITACHI OPEN-V 7005 /dev/sddh
[10:0:0:107] disk HITACHI OPEN-V 7005 /dev/sddj
[10:0:0:108] disk HITACHI OPEN-V 7005 /dev/sddl
[10:0:0:109] disk HITACHI OPEN-V 7005 /dev/sddn

Now we have been informed that the SAN team allocated new LUNs. Before scanning for the LUNs, take the present output of multipath -ll and multipath -ll | wc so it can be compared afterwards.
echo "- - -" > /sys/class/scsi_host/host7/scan


echo "- - -" > /sys/class/scsi_host/host10/scan
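
After the rescan, run multipath again and compare the number of paths with the count taken before the scan (a quick sanity check; multipathd usually picks up the new paths on its own):

[root@unixway ~]# multipath -v2
[root@unixway ~]# multipath -ll | wc -l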

Now we can see the newly created LUNs (based on the date).

[root@unixway ~]# ls -ltr /dev/mapper/mpath* |tail -2


brw-rw---- 1 root disk 253, 65 nov 15 20:33 /dev/mapper/mpath64
brw-rw---- 1 root disk 253, 66 nov 15 20:33 /dev/mapper/mpath65

[root@unixway~]# multipath -ll mpath64


mpath64 (360060e8006cfe8000000cfe800000c18) dm-65 HITACHI,OPEN-V
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 10:0:0:107 sddj 71:16 active ready running
`- 7:0:0:107 sddk 71:32 active ready running

[root@unixway~]# multipath -ll mpath65


mpath65 (360060e8006cfe8000000cfe800000c17) dm-66 HITACHI,OPEN-V
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 7:0:0:108 sddm 71:64 active ready running
`- 10:0:0:108 sddl 71:48 active ready running
Now we can create a file system using the mpath devices shown above.
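
For example, putting a file system on one of the new devices and mounting it could look like this (a sketch only; mpath64 is used here as a whole device without a partition table, and /mnt/newlun is just an example mount point):

[root@unixway ~]# mkfs.ext3 /dev/mapper/mpath64
[root@unixway ~]# mkdir -p /mnt/newlun
[root@unixway ~]# mount /dev/mapper/mpath64 /mnt/newlun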
