There are two multipathing solutions on Linux. The first is device mapper multipathing, a failover and load balancing solution with many configuration options. The second (mdadm multipathing) is a failover-only solution that requires manual re-enabling of a failed path.
Before using a multipathing solution in a production environment on Linux, it is also important to determine whether the chosen solution is supported with the hardware in use. For example, HP does not support Device Mapper Multipathing.
Initial Configuration
Set user_friendly_names so that the devices are created as /dev/mapper/mpath[n], and comment out the default blacklist (which would otherwise exclude every device).
# vim /etc/multipath.conf
#blacklist {
# devnode "*"
#}
defaults {
user_friendly_names yes
path_grouping_policy multibus
}
Load the needed module and enable the startup service.
# modprobe dm-multipath
# /etc/init.d/multipathd start
# chkconfig multipathd on
Print out the multipathed device.
# multipath -v2
or
# multipath -v3
Configuration
Configure device type in config file.
# cat /sys/block/sda/device/vendor
HP
# cat /sys/block/sda/device/model
HSV200
# vim /etc/multipath.conf
devices {
device {
vendor "HP"
product "HSV200"
path_grouping_policy multibus
no_path_retry "5"
}
}
Configure multipath device in config file.
# cat /var/lib/multipath/bindings
# Format:
# alias wwid
#
mpath0 3600508b400070aac0000900000080000
# vim /etc/multipath.conf
multipaths {
multipath {
wwid 3600508b400070aac0000900000080000
alias mpath0
path_grouping_policy multibus
path_checker readsector0
path_selector "round-robin 0"
failback "5"
rr_weight priorities
no_path_retry "5"
}
}
Add devices that should not be multipathed to the blacklist (e.g. local RAID devices, volume groups).
# vim /etc/multipath.conf
blacklist {
devnode "^cciss!c[0-9]d[0-9]*"
devnode "^vg*"
}
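Devices can also be blacklisted by WWID instead of by devnode pattern; a minimal sketch (the WWID placeholder must be replaced with the WWID of the device you actually want to exclude):

```
blacklist {
        # replace with the WWID of the local device to exclude
        wwid "<WWID-of-the-local-disk>"
}
```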
Show Configured Multipaths.
# dmsetup ls --target=multipath
mpath0 (253, 1)
# multipath -ll
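For the device configured above, the output of multipath -ll might look roughly like the following (illustrative only; path numbers, priorities, and states depend on the actual hardware):

```
mpath0 (3600508b400070aac0000900000080000) dm-1 HP,HSV200
[size=40G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=2][active]
 \_ 0:0:0:1 sda 8:0   [active][ready]
 \_ 1:0:0:1 sdb 8:16  [active][ready]
```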
Partition the device with fdisk, then run kpartx so that device-mapper multipath maps the partition and creates a /dev/mapper/mpath[n]p[n] device for it.
# fdisk /dev/sda
# kpartx -a /dev/mapper/mpath0
# ls /dev/mapper/*
mpath0 mpath0p1
# mkfs.ext3 /dev/mapper/mpath0p1
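To mount the new filesystem persistently, an /etc/fstab entry can be added; a sketch, assuming the mount point /mnt/san (the mount point and options are assumptions, not part of the original):

```
# /etc/fstab entry for the multipathed partition (mount point is an example)
/dev/mapper/mpath0p1  /mnt/san  ext3  defaults  1 2
```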
mdadm Multipathing
Enable and start the mdmpd daemon, which monitors multipath md devices.
# chkconfig mdmpd on
# /etc/init.d/mdmpd start
On the first node (if it is a shared device)
# fdisk /dev/sda
Disk /dev/sda: 42.9 GB, 42949672960 bytes
64 heads, 32 sectors/track, 40960 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
# partprobe
Bind multiple paths together
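The bind command itself is not shown here; a sketch, assuming /dev/sda1 and /dev/sdb1 are two paths to the same LUN (the device names are assumptions):

```
# mdadm -C /dev/md0 --level=multipath --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --detail --scan >> /etc/mdadm.conf
```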
# cat /proc/mdstat
On the second node (copy /etc/mdadm.conf from the first node)
# mdadm -As
# cat /proc/mdstat
Using "systool" we can find the WWNs of the HBAs installed in the Linux host. systool is provided by the rpm "sysfsutils-2.1.0-1.el5", lsscsi by the rpm "lsscsi-0.17-3.el5", and sg_map by the rpm "sg3_utils-1.25-5.el5".
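The invocations themselves are not shown in this copy; typical usage is sketched below (output depends on the installed HBAs):

```
# systool -c fc_host -v | grep port_name
# lsscsi
# sg_map -x
```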
As we can see above, there are four 8 Gigabit FC adapters available in the system. Now we need to see which HBAs are connected to the fabric.
We can now see which HBA ports are actually in use. As we can see below, only the port_ids of host10 and host7 are populated; hence only they are connected to the fabric.
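A sketch of a command that lists the port_ids (the fc_host sysfs class is provided by the FC HBA driver):

```
# systool -c fc_host -v | grep port_id
```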
This command displays only the HBAs connected to the fabric. So the devices are coming from the targets target10:0:0 (host10) and target7:0:0 (host7) respectively.
Now we can see the WWNs of those targets, as mentioned below, with respect to each host (host10 and host7).
Also, we can see the port status using the following command.
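A sketch of such a command (the host numbers follow the example above; "Online" is the expected state for a connected port):

```
# cat /sys/class/fc_host/host7/port_state
Online
# cat /sys/class/fc_host/host10/port_state
Online
```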
Now (see the output below) we can see that the LUNs are coming from targets 7:0:0 and 10:0:0. Alternatively, check using the numerical PCI IDs of the HBAs, d1:00 and 91:00 (see the previous output of lspci).
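One way to confirm which targets the disks come from (a sketch; the target numbers follow the example above):

```
# lsscsi | grep -e "\[7:0:0" -e "\[10:0:0"
```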
Now we have been informed that the SAN team has allocated new LUNs. Before scanning for them, record the present output of multipath -ll and multipath -ll | wc.
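The scan step itself is typically done through sysfs; a sketch, assuming host7 and host10 are the fabric-connected HBAs from above:

```
# multipath -ll > /tmp/multipath-before.txt
# echo "- - -" > /sys/class/scsi_host/host7/scan
# echo "- - -" > /sys/class/scsi_host/host10/scan
# multipath -ll
```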
Now we can see the newly created LUNs (based on the date).