How do I configure Device Mapper Multipath on my iSCSI LUNS? (Notes on iSCSI multipathing)
https://access.redhat.com/solutions/16976
Issue
- How do I configure multipath on my iSCSI LUNS?
Environment
- Red Hat Enterprise Linux(RHEL) 5.3 or higher
- A properly configured and active iSCSI target
- iscsi-initiator-utils
- device-mapper-multipath
Resolution
- This solution requires an active iSCSI target. If you do not have an iSCSI SAN and wish to set up an iSCSI target, please refer to: How to setup an iSCSI target using tgtadm in Red Hat Enterprise Linux?
- Multipathed iSCSI is usually configured in one of two ways:
- Redundant switches connected through an interconnect such as a Cisco Inter-Switch Link (ISL). Both switches are part of the same subnet.
- Multiple switches independent of each other, each on a different subnet.
Configuring iSCSI
For configurations where the paths to the iSCSI target travel over different networks or subnets
- Configure the first path through one of your network interfaces:
# service iscsid start
# chkconfig iscsid on
# iscsiadm -m discovery -t st -p <IP of your iSCSI target on network 1> -P 1
# iscsiadm -m discovery -t st -p <IP of your iSCSI target on network 1> -l
- After logging into the target you should see new SCSI block devices created; verify this by executing fdisk -l:
# partprobe
# fdisk -l
- Configure the second path through eth1:
# iscsiadm -m discovery -t st -p <IP of your iSCSI target on network 2> -P 1
# iscsiadm -m discovery -t st -p <IP of your iSCSI target on network 2> -l
For configurations where both paths to the iSCSI target travel over the same network and subnet
- Configure the iSCSI interfaces by creating iSCSI iface bindings for all interfaces, binding by network device name (eth0, alias, VLAN name, etc.) or MAC address:
# service iscsid start
# chkconfig iscsid on
# iscsiadm -m iface -I iscsi-eth0 -o new
# iscsiadm -m iface -I iscsi-eth0 -o update -n iface.net_ifacename -v eth0
# iscsiadm -m iface -I iscsi-eth1 -o new
# iscsiadm -m iface -I iscsi-eth1 -o update -n iface.net_ifacename -v eth1
- Next, verify your targets are available and log in:
# iscsiadm -m discovery -t st -p <IP of your iSCSI target on network> -I iscsi-eth0 -I iscsi-eth1 -P 1
# iscsiadm -m discovery -t st -p <IP of your iSCSI target on network> -I iscsi-eth0 -I iscsi-eth1 -l
- After logging into the target you should see new SCSI block devices created; verify this by executing fdisk -l:
# partprobe
# fdisk -l
For more information on configuring the iSCSI initiator refer to: How do I configure the iscsi-initiator in Red Hat Enterprise Linux 5?
Each LUN has a different World Wide Identifier (WWID). Each SCSI block device with the same WWID is a different path to the same LUN. To verify the WWIDs, perform the following:
# scsi_id -gus /block/sd<my scsi device letter>
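As a hypothetical convenience (not part of the original solution), the per-device check above can be wrapped in a short POSIX shell loop that prints a "WWID device" pair for each SCSI disk and sorts the result, so that multiple paths to the same LUN land on adjacent lines. The function name and output format are illustrative:

```shell
#!/bin/sh
# Illustrative helper: list each sd* disk next to its WWID, sorted so
# that paths to the same LUN appear on adjacent lines. Reuses the
# RHEL 5 era `scsi_id -gus` invocation shown above; prints nothing on
# hosts without SCSI disks or without scsi_id in the PATH.
list_wwids() {
    for dev in /sys/block/sd*; do
        [ -e "$dev" ] || continue                    # glob did not match
        name=${dev##*/}
        wwid=$(scsi_id -gus "/block/$name" 2>/dev/null) || continue
        echo "$wwid $name"
    done | sort
}

list_wwids
```

Any two lines sharing the same leading WWID are two paths to one LUN.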
Configuring Multipath
After configuring the iSCSI layer, multipath must be configured via /etc/multipath.conf. Please note that different SAN vendors will have their own recommendations for configuring the multipath.conf file; their recommendations should be used if they are provided. For more information on the specific settings for your storage, please contact your hardware vendor.
-
Make the following changes to /etc/multipath.conf to set up a simple Multipath configuration with default settings:
- Un-comment the "defaults" stanza by removing the hash symbols on the following lines:
defaults {
        user_friendly_names yes
}
- Comment out the "blacklist" stanza by putting hash symbols on the following lines:
# blacklist {
#        devnode "*"
# }
For more information on device mapper multipath please refer to: Using Device-Mapper Multipath
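If your vendor does not supply settings, a device stanza can override the defaults per array. The fragment below is purely illustrative (the vendor/product strings match the IET example target used later in this article; the option values are examples, not vendor recommendations):

```
devices {
        device {
                vendor               "IET"
                product              "VIRTUAL-DISK"
                path_grouping_policy multibus
                path_checker         tur
                no_path_retry        5
        }
}
```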
- Save the changes to multipath.conf. Start multipathd and ensure that it is configured to start at boot time:
# service multipathd start
# chkconfig multipathd on
- After starting the multipath daemon, the multipath command can be used to view your multipath devices. Example output is as follows:
mpath0 (1IET_00010001) dm-4 IET,VIRTUAL-DISK
[size=10G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 6:0:0:1 sdf 8:80  [active][ready]
 \_ 7:0:0:1 sdh 8:112 [active][ready]
- Using the mpath pseudo-device for the multipathed storage, create a partition and inform the kernel of the change:
# fdisk /dev/mapper/mpath0
# partprobe
- Use the kpartx command to inform multipath of the new partition:
# kpartx -a /dev/mapper/mpath0
- Device mapper will then create a new mpath<device number>p<partition number> pseudo-device. Example:
/dev/mapper/mpath0p1
- Create a file system on the multipathed storage and mount it:
# mkfs.ext3 /dev/mapper/mpath0p1
# mount /dev/mapper/mpath0p1 /<my mount point>
- With the storage mounted, begin failover testing. The following is an example of failover testing via a cable pull on eth1:
- Use the multipath command to verify that all paths are up. Example output:
mpath0 (1IET_00010001) dm-4 IET,VIRTUAL-DISK
[size=10G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 6:0:0:1 sdf 8:80  [active][ready]
 \_ 7:0:0:1 sdh 8:112 [active][ready]
- Pull the cable on eth1. Verify that the path has failed with multipath -ll. Example output:
mpath0 (1IET_00010001) dm-4 IET,VIRTUAL-DISK
[size=10G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 6:0:0:1 sdf 8:80  [active][ready]
 \_ 7:0:0:1 sdh 8:112 [faulty][failed]
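During repeated failover tests it can help to reduce the multipath output to a one-line summary. The sketch below is a hypothetical helper, not part of the original solution: it counts ready and failed paths with awk, and in real use you would pipe `multipath -ll` into it. Here a sample matching the output above is embedded so the logic can be exercised anywhere:

```shell
#!/bin/sh
# Illustrative helper: summarize multipath -ll style output as a
# one-line ready/failed path count. Real use: multipath -ll | count_paths
count_paths() {
    awk '/\[active\]\[ready\]/  { ready++ }
         /\[faulty\]\[failed\]/ { failed++ }
         END { printf "ready=%d failed=%d\n", ready + 0, failed + 0 }'
}

# Exercise the helper on a sample with one healthy and one failed path.
count_paths <<'EOF'
mpath0 (1IET_00010001) dm-4 IET,VIRTUAL-DISK
[size=10G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 6:0:0:1 sdf 8:80  [active][ready]
 \_ 7:0:0:1 sdh 8:112 [faulty][failed]
EOF
```

For the embedded sample this prints `ready=1 failed=1`; after a cable pull you would expect the failed count to rise by one per lost path.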
- The final step in the process is tuning failover timing.
- With the default timeouts in /etc/iscsi/iscsid.conf, multipath failover takes about 1.5 minutes.
- Some users of multipath and iSCSI want lower timeouts so that I/O does not remain queued for long periods of time.
- For more information on lowering multipathed iSCSI failover time refer to How can I improve the failover time of a faulty path when using device-mapper-multipath over iSCSI?
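As an illustration, the main knob is the iSCSI replacement timeout in /etc/iscsi/iscsid.conf: multipath cannot fail the path until this timer expires. The value below is an example only, not a recommendation; consult the linked article before changing production settings:

```
# /etc/iscsi/iscsid.conf -- example only; default is 120 seconds.
# Lowering it shortens failover but fails I/O sooner on transient outages.
node.session.timeo.replacement_timeout = 15
```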