Multipath I/O and Redundant Controllers | There are several multipath I/O solutions either already available or currently being developed for the Linux environment. SIOS Technology Corp. is actively working with a number of server vendors, storage vendors, adapter vendors and driver maintainers to enable LifeKeeper to work with their multipath I/O solutions. LifeKeeper’s use of SCSI reservations to protect data integrity presents some special requirements that frequently are not met by the initial implementation of these solutions. Refer to the technical notes below for supported disk arrays to determine if a given array is supported with multiple paths and with a particular multipath solution. Unless an array is specifically listed as being supported by LifeKeeper with multiple paths and with a particular multipath solution, it must be assumed that it is not. |
Heavy I/O in Multipath Configurations | In multipath configurations, performing heavy I/O while paths are being manipulated can cause a system to appear temporarily unresponsive. When the multipath software moves access to a LUN from one path to another, it must also move any outstanding I/Os to the new path. Rerouting the I/Os can delay their response times. If additional I/Os continue to be issued during this time, they are queued in the system and can exhaust the memory available to all processes. Under very heavy I/O loads, these delays and low-memory conditions can make the system so unresponsive that LifeKeeper may detect the server as down and initiate a failover. Many factors affect how frequently this issue may be seen.
For example, during testing of the IBM DS4000 multipath configuration with RDAC, when the I/O throughput to the DS4000 was greater than 190 MB per second and path failures were simulated, LifeKeeper would falsely detect a failed server approximately one time out of twelve. The servers used in this test were IBM x345 servers with dual Xeon 2.8GHz processors and 2 GB of memory connected to a DS4400 with 8 volumes (LUNs) per server in use. To avoid these false failovers, the LifeKeeper parameter LCMNUMHBEATS (in /etc/default/LifeKeeper) was increased to 16. With this change, LifeKeeper waits approximately 80 seconds before determining that an unresponsive system is dead, rather than the default wait time of approximately 15 seconds. |
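The change described above is a one-line edit (the value 16 comes from the test configuration described here; tune it for your environment):

  # /etc/default/LifeKeeper
  # Heartbeats to wait before declaring an unresponsive system dead.
  # 16 beats yields a wait of approximately 80 seconds (default is ~15 seconds).
  LCMNUMHBEATS=16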
Special Considerations for Switchovers with Large Storage Configurations | With some large storage configurations (for example, multiple logical volume groups with 10 or more LUNs in each volume group), LifeKeeper may not be able to complete a sendevent within the default timeout of 300 seconds when a failure is detected. When this happens, the switchover to the backup system fails: not all resources are brought in service, and an error message is logged in the LifeKeeper log. The recommendation for large storage configurations is to change SCSIERROR from “event” to “halt” in the /etc/default/LifeKeeper file. This causes LifeKeeper to perform a “halt” on a SCSI error, after which LifeKeeper performs a successful failover to the backup system. |
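The recommended change is likewise a single parameter (names and values are as given above):

  # /etc/default/LifeKeeper
  # Halt the node on a SCSI error instead of raising a sendevent,
  # allowing a clean failover to the backup system.
  SCSIERROR=halt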
HP 3PAR StoreServ 7200 FC | The HP 3PAR StoreServ 7200 was tested by a SIOS Technology Corp. partner with the following configurations: HP 3PAR StoreServ 7200 (Firmware (HP 3PAR OS) version 3.1.2) using QLogic QMH2572 8Gb FC HBA for HP BladeSystem c-Class (Firmware version 5.06.02 (90d5)), driver version 8.03.07.05.06.2-k (RHEL bundled) with DMMP (device-mapper-1.02.66-6, device-mapper-multipath-0.4.9-46.el6). The test was performed with SPS for Linux v8.1.1 using RHEL 6.2 (x86_64). Note: 3PAR StoreServ 7200 returns a reservation conflict with the default path checker. To avoid this conflict, set the following parameter in “/etc/default/LifeKeeper”: DMMP_REGISTRATION_TYPE=hba |
HP 3PAR StoreServ 7400 FC | The HP 3PAR StoreServ 7400 was tested by a SIOS Technology Corp. partner with the following configurations: HP 3PAR StoreServ 7400 (Firmware (HP 3PAR OS) version 3.1.2) with an HP DL380p Gen8 with an Emulex LightPulse Fibre Channel SCSI HBA (driver version 8.3.5.45.4p) and DMMP (device-mapper-1.02.66-6, device-mapper-multipath-0.4.9-46.el6). The test was performed with LifeKeeper for Linux v8.1.1 using RHEL 6.2 (x86_64). Note: The 3PAR StoreServ 7400 returns a reservation conflict with the default path checker. To avoid this conflict, set the following parameter in “/etc/default/LifeKeeper”: DMMP_REGISTRATION_TYPE=hba In addition, user-friendly device names are not supported; set the following parameter in “multipath.conf”: user_friendly_names no |
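For reference, a minimal sketch of the multipath.conf fragment implied by the note above; the setting normally belongs in the defaults section (merge it into your existing file rather than replacing it):

  # /etc/multipath.conf
  defaults {
      user_friendly_names no
  }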
HP 3PAR StoreServ 7400 iSCSI (multipath configuration using the DMMP Recovery Kit) | The HP 3PAR StoreServ 7400 was tested by a SIOS Technology Corp. partner with the following configurations: HP 3PAR StoreServ 7400 (Firmware (HP 3PAR OS) version 3.1.3) using the HP Ethernet 10Gb 2-port 560SFP+ Adapter (network driver ixgbe-3.22.0.2) with iSCSI (iscsi-initiator-utils-6.2.0.873-10.el6.x86_64) and DMMP (device-mapper-1.02.79-8.el6, device-mapper-multipath-0.4.9-72.el6). Note: The 3PAR StoreServ 7400 iSCSI returns a reservation conflict. To avoid this conflict, set the following parameter in “/etc/default/LifeKeeper”: DMMP_REGISTER_IGNORE=TRUE |
HP 3PAR StoreServ 7400 iSCSI (using Quorum/Witness Kit) | The HP 3PAR StoreServ 7400 was tested by a SIOS Technology Corp. partner with the following configurations: iSCSI (iscsi-initiator-utils-6.2.0.872-21.el6.x86_64), DMMP (device-mapper-multipath-0.4.9-41, device-mapper-1.02.62-3), nx_nic v4.0.588. DMMP with the DMMP Recovery Kit on RHEL 6.1 must be used in combination with the Quorum/Witness Server Kit and STONITH. To disable SCSI reservations, set RESERVATIONS=none in “/etc/default/LifeKeeper”. The server must have an interface based on IPMI 2.0. |
HP 3PAR StoreServ 10800 FC | The HP 3PAR StoreServ 10800 FC was tested by a SIOS Technology Corp. partner with the following configurations: Firmware (HP 3PAR OS) version 3.1.2 with an HP DL380p Gen8 with an Emulex LightPulse Fibre Channel HBA (driver version 8.3.5.45.4p) and DMMP (device-mapper-1.02.66-6, device-mapper-multipath-0.4.9-46.el6). The test was performed with SPS for Linux v8.1.2 using RHEL 6.2 (x86_64). Note: The 3PAR StoreServ 10800 FC returns a reservation conflict with the default path checker. To avoid this conflict, set the following parameter in “/etc/default/LifeKeeper”: DMMP_REGISTRATION_TYPE=hba In addition, user-friendly device names are not supported; set the following parameter in “multipath.conf”: user_friendly_names no |
HP MSA1040/2040fc | The HP MSA 2040 Storage FC was tested by a SIOS Technology Corp. partner with the following configurations: HP MSA 2040 Storage FC (Firmware GL101R002) using the HP SN1000Q 16Gb 2P FC HBA QW972A (Firmware version 6.07.02, driver version 8.04.00.12.06.0-k2 (RHEL bundled)) with DMMP (device-mapper-1.02.74-10, device-mapper-multipath-0.4.9-56). The test was performed with LifeKeeper for Linux v8.1.2 using RHEL 6.3 (x86_64). |
HP P9500/XP | Certified by Hewlett-Packard Company using SIOS LifeKeeper for Linux v7.2 or later. The model tested was the HP P9500/XP, which has been qualified to work with LifeKeeper on the following:
HP StoreVirtual 4330 iSCSI (multipath configuration using the DMMP Recovery Kit) | The HP StoreVirtual 4330 was tested by a SIOS Technology Corp. partner with the following configurations: HP StoreVirtual 4330 (Firmware HP LeftHand OS 10.5) using the HP Ethernet 1Gb 4-port 331FLR adapter (network driver tg3-3.125g) with iSCSI (iscsi-initiator-utils-6.2.0.872-41.el6.x86_64) and DMMP (device-mapper-1.02.74-10.el6, device-mapper-multipath-0.4.9-56.el6). |
StoreVirtual (LeftHand) series OS (SAN/iQ) version 11.00 iSCSI (multipath configuration using the DMMP Recovery Kit) | OS (SAN/iQ) version 11.00 is supported on HP StoreVirtual (LeftHand) storage. All StoreVirtual series models are supported, including the StoreVirtual VSA virtual storage appliance. This storage was tested with the following configuration: StoreVirtual VSA + RHEL 6.4 (x86_64) + DMMP |
HP StoreVirtual 4730 iSCSI (multipath configuration using the DMMP Recovery Kit) | The HP StoreVirtual 4730 was tested by a SIOS Technology Corp. partner with the following configurations: HP StoreVirtual 4730 (Firmware HP LeftHand OS 11.5) using the HP FlexFabric 10Gb 2-port 536FLB Adapter (network driver bnx2x-1.710.40) with iSCSI (iscsi-initiator-utils-6.2.0.873-10.el6.x86_64) and DMMP (device-mapper-1.02.79-8.el6, device-mapper-multipath-0.4.9-72.el6). |
HP StoreVirtual LeftHand OS version 11.5 iSCSI (multipath configuration using the DMMP Recovery Kit) | LeftHand OS version 11.5 is supported on HP StoreVirtual (LeftHand) storage. All StoreVirtual series models are supported, including the StoreVirtual VSA virtual storage appliance. This storage was tested with the following configuration: StoreVirtual 4730 (11.5.00.0673.0) + RHEL 6.5 (x86_64) + DMMP (device-mapper-1.02.79-8.el6.x86_64, device-mapper-multipath-0.4.9-72.el6.x86_64) |
HP StoreVirtual 4330 iSCSI (using Quorum/Witness Kit) | The HP StoreVirtual 4330 was tested by a SIOS Technology Corp. partner with the following configurations: HP StoreVirtual 4330 (Firmware HP LeftHand OS 10.5) using iSCSI (iscsi-initiator-utils-6.2.0.872-41.el6.x86_64), bonding (version 3.6.0) and tg3 (version 3.125g). To disable SCSI reservations, set RESERVATIONS=none in “/etc/default/LifeKeeper”. |
IBM SAN Volume Controller (SVC) | Certified by partner testing in a single path configuration. Certified by SIOS Technology Corp. in multipath configurations using the Device Mapper Multipath Recovery Kit. |
IBM Storwize V7000 iSCSI | The IBM Storwize V7000 (Firmware Version 6.3.0.1) has been certified by partner testing using iSCSI (iscsi-initiator-utils-6.2.0.872-34.el6.x86_64) with DMMP (device-mapper-1.02.66-6.el6, device-mapper-multipath-0.4.9-46.el6). The test was performed with LifeKeeper for Linux v7.5 using RHEL 6.2. Restriction: The IBM Storwize V7000 must be used in combination with the Quorum/Witness Server Kit and STONITH. Disable SCSI reservations by setting the following in /etc/default/LifeKeeper:
RESERVATIONS=none |
IBM Storwize V7000 FC | The IBM Storwize V7000 FC has been certified by partner testing in multipath configurations on Red Hat Enterprise Linux Server release 6.2 (Santiago), HBA: QLE2562, DMMP: 0.4.9-46. |
IBM Storwize V3700 FC | The IBM Storwize V3700 FC has been certified by partner testing in multipath configurations on Red Hat Enterprise Linux Server release 6.5 (Santiago), HBA: QLE2560, DMMP: 0.4.9-72. |
IBM XIV Storage System | Certified by partner testing in multipath configurations only on Red Hat Enterprise Linux release 5.6, HBA: NEC N8190-127 Single CH 4Gbps (Emulex LPe1150 equivalent), XIV Host Attachment Kit version 1.7.0. Note: If you need to create more than 32 LUNs on the IBM XIV Storage System with LifeKeeper, please contact your IBM sales representative for details. |
Dell EqualLogic PS4000/4100/4110/6000/6010/6100/6110/6500/6510 | The Dell EqualLogic was tested by a SIOS Technology Corp. partner with the following configurations: Dell EqualLogic PS4000/4100/4110/6000/6010/6100/6110/6500/6510 using DMMP with the DMMP Recovery Kit on RHEL 5.3 with iscsi-initiator-utils-6.2.0.868-0.18.el5. With a large number of LUNs (over 20), change the REMOTETIMEOUT setting in /etc/default/LifeKeeper to REMOTETIMEOUT=600. |
Fujitsu ETERNUS DX60 S2 / DX80 S2 / DX90 S2 iSCSI
ETERNUS DX410 S2 / DX440 S2 iSCSI
ETERNUS DX8100 S2 / DX8700 S2 iSCSI
ETERNUS DX100 S3 / DX200 S3 iSCSI
ETERNUS DX500 S3 / DX600 S3 iSCSI
ETERNUS DX200F iSCSI
ETERNUS DX60 S3 iSCSI
ETERNUS AF250 / AF650 iSCSI
ETERNUS DX60 S4 / DX100 S4 / DX200 S4 iSCSI
ETERNUS DX500 S4 / DX600 S4 / DX8900 S4 iSCSI
ETERNUS AF250 S2 / AF650 S2 iSCSI
ETERNUS DX60 S5 / DX100 S5 / DX200 S5 iSCSI
ETERNUS DX500 S5 / DX600 S5 / DX900 S5 iSCSI
ETERNUS AF150 S3 / AF250 S3 / AF650 S3 iSCSI |
When using the LifeKeeper DMMP ARK for a multipath configuration, the following parameters must be set in /etc/multipath.conf:
prio alua
path_grouping_policy group_by_prio
failback immediate
no_path_retry 10
path_checker tur |
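As an illustration, these parameters would typically be applied in a device section of /etc/multipath.conf. The vendor/product match strings below are assumptions for the sketch; take the exact strings for your array from Fujitsu's DMMP documentation:

  devices {
      device {
          # Match strings are illustrative assumptions, not from this document
          vendor "FUJITSU"
          product "ETERNUS_DX.*"
          prio alua
          path_grouping_policy group_by_prio
          failback immediate
          no_path_retry 10
          path_checker tur
      }
  }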
Fujitsu ETERNUS DX60 S2 / DX80 S2 / DX90 S2
ETERNUS DX410 S2 / DX440 S2
ETERNUS DX8100 S2 / DX8700 S2
ETERNUS DX100 S3 / DX200 S3 / DX500 S3 / DX600 S3
ETERNUS DX200F
ETERNUS DX60 S3
ETERNUS DX8700 S3 / DX8900 S3
ETERNUS AF250 / AF650
ETERNUS DX60 S4 / DX100 S4 / DX200 S4
ETERNUS DX500 S4 / DX600 S4 / DX8900 S4
ETERNUS AF250 S2 / AF650 S2
ETERNUS DX60 S5 / DX100 S5 / DX200 S5
ETERNUS DX500 S5 / DX600 S5 / DX900 S5
ETERNUS AF150 S3 / AF250 S3 / AF650 S3 |
When using the LifeKeeper DMMP ARK for a multipath configuration, the following parameters must be set in /etc/multipath.conf:
prio alua
path_grouping_policy group_by_prio
failback immediate
no_path_retry 10
path_checker tur
When using the ETERNUS Multipath Driver for a multipath configuration, no parameters need to be set in any configuration file. |
NEC iStorage M10e iSCSI (multipath configuration using the SPS Recovery Kit) | The NEC iStorage M10e iSCSI was tested by a SIOS Technology Corp. partner with the following configurations: NEC iStorage M10e iSCSI + 1GbE NIC + iSCSI (iscsi-initiator-utils-6.2.0.873-10.el6.x86_64), SPS (sps-utils-5.3.0-0.el6, sps-driver-E-5.3.0-2.6.32.431.el6) |
NEC iStorage Storage Path Savior Multipath I/O | Protecting Applications and File Systems That Use Multipath Devices: In order for SPS to configure and protect applications or file systems that use SPS devices, the SPS Recovery Kit must be installed. Once the SPS Kit is installed, simply creating an application hierarchy that uses one or more of the multipath device nodes will automatically incorporate the new resource types provided by the SPS Kit.
Multipath Device Nodes: To use the SPS Kit, any file systems and raw devices must be mounted or configured on the multipath device nodes (/dev/dd*) rather than on the native /dev/sd* device nodes.
Use of SCSI-3 Persistent Reservations: The SPS Kit uses SCSI-3 persistent reservations with a “Write Exclusive” reservation type. This means that devices reserved by one node in the cluster will remain read-accessible to other nodes in the cluster, but those other nodes will be unable to write to the device. Note that this does not mean that you can expect to be able to mount file systems on those other nodes for ongoing read-only access. LifeKeeper uses the sg_persist utility to issue and monitor persistent reservations. If necessary, LifeKeeper will install the sg_persist(8) utility.
Tested Environment: The SPS Kit has been tested and certified with the NEC iStorage disk array using Emulex HBAs and the Emulex lpfc driver. The kit is expected to work equally well with other NEC iStorage D, S and M series arrays supported by SPS. Tested models (with Emulex HBAs): iStorage D-10, iStorage M100.
Multipath Software Requirements: The SPS Kit has been tested with SPS for Linux 3.3.001. There are no known dependencies on the version of the SPS package installed.
Installation Requirements: SPS software must be installed prior to installing the SPS Recovery Kit.
Adding or Repairing SPS Paths: When LifeKeeper brings an SPS resource into service, it establishes a persistent reservation registered to each path that was active at that time. If new paths are added after the initial reservation, or if failed paths are repaired and SPS automatically reactivates them, those paths will not be registered as part of the reservation until the next LifeKeeper quickCheck execution for the SPS resource. If SPS allows any writes to such a path before then, the resulting reservation conflicts will be logged to the system message file. The SPS driver will retry these I/Os on a registered path, resulting in no observable failures to the application. Once quickCheck registers the path, subsequent writes will succeed. |
Pure Storage FA-400 Series FC (Multipath configuration using the DMMP Recovery Kit) | Certified by partner testing in a multipath FC configuration using the DMMP Recovery Kit. |
QLogic Drivers | For other supported fibre channel arrays with QLogic adapters, use the qla2200 or qla2300 driver, version 6.03.00 or later. |
Emulex Drivers | For the supported Emulex fibre channel HBAs, you must use the lpfc driver v8.0.16 or later. |
Adaptec 29xx Drivers | For supported SCSI arrays with Adaptec 29xx adapters, use the aic7xxx driver, version 6.2.0 or later, provided with the OS distribution. |
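Where these minimum driver versions apply, the installed driver version can be checked with modinfo, for example:

  modinfo qla2300 | grep -i ^version    # QLogic
  modinfo lpfc | grep -i ^version       # Emulex
  modinfo aic7xxx | grep -i ^version    # Adaptec 29xx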
HP Multipath I/O Configurations
Multipath Cluster Installation Using Secure Path | For a fresh installation of a multiple path cluster that uses Secure Path, perform these steps:
Secure Path Persistent Device Nodes | Secure Path supports “persistent” device nodes of the form /dev/spdev/spXX, where XX is the device name. These nodes are symbolic links to the specific SCSI device nodes /dev/sdXX. LifeKeeper v4.3.0 or later (which added support for symbolic links to SCSI device nodes) recognizes these devices as if they were the “normal” SCSI device nodes /dev/sdXX. LifeKeeper maintains its own device name persistence, both across reboots and across cluster nodes, by directly detecting whether a device is, for example, /dev/sda1 or /dev/sdq1, and then using the correct device node. |
Active/Passive Controllers and Controller Switchovers | The MSA1000 implements multipathing by having one controller active with the other controller in standby mode. When there is a problem with either the active controller or the path to the active controller, the standby controller is activated to take over operations. When a controller is activated, it takes some time for the controller to become ready. Depending on the number of LUNs configured on the array, this can take 30 to 90 seconds. During this time, IOs to the storage will be blocked until they can be rerouted to the newly activated controller. |
Single Path on Boot Up Does Not Cause Notification | If a server can access only a single path to the storage when the system is booted, there will be no notification of the problem. This can happen if a system is rebooted while there is a physical path failure as noted above, but transient path failures have also been observed. Any time a system is booted, the administrator should verify that all paths to the storage are properly configured and, if they are not, take action to repair any hardware problems or reboot the system to resolve a transient problem. |
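One way to perform this check depends on the multipath software in use; a sketch (the Secure Path command assumes its spmgr utility, and multipath -ll applies only to DMMP-based configurations):

  # HP Secure Path: display path status
  spmgr display
  # Device Mapper Multipath: list maps and per-path status
  multipath -ll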
Hitachi Multipath I/O Configurations
Protecting Applications and File Systems That Use Multipath Devices | In order for LifeKeeper to configure and protect applications or file systems that use HDLM multipath devices, the HDLM Recovery Kit must be installed. Once the HDLM Kit is installed, simply creating an application hierarchy that uses one or more of the multipath device nodes will automatically incorporate the new resource types provided by the HDLM Kit. |
Multipath Device Nodes | To use the HDLM Kit, any file systems and raw devices must be mounted or configured on the multipath device nodes (/dev/sddlm*) rather than on the native /dev/sd* device nodes. |
Use of SCSI-3 Persistent Reservations | The HDLM Kit uses SCSI-3 persistent reservations with a “Write Exclusive” reservation type. This means that devices reserved by one node in the cluster will remain read-accessible to other nodes in the cluster, but those other nodes will be unable to write to the device. Note that this does not mean that you can expect to be able to mount file systems on those other nodes for ongoing read-only access. LifeKeeper uses the sg_persist utility to issue and monitor persistent reservations. If necessary, LifeKeeper will install the sg_persist(8) utility. |
Hardware Requirements | The HDLM Kit has been tested and certified with the Hitachi SANRISE AMS1000 disk array using QLogic qla2432 HBAs with the 8.02.00-k5-rhel5.2-04 driver and a Silkworm 3800 FC switch. The kit is expected to work equally well with other Hitachi disk arrays. The HDLM Kit has also been certified with the SANRISE AMS series, SANRISE USP and Hitachi VSP. The HBA and the HBA driver must be supported by HDLM. The BR1200 is certified by Hitachi Data Systems; both single path and multipath configurations require the RDAC driver. Only the BR1200 configuration using the RDAC driver is supported; the BR1200 configuration using HDLM (HDLM ARK) is not supported. |
Multipath Software Requirements | The HDLM Kit has been tested with HDLM for Linux as follows: 05-80, 05-81, 05-90, 05-91, 05-92, 05-93, 05-94, 6.0.0, 6.0.1, 6.1.0, 6.1.1, 6.1.2, 6.2.0, 6.2.1, 6.3.0, 6.4.0, 6.4.1, 6.5.0, 6.5.1, 6.5.2, 6.6.0, 6.6.2, 7.2.0, 7.2.1, 7.3.0, 7.3.1, 7.4.0, 7.4.1, 7.5.0, 7.6.0, 7.6.1, 8.0.0, 8.0.1, 8.1.0, 8.1.1, 8.1.2, 8.1.3, 8.1.4, 8.2.0, 8.2.1, 8.4.0, 8.5.0, 8.5.1, 8.5.2, 8.5.3, 8.6.0, 8.6.1, 8.6.2, 8.6.4, 8.6.5, 8.7.0, 8.7.1, 8.7.2, 8.7.3, 8.7.4, 8.7.6, 8.7.7, 8.7.8, 8.8.0, 8.8.1. There are no known dependencies on the version of the HDLM package installed. Note: The product name changed to “Hitachi Dynamic Link Manager Software (HDLM)” for HDLM 6.0.0 and later. Versions older than 6.0.0 (05-9x) are named “Hitachi Command Dynamic Link Manager (HDLM)”. Note: HDLM version 6.2.1 or later is not supported by HDLM Recovery Kit v6.4.0-2. If you need to use such a version of HDLM, use HDLM Recovery Kit v7.2.0-1 or later with LK Core v7.3 or later. Note: If using LVM with HDLM, a version supported by HDLM is necessary. Also, a filter setting should be added to /etc/lvm/lvm.conf to ensure that the system does not detect the /dev/sd* devices corresponding to the /dev/sddlm* devices. If you need more information, please see “LVM Configuration” in the HDLM manual. |
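An illustrative lvm.conf filter for this purpose is sketched below; it accepts HDLM device nodes and rejects the underlying /dev/sd* nodes. The exact pattern should be taken from the “LVM Configuration” section of the HDLM manual:

  # /etc/lvm/lvm.conf
  filter = [ "a|^/dev/sddlm|", "r|^/dev/sd|" ]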
Linux Distribution Requirements | HDLM is supported in the following distributions:
RHEL 4 (AS/ES) (x86 or x86_64) Update 1, 2, 3, 4, Update 4 Security Fix (*2), 4.5, 4.5 Security Fix (*4), 4.6, 4.6 Security Fix (*8), 4.7, 4.7 Security Fix (*9), 4.8, 4.8 Security Fix (*12) (x86_64 (*1), non-English version)
RHEL 5, 5.1, 5.1 Security Fix (*5), 5.2, 5.2 Security Fix (*6), 5.3, 5.3 Security Fix (*10), 5.4, 5.4 Security Fix (*11), 5.5, 5.5 Security Fix (*13), 5.6, 5.6 Security Fix (*14), 5.7, 5.8, 5.9, 5.10, 5.11 (x86/x86_64 (*1), non-English version)
RHEL 6, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 (x86/x86_64 (*1), non-English version) (*15)
RHEL 7, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9 (x86_64 (*1), non-English version)
RHEL 8.1, 8.2, 8.3 (x86_64 (*1), non-English version)
(*1) AMD Opteron (Single Core, Dual Core) or Intel EM64T architecture CPU with x86_64 kernel.
(*3) Hitachi does not support the RHEL4 U2 environment.
(*38) The following kernels are supported: x86_64: 4.18.0-193.28.1.el8_2.x86_64
(*39) Only iSCSI environments are supported.
The remaining numbered footnotes ((*2), (*4) through (*37), (*40) and (*41)) each indicate that only specific kernel versions of the corresponding release are supported; refer to the HDLM documentation for the specific kernel lists. |
Installation Requirements | HDLM software must be installed prior to installing the HDLM recovery kit. Also, customers wanting to transfer their environment from SCSI devices to HDLM devices must run the Installation setup script after configuring the HDLM environment. Otherwise, sg3_utils will not be installed. |
Adding or Repairing HDLM Paths | When LifeKeeper brings an HDLM resource into service, it establishes a persistent reservation registered to each path that was active at that time. If new paths are added after the initial reservation, or if failed paths are repaired and HDLM automatically reactivates them, those paths will not be registered as part of the reservation until the next LifeKeeper quickCheck execution for the HDLM resource. If HDLM allows any writes to such a path before then, the resulting reservation conflicts will be logged to the system message file. The HDLM driver will retry these I/Os on a registered path, resulting in no observable failures to the application. Once quickCheck registers the path, subsequent writes will succeed. The path status will be changed to “Offline(E)” if quickCheck detects a reservation conflict. If the status is “Offline(E)”, customers must manually change the status back to “Online” using the HDLM online command. |
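A sketch of that manual recovery using the HDLM command-line utility (verify the options against your HDLM version; the path ID below is illustrative):

  # Check path status; a path in reservation conflict shows Offline(E)
  dlnkmgr view -path
  # Bring the repaired path back online by its path ID
  dlnkmgr online -pathid 000001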
Additional settings for RHEL 7.x | If Red Hat Enterprise Linux 7.0 or later is used with the HDLM Recovery Kit, you must add the following setting to /etc/default/LifeKeeper: HDLM_DLMMGR=.dlmmgr_exe |
RHEL7 | |||||||||||
7.0 Security Fix(*24) | 7.1 Security Fix(*25) | 7.2 Security Fix(*26) | 7.3 Security Fix(*30) | 7.4 Security Fix(*32) | 7.5 Security Fix(*33) | 7.6 Security Fix(*35) | 7.7 Security Fix(*36) | 7.8 Security Fix | 7.9 Security Fix(*41) | ||
x86/x86_64 | |||||||||||
HDLM | 8.0.1 | X | X | ||||||||
8.1.0 | X | X | |||||||||
8.1.1 | X | X | |||||||||
8.1.2 | X | X | |||||||||
8.1.3 | X | X | |||||||||
8.1.4 | X | X | |||||||||
8.2.0 | X | X | |||||||||
8.2.1 | X | X | |||||||||
8.4.0 | X | X | X | ||||||||
8.5.0 | X | X | X | ||||||||
8.5.1 | X | X | X | X | X | ||||||
8.5.2 | X | X | X | X | X | ||||||
8.5.3 | X | X | X | X | X | ||||||
8.5.4 | X | X | X | X | X | ||||||
8.6.0 | X | X | X | X | X | ||||||
8.6.1 | X | X | X | X | X | X | |||||
8.6.2 | X | X | X | X | X | X | X(Note: 3) | ||||
8.6.4 | X | X | X | X | X | X | X | ||||
8.6.5 | X | X | X | X | X | X | X | ||||
8.7.0 | X | X | X | X | X | X | X | X | |||
8.7.1 | X | X | X | X | X | X | X | X | |||
8.7.2 | X | X | X | X | X | X | X | X | |||
8.7.3 | X | X | X | X | X | X | X | X | X | X | |
8.7.4 | X | X | X | X | X | X | X | X | X | X | |
8.7.6 | X | X | X | X | X | X | X | X | X | X | |
8.7.7 | X | X | X | X | X | X | X | X | X | X | |
8.7.8 | X | X | X | X | X | X | X | X | X | X | |
8.8.0 | X | X | X | X | X | X | X | X | X | X | |
8.8.1 | X | X | X | X | X | X | X | X | X | X | |
LifeKeeper | v9.0.0 (v9.0.0-6488 or later) | X | X | ||||||||
v9.0.1 (v9.0.1-6492 or later) | X | X | |||||||||
v9.0.2 (v9.0.2-6513 or later) | X | X | X | ||||||||
v9.1.0 (v9.1.0-6538 or later) | X | X | X | ||||||||
v9.1.1 (v9.1.1-6594 or later) | X | X | X | X | |||||||
v9.1.2 (v9.1.2-6609 or later) | X | X | X | X | |||||||
v9.2.0 (v9.2.0-6629 or later) | X | X | X | X | X | ||||||
v9.2.1 (v9.2.1-6653 or later) | X | X | X | X | X | ||||||
v9.2.2 (v9.2.2-6679 or later) | X | X | X | X | X | ||||||
v9.3 (v9.3.0-6738 or later) | X | X | X | X | X | X | |||||
v9.3.1 (v9.3.1-6750 or later) | X | X | X | X | X | X |||||
v9.3.2 (v9.3.2-6863 or later) | X | X | X | X | X | X | X | ||||
v9.4.0 (v9.4.0-6959 or later) | X | X | X | X | X | X | X | X | |||
v9.4.1 (v9.4.1-6983 or later) | X | X | X | X | X | X | X | X | |||
v9.5.0 (v9.5.0-7075 or later) | X | X | X | X | X | X | X | X | X | ||
v9.5.1 (v9.5.1-7154 or later) | X | X | X | X | X | X | X | X | X | X | |
v9.5.2 (v9.5.2-7301 or later) | X | X | X | X | X | X | X | X | X | X | |
HDLM ARK | 9.0.0-6488 | X | X | ||||||||
9.0.1-6492 | X | X | |||||||||
9.0.2-6513 | X | X | X | ||||||||
9.1.0-6538 | X | X | X | ||||||||
9.1.1-6594 | X | X | X | X | |||||||
9.1.2-6609 | X | X | X | X | |||||||
9.2.0-6629 | X | X | X | X | X | ||||||
9.2.1-6653 | X | X | X | X | X | ||||||
9.2.2-6679 | X | X | X | X | X | ||||||
9.3.0-6738 | X | X | X | X | X | X | |||||
9.3.1-6750 | X | X | X | X | X | X | |||||
9.3.2-6863 | X | X | X | X | X | X | X | ||||
9.4.0-6959 | X | X | X | X | X | X | X | X | |||
9.4.1-6983 | X | X | X | X | X | X | X | X | |||
9.5.0-7075 | X | X | X | X | X | X | X | X | X | ||
9.5.1-7154 | X | X | X | X | X | X | X | X | X | X | |
9.5.2-7301 | X | X | X | X | X | X | X | X | X | X | |
X = supported, blank = not supported. |
Note: 1 If you are running the system with LifeKeeper v9.0.x on RHEL7/7.1/7.2, you need to apply the patch for Bug 7205.
Note: 2 The Raw device configuration is not supported on RHEL7/7.1/7.2/7.3/7.4/7.5/7.6/7.7/7.8.
Note: 3 Supported with HDLM 8.6.2-02 or later.
OS version / Architecture | |||||||||||||||||||
RHEL8 | |||||||||||||||||||
8.1 Security Fix(*37) | 8.2 Security Fix(*38)(*39) | 8.3 Security Fix(*40) | |||||||||||||||||
x86/x86_64 | |||||||||||||||||||
HDLM | 8.7.2 | X | |||||||||||||||||
8.7.3 | X | ||||||||||||||||||
8.7.4 | X | X | X | ||||||||||||||||
8.7.6 | X | X | X | ||||||||||||||||
8.7.7 | X | X | X | ||||||||||||||||
8.7.8 | X | X | X | ||||||||||||||||
8.8.0 | X | X | X | ||||||||||||||||
LifeKeeper | v9.5.1 (9.5.1-7154 or later) | X | X | X |||||||||||||||
HDLM ARK | 9.5.1-7154 | X | X | X | |||||||||||||||
X = supported, blank = not supported. |
Note: The Raw device configuration is not supported on RHEL 8.1, RHEL 8.2 and RHEL 8.3.
Device Mapper Multipath I/O Configurations
Protecting Applications and File Systems That Use Device Mapper Multipath Devices | In order for LifeKeeper to operate with and protect applications or file systems that use Device Mapper Multipath devices, the Device Mapper Multipath (DMMP) Recovery Kit must be installed. Once the DMMP Kit is installed, simply creating an application hierarchy that uses one or more of the multipath device nodes will automatically incorporate the new resource types provided by the DMMP Kit. |
Multipath Device Nodes | To use the DMMP Kit, any file systems and raw devices must be mounted or configured on the multipath device nodes rather than on the native /dev/sd* device nodes. The supported multipath device nodes to address the full disk are /dev/dm-#, /dev/mapper/<uuid>, /dev/mapper/<user_friendly_name> and /dev/mpath/<uuid>. To address the partitions of a disk, use the device nodes for each partition created in the /dev/mapper directory. |
Use of SCSI-3 Persistent Reservations | The Device Mapper Multipath Recovery Kit uses SCSI-3 persistent reservations with a “Write Exclusive” reservation type. This means that devices reserved by one node in the cluster will remain read-accessible to other nodes in the cluster, but those other nodes will be unable to write to the device. Note that this does not mean that you can expect to be able to mount file systems on those other nodes for ongoing read-only access. LifeKeeper uses the sg_persist utility to issue and monitor persistent reservations. If necessary, LifeKeeper will install the sg_persist(8) utility. SCSI-3 Persistent Reservations must be enabled on a per LUN basis when using EMC Symmetrix (including VMAX) arrays with multipathing software and LifeKeeper. This applies to both DMMP and PowerPath. |
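For reference, the registrations and reservation on a protected LUN can be inspected with sg_persist (the device path below is illustrative):

  # List registered keys and the active reservation on a multipath device
  sg_persist --in --read-keys /dev/mapper/mpatha
  sg_persist --in --read-reservation /dev/mapper/mpatha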
Hardware Requirements | The Device Mapper Multipath Kit has been tested by SIOS Technology Corp. with the EMC CLARiiON CX300, the HP EVA 8000, HP MSA1500, HP P2000, the IBM SAN Volume Controller (SVC), the IBM DS8100, the IBM DS6800, the IBM ESS, the DataCore SANsymphony and the HDS 9980V. Check with your storage vendor to determine their support for Device Mapper Multipath.
Enabling support for the use of reservations on the CX300 and the VNX Series requires that the hardware handler be notified to honor reservations. Set the following parameter in /etc/multipath.conf for these arrays: hardware_handler “3 emc 0 1”
The HP MSA1500 returns a reservation conflict with the default path checker setting (tur), which causes the standby node to mark all paths as failed. To avoid this condition, set the following parameter in /etc/multipath.conf for this array: path_checker readsector0
The HP 3PAR F400 returns a reservation conflict with the default path checker. To avoid this conflict, add the following parameter to /etc/default/LifeKeeper for this array: DMMP_REGISTRATION_TYPE=hba
For the HDS 9980V, the following settings are required:
• Host mode: 00
• System option: 254 (must be enabled; a global HDS setting affecting all servers)
• Device emulation: OPEN-V
Refer to the HDS documentation “Suse Linux Device Mapper Multipath for HDS Storage” or “Red Hat Linux Device Mapper Multipath for HDS Storage” v1.15 or later for details on configuring DMMP for HDS. This documentation also provides a compatible multipath.conf file.
For EVA storage with firmware version 6 or higher, DMMP Recovery Kit v6.1.2-3 or later is required. Earlier versions of the DMMP Recovery Kit are supported with EVA storage with firmware versions prior to version 6. |
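As an illustration, the CX300/VNX hardware handler setting would typically be applied in a device section of /etc/multipath.conf. The vendor/product match strings below are assumptions for the sketch; take the exact values from EMC's DMMP documentation:

  devices {
      device {
          # Match strings are illustrative assumptions (CLARiiON arrays commonly report vendor DGC)
          vendor "DGC"
          product ".*"
          hardware_handler "3 emc 0 1"
      }
  }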
Multipath Software Requirements | For SUSE, multipath-tools-0.4.5-0.14 or later is required.
Linux Distribution Requirements | Some storage vendors such as IBM have not certified DMMP with SLES 11 at this time.
Transient path failures | While running I/O tests on Device Mapper Multipath devices, it is not uncommon for actions on the SAN, such as a server rebooting, to cause paths to be temporarily reported as failed. In most cases this simply causes one path to fail, leaving other paths available for I/Os, with no observable failures other than a small performance impact. In some cases, multiple paths can be reported as failed, leaving no working paths, which can cause an application such as a file system or database to see I/O errors. Device Mapper Multipath and its vendor support have improved considerably, eliminating most of these failures, but they can still occur at times. To avoid these situations, consider these actions: