Multipath I/O and Redundant Controllers

There are several multipath I/O solutions either already available or currently being developed for the Linux environment. SIOS Technology Corp. is actively working with a number of server vendors, storage vendors, adapter vendors and driver maintainers to enable LifeKeeper to work with their multipath I/O solutions. LifeKeeper’s use of SCSI reservations to protect data integrity presents some special requirements that frequently are not met by the initial implementation of these solutions.

Refer to the technical notes below for supported disk arrays to determine if a given array is supported with multiple paths and with a particular multipath solution. Unless an array is specifically listed as being supported by LifeKeeper with multiple paths and with a particular multipath solution, it must be assumed that it is not.

Heavy I/O in Multipath Configurations

In multipath configurations, performing heavy I/O while paths are being manipulated can cause a system to temporarily appear unresponsive. When the multipath software moves access to a LUN from one path to another, it must also move any outstanding I/Os to the new path. Rerouting the I/Os can delay the response times for those I/Os. If additional I/Os continue to be issued during this time, they are queued in the system and can exhaust the memory available to any process. Under very heavy I/O loads, these delays and low-memory conditions can make the system so unresponsive that LifeKeeper may detect the server as down and initiate a failover.

There are many factors that will affect the frequency at which this issue may be seen.


  • The speed of the processor will affect how fast I/Os can be queued. A faster processor may cause the failure to be seen more frequently.
  • The amount of system memory will affect how many I/Os can be queued before the system becomes unresponsive. A system with more memory may cause the failure to be seen less frequently.
  • The number of LUNs in use will affect the amount of I/O that can be queued.
  • Characteristics of the I/O activity will affect the volume of I/O queued. In the test cases where the problem was seen, the test wrote an unlimited amount of data to the disk. Most applications both read and write data. As reads are blocked waiting on the path failover, writes will also be throttled, decreasing the I/O rate such that the failure may be seen less frequently.

For example, during testing of the IBM DS4000 multipath configuration with RDAC, when the I/O throughput to the DS4000 was greater than 190 MB per second and path failures were simulated, LifeKeeper would (falsely) detect a failed server approximately one time out of twelve. The servers used in this test were IBM x345 servers with dual Xeon 2.8GHz processors and 2 GB of memory connected to a DS4400 with 8 volumes (LUNs) per server in use. To avoid the failovers, the LifeKeeper parameter LCMNUMHBEATS (in /etc/default/LifeKeeper) was increased to 16. The change to this parameter results in LifeKeeper waiting approximately 80 seconds before determining that an unresponsive system is dead, rather than the default wait time of approximately 15 seconds.
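
For reference, this tuning is a single line in /etc/default/LifeKeeper (a sketch; the value 16 reflects the test configuration described above and may need adjusting for other workloads):

LCMNUMHBEATS=16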

Special Considerations for Switchovers with Large Storage Configurations

With some large storage configurations (for example, multiple logical volume groups with 10 or more LUNs in each volume group), LifeKeeper may not be able to complete a sendevent within the default timeout of 300 seconds when a failure is detected. This causes the switchover to the backup system to fail: not all resources are brought in service, and an error message is logged in the LifeKeeper log.

The recommendation for large storage configurations is to change SCSIERROR from “event” to “halt” in the /etc/default/LifeKeeper file. LifeKeeper will then halt the system on a SCSI error and perform a successful failover to the backup system.
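
A sketch of the resulting line in /etc/default/LifeKeeper (check the current value before editing):

SCSIERROR=halt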

HP 3PAR StoreServ 7200 FC

The HP 3PAR StoreServ 7200 was tested by a SIOS Technology Corp. partner with the following configurations:

HP 3PAR StoreServ 7200 (Firmware (HP 3PAR OS) version 3.1.2) using QLogic QMH2572 8Gb FC HBA for HP BladeSystem c-Class (Firmware version 5.06.02 (90d5)), driver version 8.03.07.05.06.2-k (RHEL bundled) with DMMP (device-mapper-1.02.66-6, device-mapper-multipath-0.4.9-46.el6).

The test was performed with SPS for Linux v8.1.1 using RHEL 6.2 (x86_64).

Note: 3PAR StoreServ 7200 returns a reservation conflict with the default path checker. To avoid this conflict, set the following parameter in “/etc/default/LifeKeeper”:

DMMP_REGISTRATION_TYPE=hba

HP 3PAR StoreServ 7400 FC

The HP 3PAR StoreServ 7400 was tested by a SIOS Technology Corp. partner with the following configurations:

HP 3PAR StoreServ 7400 (Firmware (HP 3PAR OS) version 3.1.2) with HP DL380p Gen8 with Emulex LightPulse Fibre Channel SCSI HBA (driver version 8.3.5.45.4p) with DMMP (device-mapper-1.02.66-6, device-mapper-multipath-0.4.9-46.el6).

The test was performed with LifeKeeper for Linux v8.1.1 using RHEL 6.2 (x86_64).

Note: 3PAR StoreServ 7400 returns a reservation conflict with the default path checker. To avoid this conflict, set the following parameter in /etc/default/LifeKeeper:

DMMP_REGISTRATION_TYPE=hba

In addition, user-friendly device names are not supported. Set the following parameter in /etc/multipath.conf:

user_friendly_names no

HP 3PAR StoreServ 7400 iSCSI (multipath configuration using the DMMP Recovery Kit)

The HP 3PAR StoreServ 7400 was tested by a SIOS Technology Corp. partner with the following configurations:

HP 3PAR StoreServ 7400 (Firmware (HP 3PAR OS) version 3.1.3) using the HP Ethernet 10Gb 2-port 560SFP+ Adapter (network driver ixgbe-3.22.0.2) with iSCSI (iscsi-initiator-utils-6.2.0.873-10.el6.x86_64), DMMP (device-mapper-1.02.79-8.el6, device-mapper-multipath-0.4.9-72.el6).

Note: 3PAR StoreServ 7400 iSCSI returns a reservation conflict. To avoid this conflict, set the following parameter in /etc/default/LifeKeeper:

DMMP_REGISTER_IGNORE=TRUE

HP 3PAR StoreServ 7400 iSCSI (using Quorum/Witness Kit)

The HP 3PAR StoreServ 7400 was tested by a SIOS Technology Corp. partner with the following configurations:

iSCSI (iscsi-initiator-utils-6.2.0.872-21.el6.x86_64), DMMP (device-mapper-multipath-0.4.9-41, device-mapper-1.02.62-3), nx_nic v4.0.588.

DMMP with the DMMP Recovery Kit on RHEL 6.1 must be used in combination with the Quorum/Witness Server Kit and STONITH. To disable SCSI reservations, set RESERVATIONS=none in /etc/default/LifeKeeper. The server must have an IPMI 2.0-based interface.

HP 3PAR StoreServ 10800 FC

The HP 3PAR StoreServ 10800 FC was tested by a SIOS Technology Corp. partner with the following configurations:

Firmware (HP 3PAR OS) version 3.1.2 with HP DL380p Gen8 with Emulex LightPulse Fibre Channel HBA (driver version 8.3.5.45.4p) with DMMP (device-mapper-1.02.66-6, device-mapper-multipath-0.4.9-46.el6). The test was performed with SPS for Linux v8.1.2 using RHEL 6.2 (x86_64).

Note: 3PAR StoreServ 10800 FC returns a reservation conflict with the default path checker. To avoid this conflict, set the following parameter in /etc/default/LifeKeeper:

DMMP_REGISTRATION_TYPE=hba

In addition, user-friendly device names are not supported. Set the following parameter in /etc/multipath.conf:

user_friendly_names no

HP MSA1040/2040fc

The HP MSA 2040 Storage FC was tested by a SIOS Technology Corp. partner with the following configurations:

HP MSA 2040 Storage FC (Firmware GL101R002) using HP SN1000Q 16Gb 2P FC HBA QW972A (Firmware version 6.07.02, driver version 8.04.00.12.06.0-k2 (RHEL bundled)) with DMMP (device-mapper-1.02.74-10, device-mapper-multipath-0.4.9-56).

The test was performed with LifeKeeper for Linux v8.1.2 using RHEL 6.3 (x86_64).

HP P9500/XP

Certified by Hewlett-Packard Company using SIOS LifeKeeper for Linux v7.2 or later. The model tested was the HP P9500/XP, which has been qualified to work with LifeKeeper on the following:

  • Red Hat Enterprise Linux for x86 (32-bit) and x64 (64-bit; Opteron and Intel EM64T)
    RHEL 5.3, RHEL 5.4, RHEL 5.5
  • SUSE Linux Enterprise Server for x86 (32-bit) and x64 (64-bit; Opteron and Intel EM64T)
    SLES 10 SP3, SLES 11, SLES 11 SP1
  • Native or inbox clustering solutions RHCS and SLE HA

HP StoreVirtual 4330 iSCSI (multipath configuration using the DMMP Recovery Kit)

The HP StoreVirtual 4330 was tested by a SIOS Technology Corp. partner with the following configurations:

HP StoreVirtual 4330 (Firmware HP LeftHand OS 10.5) using the HP Ethernet 1Gb 4-port 331FLR (network driver tg3-3.125g) with iSCSI (iscsi-initiator-utils-6.2.0.872-41.el6.x86_64), DMMP (device-mapper-1.02.74-10.el6, device-mapper-multipath-0.4.9-56.el6).

StoreVirtual (LeftHand) series OS (SAN/iQ) version 11.00 iSCSI (multipath configuration using the DMMP Recovery Kit)

OS (SAN/iQ) version 11.00 is supported in HP StoreVirtual (LeftHand) storage. All StoreVirtual series are supported, including StoreVirtual VSA as the virtual storage appliance. This storage was tested with the following configurations:

StoreVirtual VSA + RHEL 6.4 (x86_64) + DMMP

HP StoreVirtual 4730 iSCSI (multipath configuration using the DMMP Recovery Kit)

The HP StoreVirtual 4730 was tested by a SIOS Technology Corp. partner with the following configurations:

HP StoreVirtual 4730 (Firmware HP LeftHand OS 11.5) using the HP FlexFabric 10Gb 2-port 536FLB Adapter (network driver bnx2x-1.710.40) with iSCSI (iscsi-initiator-utils-6.2.0.873-10.el6.x86_64), DMMP (device-mapper-1.02.79-8.el6, device-mapper-multipath-0.4.9-72.el6).

HP StoreVirtual LeftHand OS version 11.5 iSCSI (multipath configuration using the DMMP Recovery Kit)

LeftHand OS version 11.5 is supported in HP StoreVirtual (LeftHand) storage. All StoreVirtual series are supported, including StoreVirtual VSA as the virtual storage appliance. This storage was tested with the following configurations:

StoreVirtual 4730 (11.5.00.0673.0) + RHEL 6.5 (x86_64) + DMMP (device-mapper-1.02.79-8.el6.x86_64, device-mapper-multipath-0.4.9-72.el6.x86_64)

HP StoreVirtual 4330 iSCSI (using Quorum/Witness Kit)

The HP StoreVirtual 4330 was tested by a SIOS Technology Corp. partner with the following configurations:

HP StoreVirtual 4330 (Firmware HP LeftHand OS 10.5) using iSCSI (iscsi-initiator-utils-6.2.0.872-41.el6.x86_64), bonding (version 3.6.0), tg3 (version 3.125g)

To disable SCSI reservation, set RESERVATIONS=none in “/etc/default/LifeKeeper”.

IBM SAN Volume Controller (SVC)

Certified by partner testing in a single path configuration. Certified by SIOS Technology Corp. in multipath configurations using the Device Mapper Multipath Recovery Kit.

IBM Storwize V7000 iSCSI

The IBM Storwize V7000 (Firmware Version 6.3.0.1) has been certified by partner testing using iSCSI (iscsi-initiator-utils-6.2.0.872-34.el6.x86_64) with DMMP (device-mapper-1.02.66-6.el6, device-mapper-multipath-0.4.9-46.el6). The test was performed with LifeKeeper for Linux v7.5 using RHEL 6.2.

Restriction: IBM Storwize V7000 must be used in combination with the Quorum/Witness Server Kit and STONITH. Disable SCSI reservation by setting the following in /etc/default/LifeKeeper:

RESERVATIONS=none

IBM Storwize V7000 FC

The IBM Storwize V7000 FC has been certified by partner testing in multipath configurations on Red Hat Enterprise Linux Server Release 6.2 (Santiago), HBA: QLE2562, DMMP: 0.4.9-46.

IBM Storwize V3700 FC

The IBM Storwize V3700 FC has been certified by partner testing in multipath configurations on Red Hat Enterprise Linux Server Release 6.5 (Santiago), HBA: QLE2560, DMMP: 0.4.9-72.

IBM XIV Storage System

Certified by partner testing in multipath configurations only, on Red Hat Enterprise Linux Release 5.6, HBA: NEC N8190-127 single-channel 4Gbps (Emulex LPe1150 equivalent), XIV Host Attachment Kit version 1.7.0.

Note: If you need to create more than 32 LUNs on the IBM XIV Storage System with LifeKeeper, contact your IBM sales representative for details.

Dell EqualLogic PS4000/4100/4110/6000/6010/6100/6110/6500/6510

The Dell EqualLogic was tested by a SIOS Technology Corp. partner with the following configurations: Dell EqualLogic PS4000/4100/4110/6000/6010/6100/6110/6500/6510 using DMMP with the DMMP Recovery Kit on RHEL 5.3 with iscsi-initiator-utils-6.2.0.868-0.18.el5. With a large number of LUNs (over 20), change the REMOTETIMEOUT setting in /etc/default/LifeKeeper to REMOTETIMEOUT=600, as shown below.
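
A sketch of the corresponding line in /etc/default/LifeKeeper:

REMOTETIMEOUT=600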

Fujitsu

ETERNUS DX60 / DX80 / DX90 iSCSI

ETERNUS DX60 S2 / DX80 S2 / DX90 S2 iSCSI

ETERNUS DX410 / DX440 iSCSI

ETERNUS DX410 S2 / DX440 S2 iSCSI

ETERNUS DX8100 / DX8400 / DX8700 iSCSI

ETERNUS DX8100 S2/DX8700 S2 iSCSI

ETERNUS DX100 S3/DX200 S3 iSCSI

ETERNUS DX500 S3/DX600 S3 iSCSI

ETERNUS DX200F iSCSI

ETERNUS DX60 S3 iSCSI

When using the LifeKeeper DMMP ARK for multipath configurations, it is necessary to set the following parameters in /etc/multipath.conf:

prio alua

path_grouping_policy group_by_prio

failback immediate

no_path_retry 10

path_checker tur
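
These parameters normally live in a device stanza of /etc/multipath.conf; the sketch below shows one possible layout. The vendor and product strings are illustrative placeholders rather than values from this document; match them to the actual ETERNUS model:

devices {
    device {
        vendor                "FUJITSU"
        product               "ETERNUS_DX.*"
        prio                  alua
        path_grouping_policy  group_by_prio
        failback              immediate
        no_path_retry         10
        path_checker          tur
    }
}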

Fujitsu

ETERNUS DX60 / DX80 / DX90

  • FC, single path and multipath configurations

ETERNUS DX60 S2 / DX80 S2 / DX90 S2

  • FC, single path and multipath configurations

ETERNUS DX410 / DX440

  • FC, single path and multipath configurations

ETERNUS DX410 S2 / DX440 S2

  • FC, single path and multipath configurations

ETERNUS DX8100 / DX8400 / DX8700

  • FC, single path and multipath configurations

ETERNUS DX8100 S2/DX8700 S2

  • FC, single path and multipath configurations

ETERNUS DX100 S3/DX200 S3/DX500 S3/DX600 S3

  • FC, single path and multipath configurations

ETERNUS DX200F

  • FC, single path and multipath configurations

ETERNUS DX60 S3

  • FC, single path and multipath configurations

ETERNUS DX8700 S3 / DX8900 S3

  • FC, single path and multipath configurations

ETERNUS DX8700 S3 / DX8900 S3

  • iSCSI, single path and multipath configurations

When using the LifeKeeper DMMP ARK for multipath configurations, it is necessary to set the following parameters in /etc/multipath.conf:

prio alua

path_grouping_policy group_by_prio

failback immediate

no_path_retry 10

path_checker tur


When using the ETERNUS Multipath Driver for multipath configurations, no parameters need to be set in any configuration file.

NEC iStorage M10e iSCSI (Multipath configuration using the SPS Recovery Kit)

The NEC iStorage M10e iSCSI was tested by a SIOS Technology Corp. partner with the following configurations:

NEC iStorage M10e iSCSI + 1GbE NIC + iSCSI (iscsi-initiator-utils-6.2.0.873-10.el6.x86_64), SPS (sps-utils-5.3.0-0.el6, sps-driver-E-5.3.0-2.6.32.431.el6)

NEC iStorage Storage Path Savior Multipath I/O

Protecting Applications and File Systems That Use Multipath Devices: In order for SPS to configure and protect applications or file systems that use SPS devices, the SPS recovery kit must be installed.

Once the SPS kit is installed, simply creating an application hierarchy that uses one or more of the multipath device nodes will automatically incorporate the new resource types provided by the SPS kit.

Multipath Device Nodes: To use the SPS kit, any file systems and raw devices must be mounted or configured on the multipath device nodes (/dev/dd*) rather than on the native /dev/sd* device nodes.

Use of SCSI-3 Persistent Reservations: The SPS kit uses SCSI-3 persistent reservations with a “Write Exclusive” reservation type. This means that devices reserved by one node in the cluster will remain read-accessible to other nodes in the cluster, but those other nodes will be unable to write to the device. Note that this does not mean that you can expect to be able to mount file systems on those other nodes for ongoing read-only access.

LifeKeeper uses the sg_persist utility to issue and monitor persistent reservations. If necessary, LifeKeeper will install the sg_persist(8) utility.
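
For reference, the reservation state can be inspected with sg_persist directly; a sketch, where /dev/sdc is an example device node:

sg_persist --in --read-keys /dev/sdc            # list registered reservation keys
sg_persist --in --read-reservation /dev/sdc     # show the active reservation and its type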

Tested Environment: The SPS kit has been tested and certified with the NEC iStorage disk array using Emulex HBAs and the Emulex lpfc driver. This kit is expected to work equally well with other NEC iStorage D, S, and M series arrays supported by SPS.

[Tested Emulex HBAs]

iStorage D-10:
  • LP952
  • LP9802
  • LP1050
  • LP1150

iStorage M100:
  • LPe1150
  • LPe11002
  • LPe1250
  • LPe12002
  • LPe1105
  • LPe1205

Multipath Software Requirements: The SPS kit has been tested with SPS for Linux 3.3.001. There are no known dependencies on the version of the SPS package installed.

Installation Requirements: SPS software must be installed prior to installing the SPS recovery kit.

Adding or Repairing SPS Paths: When LifeKeeper brings an SPS resource into service, it establishes a persistent reservation registered to each path that was active at that time. If new paths are added after the initial reservation, or if failed paths are repaired and SPS automatically reactivates them, those paths will not be registered as a part of the reservation until the next LifeKeeper quickCheck execution for the SPS resource. If SPS allows any writes to that path prior to that point in time, reservation conflicts that occur will be logged to the system message file. The SPS driver will retry these IOs on the registered path resulting in no observable failures to the application. Once quickCheck registers the path, subsequent writes will be successful.

Pure Storage FA-400 Series FC (Multipath configuration using the DMMP Recovery Kit)

Certified by partner testing in a multipath FC configuration using the DMMP Recovery Kit.

QLogic Drivers

For other supported fibre channel arrays with QLogic adapters, use the qla2200 or qla2300 driver, version 6.03.00 or later.

Emulex Drivers

For the supported Emulex fibre channel HBAs, you must use the lpfc driver v8.0.16 or later.

Adaptec 29xx Drivers

For supported SCSI arrays with Adaptec 29xx adapters, use the aic7xxx driver, version 6.2.0 or later, provided with the OS distribution.

HP Multipath I/O Configurations

Multipath Cluster Installation Using Secure Path

For a fresh installation of a multiple path cluster that uses Secure Path, perform these steps:

  1. Install the OS of choice on each server.
  2. Install the clustering hardware: FCA2214 adapters, storage, switches and cables.
  3. Install the HP Platform Kit.
  4. Install the HP Secure Path software. This will require a reboot of the system. Verify that Secure Path has properly configured both paths to the storage. See Secure Path documentation for further details.
  5. Install LifeKeeper.
Secure Path Persistent Device Nodes

Secure Path supports “persistent” device nodes in the form /dev/spdev/spXX, where XX is the device name. These nodes are symbolic links to the specific SCSI device nodes /dev/sdXX. LifeKeeper v4.3.0 or later recognizes these devices as if they were the “normal” SCSI device nodes /dev/sdXX. LifeKeeper maintains its own device name persistence, both across reboots and across cluster nodes, by detecting whether a device is, for example, /dev/sda1 or /dev/sdq1 and then using the correct device node directly.

Note: Support for symbolic links to SCSI device nodes was added in LifeKeeper v4.3.0.
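
For illustration, a persistent node can be examined with ls; the device name below is hypothetical:

ls -l /dev/spdev/spa    # a symbolic link pointing to a specific SCSI node such as /dev/sda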

Active/Passive Controllers and Controller Switchovers

The MSA1000 implements multipathing by having one controller active while the other controller is in standby mode. When there is a problem with either the active controller or the path to the active controller, the standby controller is activated to take over operations. When a controller is activated, it takes some time to become ready; depending on the number of LUNs configured on the array, this can take 30 to 90 seconds. During this time, IOs to the storage will be blocked until they can be rerouted to the newly activated controller.

Single Path on Boot Up Does Not Cause Notification

If a server can access only a single path to the storage when the system is loaded, there will be no notification of this problem. This can happen if a system is rebooted while there is a physical path failure as noted above, but transient path failures have also been observed. Any time a system is loaded, the administrator should verify that all paths to the storage are properly configured and, if not, take action to either repair any hardware problems or reload the system to resolve a transient problem.

Hitachi HDLM Multipath I/O Configurations

Protecting Applications and File Systems That Use Multipath Devices

In order for LifeKeeper to configure and protect applications or file systems that use HDLM devices, the HDLM Recovery Kit must be installed.

Once the HDLM Kit is installed, simply creating an application hierarchy that uses one or more of the multipath device nodes will automatically incorporate the new resource types provided by the HDLM Kit.

Multipath Device Nodes

To use the HDLM Kit, any file systems and raw devices must be mounted or configured on the multipath device nodes (/dev/sddlm*) rather than on the native /dev/sd* device nodes.

Use of SCSI-3 Persistent Reservations

The HDLM Kit uses SCSI-3 persistent reservations with a “Write Exclusive” reservation type. This means that devices reserved by one node in the cluster will remain read-accessible to other nodes in the cluster, but those other nodes will be unable to write to the device. Note that this does not mean that you can expect to be able to mount file systems on those other nodes for ongoing read-only access.

LifeKeeper uses the sg_persist utility to issue and monitor persistent reservations. If necessary, LifeKeeper will install the sg_persist(8) utility.

Hardware Requirements

The HDLM Kit has been tested and certified with the Hitachi SANRISE AMS1000 disk array using QLogic qla2432 HBAs with the 8.02.00-k5-rhel5.2-04 driver and a Silkworm 3800 FC switch. This kit is expected to work equally well with other Hitachi disk arrays. The HDLM Kit has also been certified with the SANRISE AMS series, SANRISE USP and Hitachi VSP. The HBA and the HBA driver must be supported by HDLM.

The BR1200 is certified by Hitachi Data Systems. Both single path and multipath configurations require the RDAC driver; only the BR1200 configuration using the RDAC driver is supported, and the BR1200 configuration using HDLM (HDLM ARK) is not.

Multipath Software Requirements

The HDLM kit has been tested with HDLM for Linux as follows:

05-80, 05-81, 05-90, 05-91, 05-92, 05-93, 05-94, 6.0.0, 6.0.1, 6.1.0, 6.1.1, 6.1.2, 6.2.0, 6.2.1, 6.3.0, 6.4.0, 6.4.1, 6.5.0, 6.5.1, 6.5.2, 6.6.0, 6.6.2, 7.2.0, 7.2.1, 7.3.0, 7.3.1, 7.4.0, 7.4.1, 7.5.0, 7.6.0, 7.6.1, 8.0.0, 8.0.1, 8.1.0, 8.1.1, 8.1.2, 8.1.3, 8.1.4, 8.2.0, 8.2.1, 8.4.0, 8.5.0, 8.5.1, 8.5.2, 8.5.3, 8.6.0, 8.6.1, 8.6.2

There are no known dependencies on the version of the HDLM package installed.

Note: The product name changed to “Hitachi Dynamic Link Manager Software (HDLM)” for HDLM 6.0.0 and later. Versions older than 6.0.0 (05-9x) are named “Hitachi HiCommand Dynamic Link Manager (HDLM)”.

Note: HDLM version 6.2.1 or later is not supported by HDLM Recovery Kit v6.4.0-2. If you need to use one of these HDLM versions, use HDLM Recovery Kit v7.2.0-1 or later with LifeKeeper Core v7.3 or later.

Note: If using LVM with HDLM, an LVM version supported by HDLM is required. Also, a filter setting should be added to /etc/lvm/lvm.conf to ensure that the system does not detect the /dev/sd* devices corresponding to the /dev/sddlm* devices, as sketched below. For more information, see “LVM Configuration” in the HDLM manual.
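
A sketch of such a filter in /etc/lvm/lvm.conf, accepting HDLM devices and rejecting the underlying /dev/sd* nodes (adapt to the actual device layout; the HDLM manual remains authoritative):

filter = [ "a|^/dev/sddlm|", "r|^/dev/sd|" ]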

Linux Distribution Requirements

HDLM is supported in the following distributions:

RHEL 4 (AS/ES) (x86 or x86_64) Update 1, 2, 3, 4, Update 4 Security Fix (*2), 4.5, 4.5 Security Fix (*4), 4.6, 4.6 Security Fix (*8), 4.7, 4.7 Security Fix (*9), 4.8, 4.8 Security Fix (*12) (x86/x86_64) (*1)

RHEL 5, 5.1, 5.1 Security Fix (*5), 5.2, 5.2 Security Fix (*6), 5.3, 5.3 Security Fix (*10), 5.4, 5.4 Security Fix (*11), 5.5, 5.5 Security Fix (*13), 5.6, 5.6 Security Fix (*14), 5.7 (x86/x86_64) (*1), 5.8 (x86/x86_64) (*1)

RHEL 6, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 (x86/x86_64) (*1)(*15)

RHEL 7, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6 (x86_64) (*1)

(*1) AMD Opteron (Single Core, Dual Core) or Intel EM64T architecture CPU with x86_64 kernel.

(*2) The following kernels are supported
x86:2.6.9-42.0.3.EL,2.6.9-42.0.3.ELsmp,2.6.9-42.0.3.ELhugemem
x86_64:2.6.9-42.0.3.EL,2.6.9-42.0.3.ELsmp,2.6.9-42.0.3.ELlargesmp

(*3) Hitachi does not support the RHEL 4 U2 environment

(*4) The following kernels are supported
x86:2.6.9-55.0.12.EL,2.6.9-55.0.12.ELsmp,2.6.9-55.0.12.ELhugemem
x86_64:2.6.9-55.0.12.EL,2.6.9-55.0.12.ELsmp,2.6.9-55.0.12.ELlargesmp

(*5) The following kernels are supported
x86:2.6.18-53.1.13.el5,2.6.18-53.1.13.el5PAE,2.6.18-53.1.21.el5,2.6.18-53.1.21.el5PAE
x86_64:2.6.18-53.1.13.el5,2.6.18-53.1.21.el5

(*6) The following kernels are supported
x86:2.6.18-92.1.6.el5,2.6.18-92.1.6.el5PAE,2.6.18-92.1.13.el5,2.6.18-92.1.13.el5PAE,2.6.18-92.1.22.el5,2.6.18-92.1.22.el5PAE
x86_64:2.6.18-92.1.6.el5,2.6.18-92.1.13.el5,2.6.18-92.1.22.el5

(*7) The following kernels are supported
x86:2.6.9-34.0.2.EL,2.6.9-34.0.2.ELsmp,2.6.9-34.0.2.ELhugemem
x86_64:2.6.9-34.0.2.EL,2.6.9-34.0.2.ELsmp,2.6.9-34.0.2.ELlargesmp

(*8) The following kernels are supported
x86:2.6.9-67.0.7.EL,2.6.9-67.0.7.ELsmp,2.6.9-67.0.7.ELhugemem,2.6.9-67.0.22.EL,2.6.9-67.0.22.ELsmp,2.6.9-67.0.22.ELhugemem
x86_64:2.6.9-67.0.7.EL,2.6.9-67.0.7.ELsmp,2.6.9-67.0.7.ELlargesmp,2.6.9-67.0.22.EL,2.6.9-67.0.22.ELsmp,2.6.9-67.0.22.ELlargesmp

(*9) The following kernels are supported
x86:2.6.9-78.0.1.EL,2.6.9-78.0.1.ELsmp,2.6.9-78.0.1.ELhugemem,2.6.9-78.0.5.EL,2.6.9-78.0.5.ELsmp,2.6.9-78.0.5.ELhugemem,2.6.9-78.0.8.EL,2.6.9-78.0.8.ELsmp,2.6.9-78.0.8.ELhugemem,2.6.9-78.0.17.EL,2.6.9-78.0.17.ELsmp,2.6.9-78.0.17.ELhugemem,2.6.9-78.0.22.EL,2.6.9-78.0.22.ELsmp,2.6.9-78.0.22.ELhugemem
x86_64:2.6.9-78.0.1.EL,2.6.9-78.0.1.ELsmp,2.6.9-78.0.1.ELlargesmp, 2.6.9-78.0.5.EL,2.6.9-78.0.5.ELsmp,2.6.9-78.0.5.ELlargesmp,2.6.9-78.0.8.EL,2.6.9-78.0.8.ELsmp,2.6.9-78.0.8.ELlargesmp,2.6.9-78.0.17.EL,2.6.9-78.0.17.ELsmp,2.6.9-78.0.17.ELlargesmp,2.6.9-78.0.22.EL,2.6.9-78.0.22.ELsmp,2.6.9-78.0.22.ELlargesmp

(*10) The following kernels are supported
x86:2.6.18-128.1.10.el5,2.6.18-128.1.10.el5PAE,2.6.18-128.1.14.el5, 2.6.18-128.1.14.el5PAE,2.6.18-128.7.1.el5,2.6.18-128.7.1.el5PAE
x86_64:2.6.18-128.1.10.el5,2.6.18-128.1.14.el5

(*11) The following kernels are supported
x86:2.6.18-164.9.1.el5,2.6.18-164.9.1.el5PAE,2.6.18-164.11.1.el5,2.6.18-164.11.1.el5PAE
x86_64:2.6.18-164.9.1.el5,2.6.18-164.11.1.el5

(*12) The following kernels are supported
x86:2.6.9-89.0.20.EL,2.6.9-89.0.20.ELsmp,2.6.9-89.0.20.ELhugemem
x86_64:2.6.9-89.0.20.EL,2.6.9-89.0.20.ELsmp,2.6.9-89.0.20.ELlargesmp

(*13) The following kernels are supported
x86:2.6.18-194.11.1.el5, 2.6.18-194.11.1.el5PAE, 2.6.18-194.11.3.el5, 2.6.18-194.11.3.el5PAE, 2.6.18-194.17.1.el5, 2.6.18-194.17.1.el5PAE, 2.6.18-194.32.1.el5, 2.6.18-194.32.1.el5PAE
x86_64:2.6.18-194.11.1.el5, 2.6.18-194.11.3.el5, 2.6.18-194.17.1.el5, 2.6.18-194.32.1.el5

(*14) The following kernels are supported
x86:2.6.18-238.1.1.el5,2.6.18-238.1.1.el5PAE,2.6.18-238.9.1.el5,2.6.18-238.9.1.el5PAE,2.6.18-238.19.1.el5,2.6.18-238.19.1.el5PAE
x86_64:2.6.18-238.1.1.el5,2.6.18-238.9.1.el5,2.6.18-238.19.1.el5

(*15) The following kernels are supported
x86:2.6.32-71.el6.i686, 2.6.32-131.0.15.el6.i686, 2.6.32-220.el6.i686,2.6.32-279.el6.i686
x86_64:2.6.32-71.el6.x86_64, 2.6.32-131.0.15.el6.x86_64, 2.6.32-220.el6.x86_64,2.6.32-279.el6.x86_64

(*16) The following kernels are supported
x86:2.6.18-274.12.1.el5,2.6.18-274.12.1.el5PAE,2.6.18-274.18.1.el5,2.6.18-274.18.1.el5PAE
x86_64:2.6.18-274.12.1.el5,2.6.18-274.18.1.el5

(*17) The following kernels are supported
x86:2.6.18-308.8.2.el5,2.6.18-308.8.2.el5PAE
x86_64:2.6.18-308.8.2.el5

(*18) The following kernels are supported
x86:2.6.32-220.4.2.el6.i686, 2.6.32-220.17.1.el6.i686, 2.6.32-220.23.1.el6.i686, 2.6.32-220.31.1.el6.i686, 2.6.32-220.45.1.el6.i686, 2.6.32-220.77.1.el6.x86_64
x86_64:2.6.32-220.4.2.el6.x86_64, 2.6.32-220.17.1.el6.x86_64, 2.6.32-220.23.1.el6.x86_64, 2.6.32-220.31.1.el6.x86_64, 2.6.32-220.45.1.el6.x86_64, 2.6.32-220.48.1.el6.x86_64,2.6.32-220.64.1.el6.x86_64,2.6.32-220.65.1.el6.x86_64, 2.6.32-220.72.2.el6.x86_64,2.6.32-220.73.1.el6.x86_64,2.6.32-220.75.1.el6.x86_64, 2.6.32-220.77.1.el6.x86_64

(*19) The following kernels are supported
x86:2.6.32-279.19.1.el6.i686
x86_64:2.6.32-279.19.1.el6.x86_64

(*20) The following kernels are supported
x86:2.6.32-358.6.2.el6.i686, 2.6.32-358.11.1.el6.i686, 2.6.32-358.14.1.el6.i686, 2.6.32-358.23.2.el6.i686
x86_64:2.6.32-358.6.2.el6.x86_64, 2.6.32-358.11.1.el6.x86_64, 2.6.32-358.14.1.el6.x86_64, 2.6.32-358.23.2.el6.x86_64, 2.6.32-358.28.1.el6.x86_64, 2.6.32-358.87.1.el6.x86_64

(*21) The following kernels are supported
x86:2.6.32-431.1.2.el6.i686, 2.6.32-431.3.1.el6.i686, 2.6.32-431.5.1.el6.i686, 2.6.32-431.17.1.el6.i686, 2.6.32-431.20.3.el6.i686, 2.6.32-431.23.3.el6.i686, 2.6.32-431.29.2.el6.i686, 2.6.32-431.72.1.el6.i686,2.6.32-431.77.1.el6.i686
x86_64:2.6.32-431.1.2.el6.x86_64, 2.6.32-431.3.1.el6.x86_64, 2.6.32-431.5.1.el6.x86_64, 2.6.32-431.17.1.el6.x86_64, 2.6.32-431.20.3.el6.x86_64, 2.6.32-431.23.3.el6.x86_64, 2.6.32-431.29.2.el6.x86_64, 2.6.32-431.72.1.el6.x86_64, 2.6.32-431.77.1.el6.x86_64, 2.6.32-431.87.1.el6.x86_64

(*22) The following kernels are supported
x86:2.6.32-504.3.3.el6.i686,2.6.32-504.12.2.el6.i686,2.6.32-504.30.3.el6.i686
x86_64:2.6.32-504.3.3.el6.x86_64,2.6.32-504.12.2.el6.x86_64,2.6.32-504.16.2.el6.x86_64,2.6.32-504.30.3.el6.x86_64,2.6.32-504.40.1.el6.x86_64,2.6.32-504.43.1.el6.x86_64, 2.6.32-504.66.1.el6.x86_64

(*23) The following kernels are supported
x86:2.6.18-348.1.1.el5, 2.6.18-348.1.1.el5PAE, 2.6.18-348.6.1.el5, 2.6.18-348.6.1.el5PAE, 2.6.18-348.18.1.el5, 2.6.18-348.18.1.el5PAE
x86_64:2.6.18-348.1.1.el5, 2.6.18-348.6.1.el5, 2.6.18-348.18.1.el5

(*24) The following kernels are supported
x86_64:3.10.0-123.13.2.el7.x86_64, 3.10.0-123.20.1.el7.x86_64

(*25) The following kernels are supported
x86_64:3.10.0-229.4.2.el7.x86_64,3.10.0-229.20.1.el7.x86_64, 3.10.0-229.34.1.el7.x86_64

(*26) The following kernels are supported
x86_64:3.10.0-327.4.4.el7.x86_64, 3.10.0-327.4.5.el7.x86_64, 3.10.0-327.10.1.el7.x86_64, 3.10.0-327.18.2.el7.x86_64,
3.10.0-327.22.2.el7.x86_64, 3.10.0-327.36.1.el7.x86_64, 3.10.0-327.36.3.el7.x86_64, 3.10.0-327.44.2.el7.x86_64, 3.10.0-327.46.1.el7.x86_64, 3.10.0-327.49.2.el7.x86_64,3.10.0-327.55.2.el7.x86_64,3.10.0-327.55.3.el7.x86_64,3.10.0-327.58.1.el7.x86_64, 3.10.0-327.62.1.el7.x86_64,3.10.0-327.62.4.el7.x86_64, 3.10.0-327.64.1.el7.x86_64

(*27) The following kernels are supported
x86: 2.6.32-573.8.1.el6.i686, 2.6.32-573.12.1.el6.i686, 2.6.32-573.18.1.el6.i686, 2.6.32-573.53.1.el6.i686
x86_64: 2.6.32-573.8.1.el6.x86_64,2.6.32-573.12.1.el6.x86_64, 2.6.32-573.18.1.el6.x86_64, 2.6.32-573.53.1.el6.x86_64

(*28) The following kernels are supported
x86:2.6.32-642.1.1.el6.i686, 2.6.32-642.6.2.el6.i686, 2.6.32-642.13.1.el6.i686
x86_64:2.6.32-642.1.1.el6.x86_64,2.6.32-642.6.1.el6.x86_64,2.6.32-642.6.2.el6.x86_64,2.6.32-642.13.1.el6.x86_64,2.6.32-642.15.1.el6.x86_64

(*29) The following kernels are supported
x86:2.6.18-416.el5,2.6.18-416.el5PAE, 2.6.18-419.el5,2.6.18-419.el5PAE, 2.6.18-426.el5,2.6.18-426.el5PAE
x86_64:2.6.18-416.el5, 2.6.18-419.el5, 2.6.18-426.el5

(*30) The following kernels are supported
x86_64:3.10.0-514.6.1.el7.x86_64,3.10.0-514.10.2.el7.x86_64,3.10.0-514.16.1.el7.x86_64,3.10.0-514.21.1.el7.x86_64,3.10.0-514.26.2.el7.x86_64,3.10.0-514.36.5.el7.x86_64,3.10.0-514.44.1.el7.x86_64, 3.10.0-514.51.1.el7.x86_64

(*31) The following kernels are supported
x86:2.6.32-696.3.2.el6.i686,2.6.32-696.6.3.el6.i686,2.6.32-696.10.3.el6.i686,2.6.32-696.18.7.el6.i686,2.6.32-696.20.1.el6.i686,2.6.32-696.23.1.el6.i686
x86_64:2.6.32-696.3.2.el6.x86_64,2.6.32-696.10.3.el6.x86_64,2.6.32-696.18.7.el6.x86_64,2.6.32-696.20.1.el6.x86_64,2.6.32-696.23.1.el6.x86_64

(*32) The following kernels are supported
x86_64:3.10.0-693.1.1.el7.x86_64, 3.10.0-693.5.2.el7.x86_64, 3.10.0-693.11.1.el7.x86_64,3.10.0-693.11.6.el7.x86_64, 3.10.0-693.21.1.el7.x86_64, 3.10.0-693.43.1.el7.x86_64

(*33) The following kernels are supported
x86_64:3.10.0-862.3.2.el7.x86_64, 3.10.0-862.14.4.el7.x86_64

(*34) The following kernels are supported
x86:2.6.32-754.3.5.el6.i686,2.6.32-754.15.3.el6.i686
x86_64:2.6.32-754.3.5.el6.x86_64,2.6.32-754.15.3.el6.x86_64

(*35) The following kernels are supported
x86_64:3.10.0-957.10.1.el7.x86_64,3.10.0-957.12.2.el7.x86_64

Installation Requirements

HDLM software must be installed prior to installing the HDLM recovery kit. Also, customers wanting to transfer their environment from SCSI devices to HDLM devices must run the Installation setup script after configuring the HDLM environment. Otherwise, sg3_utils will not be installed.

Adding or Repairing HDLM Paths

When LifeKeeper brings an HDLM resource into service, it establishes a persistent reservation registered to each path that was active at that time. If new paths are added after the initial reservation, or if failed paths are repaired and HDLM automatically reactivates them, those paths will not be registered as part of the reservation until the next LifeKeeper quickCheck execution for the HDLM resource. If HDLM allows any writes to such a path before then, the resulting reservation conflicts will be logged to the system message file. The HDLM driver will retry these IOs on the registered path, resulting in no observable failures to the application. Once quickCheck registers the path, subsequent writes will succeed. The path status will be changed to “Offline(E)” if quickCheck detects a reservation conflict; if the status is “Offline(E)”, customers will need to manually change the status to “Online” using the HDLM online command.

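For reference, a sketch of the HDLM CLI steps involved (the path ID shown is an example; use the actual PathID reported by the view operation):

dlnkmgr view -path                # check path status; identify the PathID of the Offline(E) path
dlnkmgr online -pathid 000001     # bring that path back online
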
Additional Settings for RHEL 7.x

If Red Hat Enterprise Linux 7.0 or later is used with the HDLM Recovery Kit, you must add the following in /etc/default/LifeKeeper:

HDLM_DLMMGR=.dlmmgr_exe




RHEL 7 (x86/x86_64). Each column covers the release plus its Security Fix kernels; see the footnote indicated.

                                          7.0     7.1     7.2     7.3     7.4     7.5     7.6
                                          (*24)   (*25)   (*26)   (*30)   (*32)   (*33)   (*35)

HDLM
  8.0.1                                   X       X
  8.1.0                                   X       X
  8.1.1                                   X       X
  8.1.2                                   X       X
  8.1.3                                   X       X
  8.1.4                                   X       X
  8.2.0                                   X       X
  8.2.1                                   X       X
  8.4.0                                   X       X       X
  8.5.0                                   X       X       X
  8.5.1                                   X       X       X       X       X
  8.5.2                                   X       X       X       X       X
  8.5.3                                   X       X       X       X       X
  8.5.4                                   X       X       X       X       X
  8.6.0                                   X       X       X       X       X
  8.6.1                                   X       X       X       X       X       X
  8.6.2 (Note 3)                          X       X       X       X       X       X       X
  8.6.4                                   X       X       X       X       X       X       X
  8.6.5                                   X       X       X       X       X       X       X

LifeKeeper
  v9.0.0 (v9.0.0-6488 or later)           X       X
  v9.0.1 (v9.0.1-6492 or later)           X       X
  v9.0.2 (v9.0.2-6513 or later)           X       X       X
  v9.1.0 (v9.1.0-6538 or later)           X       X       X
  v9.1.1 (v9.1.1-6594 or later)           X       X       X       X
  v9.1.2 (v9.1.2-6609 or later)           X       X       X       X
  v9.2.0 (v9.2.0-6629 or later)           X       X       X       X       X
  v9.2.1 (v9.2.1-6653 or later)           X       X       X       X       X
  v9.2.2 (v9.2.2-6679 or later)           X       X       X       X       X
  v9.3 (v9.3.0-6738 or later)             X       X       X       X       X       X
  v9.3.1 (v9.3.1-6750 or later)           X       X       X       X       X       X       X
  v9.3.2 (v9.3.2-6863 or later)           X       X       X       X       X       X       X

HDLM ARK
  9.0.0-6488                              X       X
  9.0.1-6492                              X       X
  9.0.2-6513                              X       X       X
  9.1.0-6538                              X       X       X
  9.1.1-6594                              X       X       X       X
  9.1.2-6609                              X       X       X       X
  9.2.0-6629                              X       X       X       X       X
  9.2.1-6653                              X       X       X       X       X
  9.2.2-6679                              X       X       X       X       X
  9.3.0-6738                              X       X       X       X       X       X
  9.3.1-6750                              X       X       X       X       X       X
  9.3.2-6863                              X       X       X       X       X       X       X

X = supported, blank = not supported

Note 1: If you are running the system with LifeKeeper v9.0.x on RHEL 7.x, you need to apply the patch for Bug 7205.

Note 2: The raw device configuration is not supported on RHEL 7/7.1/7.2/7.3/7.4.

Note 3: Supported with HDLM 8.6.2-02 or later.

Device Mapper Multipath I/O Configurations

Protecting Applications and File Systems That Use Device Mapper Multipath Devices

In order for LifeKeeper to operate with and protect applications or file systems that use Device Mapper Multipath devices, the Device Mapper Multipath (DMMP) Recovery Kit must be installed.

Once the DMMP Kit is installed, simply creating an application hierarchy that uses one or more of the multipath device nodes will automatically incorporate the new resource types provided by the DMMP Kit.

Multipath Device Nodes

To use the DMMP Kit, any file systems and raw devices must be mounted or configured on the multipath device nodes rather than on the native /dev/sd* device nodes. The supported multipath device nodes to address the full disk are /dev/dm-#, /dev/mapper/<uuid>, /dev/mapper/<user_friendly_name> and /dev/mpath/<uuid>. To address the partitions of a disk, use the device nodes for each partition created in the /dev/mapper directory.

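For example, with a multipath device whose user-friendly name is mpatha (a hypothetical name), the full disk and a partition would be mounted as follows; the partition node name (mpathap1 vs. mpatha1) varies with the distribution and DMMP version:

mount /dev/mapper/mpatha /mnt/data      # file system on the full disk
mount /dev/mapper/mpathap1 /mnt/data    # file system on the first partition
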
Use of SCSI-3 Persistent Reservations

The Device Mapper Multipath Recovery Kit uses SCSI-3 persistent reservations with a “Write Exclusive” reservation type.  This means that devices reserved by one node in the cluster will remain read-accessible to other nodes in the cluster, but those other nodes will be unable to write to the device.  Note that this does not mean that you can expect to be able to mount file systems on those other nodes for ongoing read-only access.

LifeKeeper uses the sg_persist utility to issue and monitor persistent reservations.  If necessary, LifeKeeper will install the sg_persist(8) utility.

SCSI-3 Persistent Reservations must be enabled on a per LUN basis when using EMC Symmetrix (including VMAX) arrays with multipathing software and LifeKeeper. This applies to both DMMP and PowerPath.

Hardware Requirements

The Device Mapper Multipath Kit has been tested by SIOS Technology Corp. with the EMC CLARiiON CX300, the HP EVA 8000, HP MSA1500, HP P2000, the IBM SAN Volume Controller (SVC), the IBM DS8100, the IBM DS6800, the IBM ESS, the DataCore SANsymphony, and the HDS 9980V. Check with your storage vendor to determine their support for Device Mapper Multipath.

Enabling support for the use of reservations on the CX300 and the VNX Series requires that the hardware handler be notified to honor reservations. Set the following parameter in /etc/multipath.conf for this array:

hardware_handler    "3 emc 0 1"
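
A sketch of where this parameter sits in /etc/multipath.conf; "DGC" is the SCSI vendor string reported by CLARiiON/VNX arrays, and the product pattern here is a placeholder:

devices {
    device {
        vendor            "DGC"
        product           ".*"
        hardware_handler  "3 emc 0 1"
    }
}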

The HP MSA1500 returns a reservation conflict with the default path checker setting (tur).  This will cause the standby node to mark all paths as failed.  To avoid this condition, set the following parameter in /etc/multipath.conf for this array:

path_checker    readsector0

The HP 3PAR F400 returns a reservation conflict with the default path checker. To avoid this conflict, set (add) the following parameter in /etc/default/LifeKeeper for this array:

DMMP_REGISTRATION_TYPE=hba

For the HDS 9980V, the following settings are required:

  • Host mode: 00
  • System option: 254 (must be enabled; a global HDS setting affecting all servers)
  • Device emulation: OPEN-V

Refer to the HDS documentation “Suse Linux Device Mapper Multipath for HDS Storage” or “Red Hat Linux Device Mapper Multipath for HDS Storage” v1.15 or later for details on configuring DMMP for HDS. This documentation also provides a compatible multipath.conf file.

For the EVA storage with firmware version 6 or higher, DMMP Recovery Kit v6.1.2-3 or later is required.  Earlier versions of the DMMP Recovery Kit are supported with the EVA storage with firmware versions prior to version 6.

Multipath Software Requirements

For SUSE, multipath-tools-0.4.5-0.14 or later is required.

For Red Hat, device-mapper-multipath-0.4.5-12.0.RHEL4 or later is required.

It is advised to run the latest set of multipath tools available from the vendor.  The feature content and the stability of this multipath product are improving at a very fast rate.

Linux Distribution Requirements

Some storage vendors such as IBM have not certified DMMP with SLES 11 at this time.

SIOS Technology Corp. is currently investigating reported issues with DMMP, SLES 11, and EMC’s CLARiiON and Symmetrix arrays.

Transient path failures

While running IO tests on Device Mapper Multipath devices, it is not uncommon for actions on the SAN, for example a server rebooting, to cause paths to be temporarily reported as failed. In most cases this will simply cause one path to fail, leaving other paths to carry the IOs, resulting in no observable failures other than a small performance impact. In some cases, multiple paths can be reported as failed, leaving no paths working. This can cause an application, such as a file system or database, to see IO errors. There has been much improvement in Device Mapper Multipath and in vendor support to eliminate these failures; however, at times they can still be seen. To avoid these situations, consider these actions:


  1. Verify that the multipath configuration is set correctly per the instructions of the disk array vendor.
  2. Check the setting of the “failback” feature.  This feature determines how quickly a path is reactivated after failing and being repaired.  A setting of “immediate” indicates to resume use of a path as soon as it comes back online.  A setting of an integer indicates the number of seconds after a path comes back online to resume using it.  A setting of 10 to 15 generally provides sufficient settle time to avoid thrashing on the SAN.
  3. Check the setting of the “no_path_retry” feature. This feature determines what Device Mapper Multipath should do if all paths fail. We recommend a setting of 10 to 15. This allows some ability to “ride out” temporary events where all paths fail while still providing a reasonable recovery time. The LifeKeeper DMMP kit will monitor IOs to the storage, and if they are not responded to within four minutes, LifeKeeper will switch the resources to the standby server. NOTE: LifeKeeper does not recommend setting “no_path_retry” to “queue” since this will result in IOs that are not easily killed. The only mechanism found to kill them is, on newer versions of DM, to change the settings of the device:

/sbin/dmsetup message -u 'DMid' 0 fail_if_no_path

This will temporarily change the setting for no_path_retry to fail, causing any outstanding IOs to fail. However, multipathd can reset no_path_retry to the default at times. When the setting is changed to fail_if_no_path to flush failed IOs, it should be reset to its default prior to accessing the device (manually or via LifeKeeper).


If “no_path_retry” is set to “queue” and a failure occurs, LifeKeeper will switch the resources over to the standby server. However, LifeKeeper will not kill these IOs. The recommended method to clear them is a reboot, but they can also be cleared by an administrator using the dmsetup command above. If the IOs are not cleared, data corruption can occur if/when the resources are taken out of service on the other server, thereby releasing the locks and allowing the “old” IOs to be issued.
