The following notes and restrictions apply to this version of the Software RAID Recovery Kit.
Activating Virtual Devices During Boot Up
Virtual devices on shared storage should not be activated during system boot-up.
All virtual devices must be configured with a persistent superblock. The superblock is 4K long and is written in a 64K-aligned block that starts at least 64K and less than 128K from the end of the device. This space must be accounted for when planning the size of your virtual device, as it is not usable by an application. Note: MD can now be configured with a bitmap using the “internal” feature. This creates the bitmap in the already required superblock; therefore, no additional space, LUN, or file system is needed. The bitmap will not show up in the hierarchy, but will simply be used automatically. See the mdadm(8) and md(4) manual pages referenced in the Documentation and References section for further details.
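For example, a mirror with an internal write-intent bitmap could be created as follows (the device names here are illustrative only and must be adjusted for your configuration):

```shell
# Create a RAID 1 virtual device with the bitmap stored "internal" to
# the existing persistent superblock, so no additional space is needed.
# /dev/md5, /dev/sde1, and /dev/sdf1 are example names.
mdadm --create /dev/md5 --level=1 --raid-devices=2 \
      --bitmap=internal /dev/sde1 /dev/sdf1
```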
The HOMEHOST feature in newer versions of mdadm is not supported by LifeKeeper. If a mirror is configured with HOMEHOST set, LifeKeeper will fail during resource creation.
As shown in Figure 3, the following messages will be displayed:
“The MD device “/dev/md5” is configured with the unsupported “homehost” setting.”
“Recreate the MD device without homehost set.”
Figure 3: Create File System Hierarchy Failure
Recreating the MD Device Without the Homehost Set
In order to recreate the MD device, the “--homehost=''” setting will need to be used:

mdadm --create /dev/md5 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1 --homehost=''
RAID Level Support
The supported RAID levels are linear, RAID 1 (mirroring) and RAID 10 (striped mirror).
Spare components are supported as an element of a specific virtual device. A “spare-group” is not supported.
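A spare component could be configured at creation time as in the following sketch (device names are examples only; note that the spare belongs to this one virtual device, since a shared “spare-group” is not supported):

```shell
# Create a RAID 1 virtual device with one spare component (/dev/sdg1)
# that is dedicated to this specific virtual device.
mdadm --create /dev/md6 --level=1 --raid-devices=2 \
      --spare-devices=1 /dev/sde1 /dev/sdf1 /dev/sdg1
```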
Support for Raw I/O and Entire Disks
While Figure 1 shows a virtual device residing below a file system, it is important to note that the Software RAID Recovery Kit can support raw access to a virtual device when used in conjunction with the LifeKeeper Raw I/O Recovery Kit, and can manage virtual devices that are composed of one or more entire disks (e.g. /dev/sdc) rather than disk partitions (e.g. /dev/sdc1).
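As a sketch, a mirror built from entire disks rather than partitions would look like the following (disk names are illustrative):

```shell
# Create a RAID 1 virtual device from whole disks instead of partitions.
mdadm --create /dev/md7 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
```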
Partitioning Virtual Devices
Linux software RAID does not support direct partitioning of a virtual device. There have been several attempts by individuals to add support for partitioning, but the maintainers of the md driver have not accepted this. In place of direct partitioning, the software RAID HowTo referenced in the Documentation and References section above recommends using LVM. Figure 6 shows a hierarchy using LVM.
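The LVM approach could be set up along these lines (volume group and logical volume names are examples only, not part of the kit's requirements):

```shell
# Layer LVM on top of the virtual device instead of partitioning it.
pvcreate /dev/md5                  # make the md device an LVM physical volume
vgcreate vg_md /dev/md5            # create a volume group on it
lvcreate -L 10G -n lv_data vg_md   # carve out a logical volume
mkfs.ext4 /dev/vg_md/lv_data       # create a file system on the logical volume
```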
In this version of the Software RAID Recovery Kit, the parameter “--run” has been removed from the mdadm command used to assemble (start) the mirror. This parameter is needed in some error situations where mdadm is not sure about the state of the components. Because of this uncertainty, the data could become corrupted, so by default this parameter is no longer used. Where previously a forced in-service of the mirror would be attempted, an error similar to the following will now be displayed:
Tue Apr 27 11:46:02 EDT 2010 restore: BEGIN restore of “md23051” on server “shrek.sc.steeleye.com”
Tue Apr 27 11:46:06 EDT 2010 restore: start: mdadm: failed to add /dev/sdc1 to /dev/md1: Invalid argument
mdadm: /dev/md1 assembled from 0 drives - not enough to start the array
Although not recommended, this parameter can be used by adding it to the LifeKeeper defaults file: MD_ASSEMBLE_OPTIONS=--run (it will then be used for every assemble). Instead, it is recommended that the logs in the cluster be reviewed to determine which component/leg has the best data, and that the mirror then be assembled manually using mdadm.
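The manual recovery could proceed roughly as follows (component and device names are illustrative; the component to assemble from must be chosen based on your own logs and examination output):

```shell
# Examine each component to decide which leg has the most current data;
# compare the event counts and update times in the output.
mdadm --examine /dev/sdc1
mdadm --examine /dev/sdd1

# Then assemble the mirror manually from the component(s) judged
# to have the best data.
mdadm --assemble /dev/md1 /dev/sdc1
```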
Note: On some systems (for example, those running RHEL 6 or RHEL 7), there is an AUTO entry in the configuration file (/etc/mdadm.conf) that will automatically start mirrors during boot (example: AUTO +imsm +1.x -all). Since LifeKeeper requires that mirrors not be automatically started, this entry will need to be edited to ensure that LifeKeeper mirrors are not automatically started during boot. The previous example (AUTO +imsm +1.x -all) tells the system to automatically start mirrors created with imsm metadata and 1.x metadata, minus all others. This entry should be changed to “AUTO -all”, which tells the system to automatically start everything “minus” all; therefore, nothing will be automatically started.
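The resulting configuration change is a one-line edit to /etc/mdadm.conf, sketched below:

```shell
# /etc/mdadm.conf
#
# Before (auto-starts arrays with imsm or 1.x metadata at boot):
#   AUTO +imsm +1.x -all
#
# After (disables automatic assembly of all arrays at boot,
# as required by LifeKeeper):
AUTO -all
```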