When upgrading the OS, make sure the currently installed version of LifeKeeper supports the upgraded version of the OS. If it is not supported, LifeKeeper will need to be upgraded as well, provided a version of LifeKeeper that supports the new OS version has been released (see Upgrading LifeKeeper). If no such version of LifeKeeper has been released, you may not be able to upgrade the OS. Refer to the Supported Operating Systems.
Before upgrading the OS, it is recommended that the LifeKeeper configuration be backed up via the lkbackup command.
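For example, a configuration backup can be created with `lkbackup` before starting the upgrade. This is a sketch only; the `-c` (create) flag is the commonly documented form, but option syntax may vary by LifeKeeper version, so check the `lkbackup` usage on your system first.

```shell
# Back up the LifeKeeper configuration before touching the OS.
# -c creates a configuration archive; verify exact options for your
# LifeKeeper version before running.
/opt/LifeKeeper/bin/lkbackup -c
```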
- When upgrading the cluster, all the resource hierarchies, and thus the applications they protect, must be switched from the server to be upgraded to a standby node in the cluster. This can be done manually or by setting the LifeKeeper Shutdown Strategy to “Switchover”. With the Shutdown Strategy set to “Switchover”, the resource hierarchies are switched over to a standby node when LifeKeeper stops or the server is shut down.
- Stop LifeKeeper.
- Upgrade the OS / Kernel (a message will indicate whether a reboot is necessary).
- Note: After an upgrade, see the scenarios below to determine when a reboot is required.
- Upgrade LifeKeeper if required to support the new OS / Kernel. Even if you do not upgrade LifeKeeper, you must still run the LifeKeeper setup again to update the settings corresponding to the new OS / Kernel.
- Start up LifeKeeper.
- Switch all the resource hierarchies to the upgraded server.
- Execute these steps on all the nodes in the LifeKeeper cluster. For clusters containing a dedicated Witness/Quorum node (a node with no resource instances), no switching of applications is required prior to updating LifeKeeper.
- Restart the LifeKeeper GUI (via /opt/LifeKeeper/bin/lkGUIapp) if it was open during the upgrade.
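On a single node, the steps above might look like the following sketch. It assumes LifeKeeper's standard command locations under `/opt/LifeKeeper/bin` and a dnf-based distribution; the resource tag and the setup-media path are hypothetical, so substitute the values for your environment and verify each command name against your LifeKeeper version.

```shell
# Sketch of the per-node upgrade sequence (hypothetical tag/paths).

# 1. Stop LifeKeeper. With the Shutdown Strategy set to "Switchover",
#    protected resource hierarchies move to a standby node at this point.
/opt/LifeKeeper/bin/lkstop

# 2. Upgrade the OS / kernel, then reboot if the upgrade indicates one
#    is necessary.
dnf -y upgrade
# reboot

# 3. Re-run LifeKeeper setup from the install media so settings match the
#    new OS / kernel (required even if LifeKeeper itself is not upgraded).
# /path/to/install-media/setup

# 4. Start LifeKeeper again.
/opt/LifeKeeper/bin/lkstart

# 5. Switch resource hierarchies back to this node, via the GUI or per
#    resource from the command line (tag "app-hier" is hypothetical):
# /opt/LifeKeeper/bin/perform_action -t app-hier -a restore
```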
Scenarios Requiring a Reboot After Installing/Upgrading
- The system must be rebooted if the LifeKeeper setup script is unable to reload any required kernel modules. In this case, the setup script will display a warning message resembling either of the following:
- “Unable to reload modules after adding LifeKeeper for Linux specific configuration information in /etc/modprobe.d. Please reboot your system after setup completes to ensure these modules load correctly and allow LifeKeeper for Linux to function properly.”
- “Updated some modules for DataKeeper. Reboot the system to use the new module.”
- The system must be rebooted after disabling secure boot. If secure boot is enabled when DataKeeper is installed, the setup script will display a warning message such as:
- “Secure Boot cannot be enabled in a DataKeeper environment. Please take one of the following actions: a) Disable Secure Boot [recommended] or b) Disable signature verification (mokutil --disable-validation)”.
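Before installing DataKeeper, you can confirm whether Secure Boot is enabled. The sketch below uses `mokutil --sb-state`, which is available on most EFI-booted distributions (an assumption; on legacy BIOS systems Secure Boot does not apply):

```shell
# Report the Secure Boot state before a DataKeeper install.
# mokutil is assumed to be installed; it is only meaningful on EFI systems.
if [ -d /sys/firmware/efi ]; then
    mokutil --sb-state
else
    echo "Legacy BIOS boot: Secure Boot not applicable"
fi
```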
New or Deprecated Mount Options After a Kernel Upgrade
When upgrading the Linux kernel, it is possible that some existing file system mount options may be deprecated in the new kernel, or that the new kernel may add new default mount options to existing mounts. For example, the “nobarrier” mount option was deprecated in Red Hat Enterprise Linux 8, and some kernel versions have added new default mount options such as “logbufs=8” and “logbsize=32k”.
If a LifeKeeper-protected file system resource contains mount options which become deprecated after a kernel upgrade, the deprecated options should be removed from the list of mount options for the LifeKeeper resource on every server in the cluster. See the Modifying Mount Options for a LifeKeeper File System Resource section for more details.
If new default mount options are added by the kernel to an existing LifeKeeper-protected mount point after a kernel upgrade, then the new options should be added to the list of mount options for the LifeKeeper resource on every server in the cluster. See the Modifying Mount Options for a LifeKeeper File System Resource section for more details.
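To spot options that the new kernel has added or deprecated, compare the options the kernel actually applied against those stored in the LifeKeeper resource. `findmnt` shows the live options; the mount point below uses `/` purely for illustration, so substitute the mount point of your LifeKeeper-protected file system:

```shell
# Show the mount options the running kernel applied to a file system.
# Replace "/" with the LifeKeeper-protected mount point you are checking.
findmnt -no OPTIONS /
```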