In a highly-available SAP environment, some important file systems are shared among, and mounted on, each server that hosts a LifeKeeper-protected SAP instance. Examples include the /sapmnt (or /sapmnt/&lt;SID&gt;) and /usr/sap/trans file systems. Typically these shared file systems are mounted at system boot by adding a mount entry for each file system to the /etc/fstab file. For example:

sapnfs:/export/sapmnt/SID /sapmnt/SID nfs rw,sync,bg 0 0

However, for compliance reasons, some administrators may be unable to add mount entries directly to the /etc/fstab file on their SAP cluster servers.

In this case an administrator may instead add fstab-style mount entries to the LifeKeeper “critical NFS mounts” file for each SAP resource, which is located on each server at /opt/LifeKeeper/subsys/appsuite/resources/sap/critical_nfs_mounts_<Tag>. Before attempting any administrative actions for an SAP resource, LifeKeeper will verify that each file system present in the critical_nfs_mounts file for that resource is mounted, and will attempt to mount any listed file system which is not currently mounted.
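As a sketch of this workflow, the commands below append an fstab-style entry to a critical NFS mounts file and confirm it was recorded. The entry matches the example above; for illustration the file is created under /tmp rather than under /opt/LifeKeeper/subsys/appsuite/resources/sap/, and the file name uses the resource tag from the example that follows.

```shell
# Illustrative only: a real cluster would use the file under
# /opt/LifeKeeper/subsys/appsuite/resources/sap/ on each server.
MOUNTS_FILE=/tmp/critical_nfs_mounts_SAP-SPS_ASCS10

# One fstab-style line per file system:
# <spec> <mount point> <type> <options> <dump> <pass>
echo 'sapnfs:/export/sapmnt/SPS /sapmnt/SPS nfs rw,sync,bg 0 0' >> "$MOUNTS_FILE"

# Show the effective (non-comment) entries
grep -v '^#' "$MOUNTS_FILE"
```

Remember that the same entry must be added to the file for each SAP resource that depends on the file system, on every cluster server.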

Example

For this example we will assume that we have a two-node cluster (with hostnames node-a and node-b) with protected SAP instances ASCS10 (protected by LifeKeeper resource SAP-SPS_ASCS10) and ERS20 (protected by LifeKeeper resource SAP-SPS_ERS20) installed under SAP system ID ‘SPS’. The SAP Mount file system for the SAP installation with SID ‘SPS’ is being shared by a highly-available NFS server cluster using virtual hostname ‘sapnfs’ and export point sapnfs:/export/sapmnt/SPS. This file system must be mounted at /sapmnt/SPS on each server before either protected instance (ASCS10 or ERS20) can be started there.

If the /etc/fstab file cannot be modified to mount this file system at boot, then the following entries can be added to the LifeKeeper critical_nfs_mounts files for both SAP resources on both servers.

/opt/LifeKeeper/subsys/appsuite/resources/sap/critical_nfs_mounts_SAP-SPS_ASCS10

# critical_nfs_mounts_SAP-SPS_ASCS10
# NFS shared file system mounts added to this file will be
# automatically mounted by the SIOS LifeKeeper SAP Recovery Kit
# before performing any SAP administrative actions for this resource.
#
# Duplicate entries in /etc/fstab with the same mount point take
# precedence over this file.
sapnfs:/export/sapmnt/SPS /sapmnt/SPS nfs rw,sync,bg 0 0

/opt/LifeKeeper/subsys/appsuite/resources/sap/critical_nfs_mounts_SAP-SPS_ERS20

# critical_nfs_mounts_SAP-SPS_ERS20
# NFS shared file system mounts added to this file will be
# automatically mounted by the SIOS LifeKeeper SAP Recovery Kit
# before performing any SAP administrative actions for this resource.
#
# Duplicate entries in /etc/fstab with the same mount point take
# precedence over this file.
sapnfs:/export/sapmnt/SPS /sapmnt/SPS nfs rw,sync,bg 0 0

Now suppose that we wish to bring the SAP-SPS_ERS20 resource in-service on node-b, but the /sapmnt/SPS file system is not currently mounted there. As long as the mount entry shown above exists in the critical_nfs_mounts_SAP-SPS_ERS20 file on node-b, LifeKeeper will automatically attempt to mount the file system while bringing the SAP-SPS_ERS20 resource in-service. When it performs the mount operation, a message similar to the following is logged:

Jan 1 00:00:00 node-b restore[16995]: INFO:sap:restore:SAP-SPS_ERS20:112184:Mounting NFS shared file system 'sapnfs:/export/sapmnt/SPS' at mount point '/sapmnt/SPS' with mount options 'rw,sync,bg' on server node-b.

As long as the NFS server is available and the shared file system is properly exported, the file system should mount successfully, allowing the SAP-SPS_ERS20 resource to come in-service on node-b.
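A manual pre-flight check on node-b can mimic the verification step described above. The sketch below is illustrative and is not LifeKeeper's internal logic: the `is_nfs_mounted` helper is a hypothetical name, and it simply checks /proc/mounts for an NFS mount at the given mount point.

```shell
# Hypothetical helper: true if the given mount point appears in
# /proc/mounts with an nfs or nfs4 file system type.
is_nfs_mounted() {
    grep -qs " $1 nfs" /proc/mounts
}

if is_nfs_mounted /sapmnt/SPS; then
    echo "/sapmnt/SPS is already mounted"
else
    echo "/sapmnt/SPS is not mounted; the mount LifeKeeper would attempt is:"
    echo "  mount -t nfs -o rw,sync,bg sapnfs:/export/sapmnt/SPS /sapmnt/SPS"
fi
```

If the mount command fails when run manually, check that the virtual hostname 'sapnfs' resolves from node-b and that the export is visible (for example with `showmount -e sapnfs`).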
