The NFS server must be installed on both cluster nodes as a prerequisite, prior to the installation of SIOS.
Create the NFS exports based on the SAP requirements in your SAP design. The examples below may be used as a guide, but they are not a representation of your SAP environment.
LifeKeeper maintains NFS share information using inodes; therefore, every NFS share is required to have a unique inode. Since every file system root directory has the same inode, NFS shares must be at least one directory level down from root in order to be protected by LifeKeeper. For example, if the /usr/sap/trans directory is NFS shared on the SAP server, the /trans directory is created on the shared storage device, which would require mounting the shared storage device as /usr/sap. It is not necessarily desirable, however, to place all files under /usr/sap on shared storage, which this arrangement would require.
To circumvent this problem, it is recommended that you create an /exports directory tree for mounting all shared file systems containing directories that are NFS shared, and then create a soft link between the SAP directories and the /exports directories, or alternately, locally NFS mount the NFS shared directory. (Note: The name of the directory that we refer to as /exports can vary according to user preference; for simplicity, we will refer to this directory as /exports throughout this documentation.)
For example, the directories and links/mounts on the SAP Primary Server in our example would be:
/trans | created on shared file system and shared through NFS |
/exports/usr/sap | mounted to / (on shared file system) |
/usr/sap/trans | soft linked to /exports/usr/sap/trans |
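A minimal sketch of this arrangement, assuming a hypothetical shared device /dev/<shared-device> (your device names and paths will differ):
mkdir -p /exports/usr/sap                      # mount point for the shared file system
mount /dev/<shared-device> /exports/usr/sap   # shared file system that will contain /trans
mkdir -p /exports/usr/sap/trans                # NFS-shared directory, one level down from the fs root
ln -s /exports/usr/sap/trans /usr/sap/trans    # soft link from the SAP path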
The following directories and links for the <sapmnt>/<SAPSID> share would be:
/<SAPSID> | created on shared file systems and shared through NFS |
/exports/sapmnt | mounted to / (on shared file system) |
<sapmnt>/<SAPSID> | NFS mounted to <virtual SAP server>:/exports/sapmnt/<SAPSID> |
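Following the table above, the corresponding local NFS mount command would look like the sketch below (assuming <virtual SAP server> resolves to the LifeKeeper-protected virtual IP; the full commands used in this setup appear under Mount NFS and Move File Systems):
mount <virtual SAP server>:/exports/sapmnt/<SAPSID> <sapmnt>/<SAPSID> -o rw,sync,bg,udp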
Local NFS Mounts
The recommended directory structure for SAP in a LifeKeeper environment requires a locally mounted NFS share for one or more SAP system directories. If the NFS export point for any of the locally mounted NFS shares becomes unavailable, the system may hang while waiting for the export point to become available again. Many system operations will not work correctly, including a system reboot. You should be aware that the NFS server for the SAP cluster should be protected by LifeKeeper and should not be manually taken out of service while local mount points exist.
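For example, before any manual maintenance you can confirm that the protected export point is still being served (a quick check, assuming <nfsvip> is the LifeKeeper-protected virtual IP of the NFS server):
showmount -e <nfsvip>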
Location of <INST> Directories
Since the /usr/sap/<SAPSID> path is not NFS shared, it can be mounted to the root directory of the file system. The /usr/sap/<SAPSID> path contains the SYS subdirectory and an <INST> subdirectory for each SAP instance that can run on the server. For certain configurations, there may be only one <INST> directory, so it is acceptable for it to be located under /usr/sap/<SAPSID> on the shared file system. For other configurations, however, the backup server may also contain a local AS instance whose <INST> directory should not be on a shared file system, since that file system will not always be available. To solve this problem, it is recommended for these configurations that the PAS's /usr/sap/<SAPSID>/<INST>, the ASCS's /usr/sap/<SAPSID>/<ASCS-INST> or the SCS's /usr/sap/<SAPSID>/<SCS-INST> directory be mounted to the shared file system instead of /usr/sap/<SAPSID>, and that /usr/sap/<SAPSID>/SYS and the AS's /usr/sap/<SAPSID>/<AS-INST> be located on the local server.
For example, the following directories and mount points should be created for the ABAP+Java configuration:
/usr/sap/<SAPSID>/DVEBMGS<No.> | mounted to / (replicated non-NFS file system) |
/usr/sap/<SAPSID>/SCS<No.> | mounted to / (replicated non-NFS file system) |
/usr/sap/<SAPSID>/ERS<No.> (for SCS instance) | should be locally mounted on all cluster nodes or mounted from a NAS share (should not be mounted on shared storage) |
/usr/sap/<SAPSID>/ASCS<Instance No.> | mounted to / (replicated non-NFS file system) |
/usr/sap/<SAPSID>/ERS<No.> (for ASCS instance) | should be locally mounted on all cluster nodes or mounted from a NAS share (should not be mounted on shared storage) |
/usr/sap/<SAPSID>/AS<Instance No.> | created for AS backup server |
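A sketch of creating these mount points on each cluster node, using the placeholders from the table above:
mkdir -p /usr/sap/<SAPSID>/DVEBMGS<No.>
mkdir -p /usr/sap/<SAPSID>/SCS<No.>
mkdir -p /usr/sap/<SAPSID>/ASCS<Instance No.>
mkdir -p /usr/sap/<SAPSID>/ERS<No.>
mkdir -p /usr/sap/<SAPSID>/AS<Instance No.>   # AS backup server only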
Mount NFS and Move File Systems
After the mount points have been created for the main SAP file systems, mount them accordingly (required). Stop all SAP services before proceeding with these steps.
mount /dev/sap/sapmnt /exports/sapmnt
mount /dev/sap/saptrans /exports/saptrans
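To confirm that both file systems mounted as expected, for example:
df -h /exports/sapmnt /exports/saptrans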
Move Data to NFS
- Edit /etc/exports and insert the mount points for SAP's main directories. Note that there must be no space between the client wildcard and the option list; with a space, * would be exported with default options.
/exports/sapmnt *(rw,sync,no_root_squash)
/exports/saptrans *(rw,sync,no_root_squash)
Example NFS Export
# more /etc/exports
/exports/sapmnt 10.2.0.69(rw,sync,all_squash,anonuid=0,anongid=1001)
/exports/sapmnt 10.2.0.11(rw,sync,all_squash,anonuid=0,anongid=1001)
/exports/usr/sap/<instance name>/ASCS01 10.2.0.69(rw,sync,all_squash,anonuid=0,anongid=1001)
/exports/usr/sap/<instance name>/ASCS01 10.2.0.11(rw,sync,all_squash,anonuid=0,anongid=1001)
# more /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Nov 9 20:20:10 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=367df610-4210-4a5a-8c8d-51ddf499fc17 / xfs defaults 0 0
/dev/xvdb swap swap defaults 0 0
/dev/xvdc /tmp xfs nodev,nosuid,noexec,relatime 0 0
/dev/xvdp1 /var xfs defaults 0 0
/dev/xvdp2 /var/log xfs defaults 0 0
/dev/xvdp3 /var/log/audit xfs defaults 0 0
/dev/xvdp4 /home xfs defaults,nodev 0 0
/tmp /var/tmp none bind,nodev,nosuid 0 0
/dev/xvdj /usr/sap xfs defaults 0 0
/dev/xvdg /exports/usr/sap/P4G/ASCS01 xfs defaults 0 0
/dev/xvdh /usr/sap/P4G/D00 xfs defaults 0 0
/dev/xvdi /sapcd xfs defaults 0 0
/dev/xvdk /exports/sapmnt xfs defaults 0 0
<nfsvip>:/exports/usr/sap/P4G/ASCS01 /usr/sap/P4G/ASCS01 nfs nfsvers=3,proto=udp,rw,sync,bg 0 0
<nfsvip>:/exports/sapmnt /sapmnt nfs nfsvers=3,proto=udp,rw,sync,bg 0 0
<nfsvip>:/exports/usr/sap/P4G/ERS<No.> /usr/sap/P4G/ERS<No.> nfs nfsvers=3,proto=udp,rw,sync,bg 0 0 (Note: This ERS entry will only be present if using an ERSv2 configuration with a shared ERS file system.)
- Start the NFS server using the systemctl start nfs-server.service command. If the NFS server is already active, you may need to run exportfs -va to export those mount points.
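For example, the full sequence:
systemctl start nfs-server.service
exportfs -va   # (re)export all entries in /etc/exports
exportfs -v    # verify the active export list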
- On both node 1 and node 2, execute the following mount commands, ensuring you are able to mount the NFS shares (note the use of udp; this is important for failover and recovery).
mount {virtual ip}:/exports/sapmnt/<SAPSID> /sapmnt/<SAPSID> -o rw,sync,bg,udp
mount {virtual ip}:/exports/saptrans /usr/sap/trans -o rw,sync,bg,udp
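You can then list the active NFS mounts to confirm the mount options:
mount -t nfs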
- From node 1, copy the necessary files from /usr/sap and /sapmnt, and any other required files, into the NFS mount points mounted from the NFS server onto node 1, as in the sketch below.
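A sketch of the copy, assuming the original contents were preserved under hypothetical staging paths (here /sapmnt.orig and /usr/sap/trans.orig) before the NFS shares were mounted over the live directories:
cp -a /sapmnt.orig/<SAPSID>/. /sapmnt/<SAPSID>/   # hypothetical staging path
cp -a /usr/sap/trans.orig/. /usr/sap/trans/       # hypothetical staging path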
- Log in and start SAP (after su to the SAP administration user, stcadm in this example).
startsap sap{No.}
- Make sure all processes have started.
ps -ef | grep en.sap (2 processes)
ps -ef | grep ms.sap (2 processes)
ps -ef | grep dw.sap (17 processes)
"SAP Logon" or "SAP GUI for Windows" is an SAP-supplied Windows client. The program can be downloaded from the SAP download site. The virtual IP address may be used as the "Application Server" on the Properties page. This ensures that the connection goes to the primary machine where the virtual IP resides.
- If not already done, create the Data Replication cluster resource on the NFS share mount points to replicate the data from node1 to node2.
Reduce Switchover Times using SAP and NFSv4/TCP
In an SAP/NFS environment, the following changes can be made to reduce switchover times with SAP and NFSv4/TCP:
Take all LifeKeeper NFS resources out of service, then execute:
# systemctl stop nfs
# echo 10 > /proc/fs/nfsd/nfsv4gracetime
# echo 10 > /proc/fs/nfsd/nfsv4leasetime
# systemctl start nfs
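Confirm the new values before bringing the LifeKeeper NFS resources back in service:
# cat /proc/fs/nfsd/nfsv4gracetime /proc/fs/nfsd/nfsv4leasetime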
To set the grace and lease times persistently, for example:
On RHEL 8, execute the following:
# nfsconf --set nfsd grace-time 10
# nfsconf --set nfsd lease-time 10
# systemctl restart nfs-server
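These nfsconf commands persist the values in /etc/nfs.conf; the resulting section should look like this:
[nfsd]
grace-time=10
lease-time=10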