The directory structure for the database will be different for each database management system that is used with the SAP system. Please consult the SAP installation guide specific to the database management system for details on the directory structure for the database. All database files must be located on shared disks to be protected by the LifeKeeper Recovery Kit for the database. Consult the database specific Recovery Kit Documentation for additional information on protecting the database.
See the Directory Structure Diagram below for a graphical depiction of the SAP directories described in this section.
The following types of directories are created during installation:
Physically shared directories (reside on global host and shared by NFS):
/<sapmnt>/<SAPSID> – Software and data for one SAP system (should be mounted for all hosts belonging to the same SAP system)
/usr/sap/trans – Global transport directory (has to have an export point)
Logically shared directories that are bound to a node, such as /usr/sap (reside on the local host with symbolic links to the global host)
Local directories (reside on the local host) that contain the SAP instances, such as:
/usr/sap/<SAPSID>/DVEBMGS<No.> — Primary application server instance directory
/usr/sap/<SAPSID>/D<No.> — Additional application server instance directory
/usr/sap/<SAPSID>/ASCS<No.> — ABAP central services instance (ASCS) directory
/usr/sap/<SAPSID>/SCS<No.> — Java central services instance (SCS) directory
/usr/sap/<SAPSID>/ERS<No.> — Enqueue replication server instance (ERS) directory for the ASCS and SCS
The SAP directories /sapmnt/<SAPSID> and /usr/sap/trans are mounted from NFS; however, SAP instance directories (/usr/sap/<SAPSID>/<INSTTYPE><No.>) should always be mounted on the cluster node currently running the instance. Do not mount these directories with NFS. The required directory structure depends on the chosen configuration; the issues that dictate it are described below.
NFS Mount Points and Inodes
LifeKeeper maintains NFS share information using inodes; therefore, every NFS share must have a unique inode. Since every file system root directory has the same inode, NFS shares must be at least one directory level below the root of the file system in order to be protected by LifeKeeper. For example, referring to the information above, if the /usr/sap/trans directory is NFS shared on the SAP server, the /trans directory is created on the shared storage device, which would require mounting the shared storage device as /usr/sap. With that arrangement, however, all files under /usr/sap would have to reside on shared storage, which is not necessarily desirable.
To circumvent this problem, it is recommended that you create an /exports directory tree for mounting all shared file systems containing directories that are NFS shared, and then either create a soft link between the SAP directories and the /exports directories or locally NFS mount the NFS-shared directory. (Note: The name of the directory that we refer to as /exports can vary according to user preference; for simplicity, we will refer to it as /exports throughout this documentation.) For our example, the following directories and links/mounts would exist on the SAP Primary Server:
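The inode behavior behind this rule is easy to observe with stat. A minimal demonstration (the paths are illustrative; the inode-2 convention applies to ext-family file systems):

```shell
# On ext-family file systems the root directory of every file system has
# the same inode number (2), so two NFS shares that are both file system
# roots would look identical to inode-based bookkeeping.
stat -c 'inode of /      : %i' /

# A subdirectory receives an inode that is unique within its file system,
# which is why exporting a path at least one level below the root
# (for example /exports/usr/sap) avoids the conflict.
mkdir -p /tmp/inode-demo/exports
stat -c 'inode of subdir : %i' /tmp/inode-demo/exports
```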
/trans — created on shared file system and shared through NFS
/exports/usr/sap — mounted to / (on shared file system)
/usr/sap/trans — soft linked to /exports/usr/sap/trans
Likewise, the following directories and links for the <sapmnt>/<SAPSID> share would be:
/<SAPSID> — created on shared file system and shared through NFS
/exports/sapmnt — mounted to / (on shared file system)
<sapmnt>/<SAPSID> — NFS mounted to <virtual SAP server>:/exports/sapmnt/<SAPSID>
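The layout above can be sketched with ordinary shell commands. In this rehearsal a scratch directory stands in for / so the steps can be tried without root privileges, the SID C11 is hypothetical, and the actual mount commands (which require the shared device and NFS server) are shown as comments:

```shell
# Scratch root standing in for / in this rehearsal.
ROOT=$(mktemp -d)

# Shared file system for the transport directory. In production the shared
# device is mounted at /exports/usr/sap and contains /trans at its top level:
#   mount /dev/<shared-device> /exports/usr/sap
mkdir -p "$ROOT/exports/usr/sap/trans"

# SAP expects /usr/sap/trans, so soft link it into the /exports tree.
mkdir -p "$ROOT/usr/sap"
ln -s "$ROOT/exports/usr/sap/trans" "$ROOT/usr/sap/trans"

# Shared file system holding <sapmnt>/<SAPSID>, exported from /exports/sapmnt.
mkdir -p "$ROOT/exports/sapmnt/C11"

# /sapmnt/C11 is then locally NFS mounted from the virtual SAP server:
#   mount -t nfs <virtual SAP server>:/exports/sapmnt/C11 /sapmnt/C11
mkdir -p "$ROOT/sapmnt/C11"

ls -ld "$ROOT/usr/sap/trans"   # symlink into the /exports tree
```

The detailed, authoritative steps are in the configuration sections referenced below; this sketch only illustrates the shape of the tree.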
Detailed instructions are given for creating all directory structures and links in the configuration steps later in this documentation. See the NFS Server Recovery Kit Documentation for additional information on inode conflicts and for information on using the new features in NFSv4.
Local NFS Mounts
The recommended directory structure for SAP in a LifeKeeper environment requires a locally mounted NFS share for one or more SAP system directories. If the NFS export point for any of the locally mounted NFS shares becomes unavailable, the system may hang while waiting for the export point to become available again. Many system operations will not work correctly, including a system reboot. You should be aware that the NFS server for the SAP cluster should be protected by LifeKeeper and should not be manually taken out of service while local mount points exist.
To avoid accidentally causing your cluster to hang by inadvertently stopping the NFS server, please follow the recommendations listed in the NFS Considerations topic.
When NFS shares are not accessible, unmounting them can fail. LifeKeeper will attempt the unmount multiple times, and these retries will typically succeed in eventually taking the resource out of service, but they delay the operation. To avoid these retries, use the nfsvers=3,proto=udp mount options.
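For example, a hypothetical /etc/fstab entry carrying these options might look like the following (the server name sapnfs and the paths are placeholders):

```
sapnfs:/exports/sapmnt/C11  /sapmnt/C11  nfs  nfsvers=3,proto=udp  0 0
```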
NFS Mounts and su
LifeKeeper accomplishes many database and SAP tasks by executing database and SAP operations using the su – <admin name> -c <command> command syntax. The su command, when called in this way, causes the login scripts in the administrator’s home directory to be executed. These login scripts set environment variables to various SAP paths, some of which may reside on NFS mounted shares. If these NFS shares are not available for some reason, the su calls will hang, waiting for the NFS shares to become available again.
Since hung scripts can prevent LifeKeeper from functioning properly, it is desirable to configure your servers to account for this potential problem. The LifeKeeper scripts that handle SAP resource remove, restore and monitoring operations have a built-in timer that prevents these scripts from hanging indefinitely. No configuration actions are therefore required to handle NFS hangs for the SAP Application Recovery Kit.
Note that there are many manual operations that unavailable NFS shares will still affect. You should always ensure that all NFS shares are available prior to executing manual LifeKeeper operations.
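One way to make that check without risking a hang yourself is to bound the probe with timeout. The sketch below defines a helper (nfs_ok is our own name, not a LifeKeeper command) and checks example share paths for a hypothetical SID C11:

```shell
# nfs_ok: probe a path with a bounded stat so an unreachable NFS export
# fails fast instead of blocking the shell indefinitely.
nfs_ok() {
    timeout 5 stat "$1" >/dev/null 2>&1
}

# Verify the shares that login scripts run via 'su - <admin name>' would
# touch before starting any manual LifeKeeper operations.
for share in /sapmnt/C11 /usr/sap/trans; do
    if nfs_ok "$share"; then
        echo "$share is available"
    else
        echo "$share is unavailable -- postpone manual operations" >&2
    fi
done
```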
Location of <INST> directories
Since the /usr/sap/<SAPSID> path is not NFS shared, it can be mounted to the root directory of the file system. The /usr/sap/<SAPSID> path contains the SYS subdirectory and an <INST> subdirectory for each SAP instance that can run on the server. For certain configurations, there may be only one <INST> directory, so it is acceptable for it to be located under /usr/sap/<SAPSID> on the shared file system. For other configurations, however, the backup server may also contain a local AS instance whose <INST> directory should not be on a shared file system, since that file system will not always be available on the backup server. To solve this problem, it is recommended for such configurations that the PAS's, ASCS's or SCS's instance directory (/usr/sap/<SAPSID>/<INST>, /usr/sap/<SAPSID>/<ASCS-INST> or /usr/sap/<SAPSID>/<SCS-INST>) be mounted to the shared file system instead of /usr/sap/<SAPSID>, and that /usr/sap/<SAPSID>/SYS and the AS's /usr/sap/<SAPSID>/<AS-INST> be located on the local server.
For example, the following directories and mount points should be created for the ABAP+Java Configuration:
/usr/sap/<SAPSID>/DVEBMGS<Instance #> — mounted to / (on shared file system)
/usr/sap/<SAPSID>/SCS<Instance #> — mounted to / (on shared file system)
/usr/sap/<SAPSID>/ERS<Instance #> (for SCS instance) — should be locally mounted on all cluster nodes or mounted from a NAS share (should not be mounted on shared file system)
/usr/sap/<SAPSID>/ASCS<Instance #> — mounted to / (on shared file system)
/usr/sap/<SAPSID>/ERS<Instance #> (for ASCS instance) — should be locally mounted on all cluster nodes or mounted from a NAS share (should not be mounted on shared file system)
/usr/sap/<SAPSID>/AS<Instance #> — created for AS on backup server
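A quick way to sanity check such a layout on a node is to ask which instance directories are currently mount points (shared or NAS file systems) versus plain local directories. A sketch, using a hypothetical SID C11 and example instance numbers:

```shell
# mountpoint -q (util-linux) exits 0 only if the path is an active mount
# point; SID and instance numbers below are examples.
for d in /usr/sap/C11/DVEBMGS00 /usr/sap/C11/SCS01 \
         /usr/sap/C11/ASCS02 /usr/sap/C11/ERS03; do
    if mountpoint -q "$d" 2>/dev/null; then
        echo "$d: mount point (shared or NAS file system)"
    else
        echo "$d: local directory (or not present on this node)"
    fi
done
```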
Note: The Enqueue Replication Server (ERS) resource will be in-service (ISP) on the primary node in your cluster. However, the architecture and function of the ERS requires that the actual processes for the instance run on the backup node. This allows the standby server to hold a complete copy of the lock table information for the primary server and primary enqueue server instance. When the primary server running the enqueue server fails, it will be restarted by LifeKeeper on the backup server on which the ERS process is currently running. The lock table (replication table) stored on the ERS is transferred to the enqueue server process being recovered and the new lock table is created from it. Once this process is complete, the active replication server is then deactivated (it closes the connection to the enqueue server and deletes the replication table). LifeKeeper will then restart the ERS processes on the new current backup node (formerly the primary) which has been inactive until now. Once the ERS process becomes active, it connects to the enqueue server and creates a replication table. For more information on the ERS process and SAP architecture features, visit http://help.sap.com and search for Enqueue Replication Service.
Since the replication server is always active on the backup node, it cannot reside on a LifeKeeper protected file system as the file system would be active on the primary node while the replication server process would be active on the backup node. Therefore, the file systems that ERS uses should be locally mounted on all cluster nodes or mounted from a NAS share.
Directory Structure Diagram
The directory structure required for LifeKeeper protection of ABAP-only environments is shown graphically in the figure below. See the Abbreviations and Definitions section for a description of the abbreviations used in the figure.
Directory Structure Example
Directory Structure Options
The configuration steps presented in this documentation are based on the directory structure and diagrams described above. This is the recommended directory structure as tested and certified by SIOS Technology Corp.
There are other directory structure variations that should also work with the SAP Recovery Kit, although not all of them have been tested. For configurations with directory structure variations, you should follow the guidelines below.
- The /usr/sap/trans directory can be hosted on any server accessible on the network and does not have to be the PAS server. If you locate the /usr/sap/trans directory remotely from the PAS, you will need to decide whether access to this directory is mission critical. If it is, then you may want to protect it with LifeKeeper. This will require that it be hosted on a shared or replicated file system and protected by the NFS Server Recovery Kit. If you have other methods of making the /usr/sap/trans directory available to all of the SAP instances without NFS, this is also acceptable.
- The /usr/sap/trans directory does not have to be NFS shared regardless of whether it is located on the PAS server.
- The /usr/sap/trans directory does not have to be on a shared file system if it is not NFS shared or protected by LifeKeeper.
- The directory structure and path names used to export NFS file systems shown in the diagrams are examples only. The path /exports/usr/sap could also be /exports/sap or just /sap.
- The /usr/sap/<SAPSID>/<INST> path needs to be on a shared file system at some level (except in the case of an ERSv1 instance). It does not matter which part of this path is the mount point for the file system. It could be /usr, /usr/sap, /usr/sap/<SAPSID> or /usr/sap/<SAPSID>/<INST>.
- The /sapmnt/<SAPSID> path needs to be on a shared file system at some level. The configuration diagrams show this path as NFS mounted, although this is an SAP requirement and not a LifeKeeper requirement.