In this section we will deploy an SAP S/4HANA 1909 AS ABAP environment and use the LifeKeeper SAP Recovery Kit to protect the ABAP SAP Central Services (ASCS) and corresponding Enqueue Replication Server (ERS) instances. While it is possible to protect a redundant Primary Application Server (PAS) instance with LifeKeeper, we will instead create an external application server pool consisting of one PAS instance and one Additional Application Server (AAS) instance spread across two availability zones.
Since the PAS and AAS instances will not be protected by LifeKeeper, this guide details only the installation and configuration of the highly available Central Services (ASCS/ERS) cluster. This section also does not describe the configuration of a highly available database cluster (e.g., using SAP HANA). Please see the section of the guide relevant to the database being installed and adapt it to fit your planned SAP landscape.
The details of the Central Services cluster nodes that will be created are given below. The instance sizing described in this guide gives the minimum specifications required to run the applications in an evaluation context and is generally not sufficient to handle production workloads. Consult the documentation provided by SAP and your cloud provider for best practices when provisioning VMs in a production environment. For example, see Hardware Requirements for SAP System Hosts.
The operating system and configuration used in the deployment must be supported by SAP, SIOS, and your cloud provider. Consult SAP’s Product Availability Matrix (PAM) to determine which operating systems are supported for the various versions of SAP NetWeaver and S/4HANA.
Cluster Architecture for Evaluation Deployment
- node-a (Primary ASCS instance host, Standby ERS instance host, Primary NFS server)
  - Private IP: 10.20.1.10
  - vCPUs: 2
  - Memory: 4 GB
  - SAP NFS shares mounted
  - SAP instances ASCS10 and ERS20 installed under SID ‘SPS’
  - LifeKeeper for Linux installed with quorum/witness functionality enabled and the following recovery kits installed:
    - SAP Recovery Kit
    - SIOS DataKeeper
    - NFS Recovery Kit
    - [AWS] EC2 Recovery Kit
    - [Azure, Google Cloud] LB Health Check Kit
- node-b (Primary ERS instance host, Standby ASCS instance host, Standby NFS server)
  - Private IP: 10.20.2.10
  - vCPUs: 2
  - Memory: 4 GB
  - SAP NFS shares mounted
  - SAP instances ASCS10 and ERS20 installed under SID ‘SPS’
  - LifeKeeper for Linux installed with quorum/witness functionality enabled and the following recovery kits installed:
    - SAP Recovery Kit
    - SIOS DataKeeper
    - NFS Recovery Kit
    - [AWS] EC2 Recovery Kit
    - [Azure, Google Cloud] LB Health Check Kit
- node-c (Quorum/Witness node)
  - Private IP: 10.20.3.10
  - vCPUs: 1
  - Memory: 2 GB
  - LifeKeeper for Linux installed with quorum/witness functionality enabled
- node-d (Primary Application Server host)
  - Private IP: 10.20.1.5
  - vCPUs: 2
  - Memory: 4 GB
  - SAP NFS shares mounted
  - SAP PAS instance D01 installed under SID ‘SPS’
- node-e (Additional Application Server host)
  - Private IP: 10.20.2.5
  - vCPUs: 2
  - Memory: 4 GB
  - SAP NFS shares mounted
  - SAP AAS instance D02 installed under SID ‘SPS’
- ASCS10 virtual hostname: sps-ascs
- ERS20 virtual hostname: sps-ers
- NFS shared file systems:
  - /sapmnt/SPS
  - /usr/sap/trans
- SIOS DataKeeper replicated file systems:
  - /usr/sap/SPS/ASCS10
  - /usr/sap/SPS/ERS20
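For reference, the name resolution implied by the addressing above can be captured in each node’s /etc/hosts file. The sketch below is illustrative only: the node hostnames follow the naming used in this guide, and the IP addresses shown for the sps-ascs and sps-ers virtual hostnames are placeholders that must be replaced with the virtual IPs reserved for the ASCS and ERS instances in your environment.

```
# Cluster and application server nodes (private IPs from the architecture above)
10.20.1.10   node-a
10.20.2.10   node-b
10.20.3.10   node-c
10.20.1.5    node-d
10.20.2.5    node-e

# Virtual hostnames for the protected SAP instances.
# The addresses below are placeholders -- substitute the virtual IPs
# reserved for ASCS10 and ERS20 in your deployment.
10.20.0.100  sps-ascs
10.20.0.101  sps-ers
```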
Cloud-Specific Architecture Diagrams
SIOS Protected ASCS/ERS Cluster on Amazon Web Services (AWS)
SIOS Protected ASCS/ERS Cluster on Microsoft Azure
SIOS Protected ASCS/ERS Cluster on Google Cloud
Prerequisites
To follow this guide, the following prerequisite steps must already be complete:
- Three VM instances (node-a, node-b, and node-c) have been created using the networking conventions described earlier in the guide. Firewall rules are in place to allow inter-node communication as well as SSH connections. See Configuring Network Components and Creating Instances for details.
- All three VM instances have been configured to run LifeKeeper for Linux. In particular, SELinux is disabled, the firewall is disabled, the /etc/hosts file on each node contains entries to resolve each node’s hostname to its private IP, and the root user has the same password on each node. See Configure Linux Nodes to Run LifeKeeper for Linux for details.
- LifeKeeper for Linux has been installed on all three nodes with quorum/witness functionality enabled, and all additional required recovery kits have been installed on node-a and node-b. On node-c (the witness node), no additional recovery kits beyond the core LifeKeeper installation are required. All necessary SIOS licenses have been installed on each node. See Install LifeKeeper for Linux for details.
- LifeKeeper communication paths have been defined between all pairs of cluster nodes. Note that this requires creation of three bi-directional communication paths (node-a ↔ node-b, node-a ↔ node-c, node-b ↔ node-c). See Login and Basic Configuration Tasks for details.
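Before continuing, it can be useful to sanity-check the node preparation from a shell on each node. The commands below are a minimal, illustrative check using standard Linux utilities and assume the hostnames used in this guide; they are not a substitute for the configuration sections referenced above.

```
# SELinux should be disabled (the command reports Disabled when it is)
getenforce

# The OS firewall should be inactive (on distributions that use firewalld)
systemctl is-active firewalld

# Each node's hostname should resolve to its private IP via /etc/hosts
getent hosts node-a node-b node-c
```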
After completing all of these tasks, the LifeKeeper GUI should resemble the following image.