In this section we will deploy a highly-available SAP HANA 2.0 SPS04 cluster. Our configuration will use SAP HANA System Replication to replicate data from the active database host to the standby database host and will use the LifeKeeper SAP HANA Recovery Kit to handle monitoring, recovery, and failover of the HDB database instance.

The details of the SAP HANA cluster nodes that will be created are given below. The instance provisioning described in this guide gives the minimum required specifications to run the applications in an evaluation context, and is not generally sufficient to handle productive workloads. Consult the documentation provided by SAP and your cloud provider for best practices when provisioning VMs in a production environment. For example, see Hardware Requirements for SAP System Hosts.

The operating system and configuration used in the deployment must be supported by SAP, SIOS, and your cloud provider. Consult SAP’s Product Availability Matrix (PAM) to determine which operating systems are supported for various versions of SAP HANA.

Cluster Architecture for Evaluation Deployment

  • node-a (Primary database host)
    • Private IP: 10.20.1.10
    • vCPUs: 8
    • Memory: 32 GB
    • SAP HANA 2.0 SPS04 database instance HDB00 installed under SID ‘SPS’
    • SAP HANA System Replication parameters:
      • Site name: SiteA
      • Replication mode: primary
      • Operation mode: primary
    • LifeKeeper for Linux installed with quorum/witness functionality enabled and the following recovery kits installed:
      • SAP HANA Recovery Kit
      • [AWS] Amazon EC2 Recovery Kit
      • [Azure, Google Cloud] Generic Application Recovery Kit for Load Balancer Health Checks (GenLB Recovery Kit)
  • node-b (Standby database host)
    • Private IP: 10.20.2.10
    • vCPUs: 8
    • Memory: 32 GB
    • SAP HANA 2.0 SPS04 database instance HDB00 installed under SID ‘SPS’
    • SAP HANA System Replication parameters:
      • Site name: SiteB
      • Replication mode: sync
      • Operation mode: logreplay
    • LifeKeeper for Linux installed with quorum/witness functionality enabled and the following recovery kits installed:
      • SAP HANA Recovery Kit
      • [AWS] Amazon EC2 Recovery Kit
      • [Azure, Google Cloud] Generic Application Recovery Kit for Load Balancer Health Checks (GenLB Recovery Kit)
  • node-c (Quorum/Witness node)
    • Private IP: 10.20.3.10
    • vCPUs: 1
    • Memory: 2 GB
    • LifeKeeper for Linux installed with quorum/witness functionality enabled
  • SAP HANA virtual hostname: sps-hana
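The System Replication parameters listed above are configured with SAP's hdbnsutil tool after both database instances are installed. The commands below are a sketch based on the values in this architecture (SID SPS, instance number 00, hostnames node-a/node-b); run them as the spsadm OS user on the node indicated in each comment.

```shell
# On node-a (primary), as spsadm: enable System Replication for site SiteA
hdbnsutil -sr_enable --name=SiteA

# On node-b (standby), as spsadm: stop the instance, then register it
# against the primary with the replication/operation modes listed above
HDB stop
hdbnsutil -sr_register --remoteHost=node-a --remoteInstance=00 \
    --replicationMode=sync --operationMode=logreplay --name=SiteB
HDB start

# On either node: display the current replication state and site roles
hdbnsutil -sr_state
```

On the primary, `hdbnsutil -sr_state` should report mode "primary" for SiteA; on the standby it should report mode "sync" with SiteB registered against node-a.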

Cloud-Specific Architecture Diagrams

SIOS Protected SAP HANA Cluster on Amazon Web Services (AWS)

SIOS Protected SAP HANA Cluster on Microsoft Azure

SIOS Protected SAP HANA Cluster on Google Cloud

Prerequisites

In order to follow this guide, the following prerequisite steps must already be complete:

  1. Three VM instances (node-a, node-b, and node-c) have been created using the networking conventions described earlier in the guide. Firewall rules are in place to allow inter-node communication as well as SSH connections. See Configuring Network Components and Creating Instances for details.
  2. All three VM instances have been configured to run LifeKeeper for Linux. In particular, SELinux is disabled, the firewall is disabled, the /etc/hosts file on each node contains entries to resolve each node’s hostname to its private IP, and the root user has the same password on each node. See Configure Linux Nodes to Run LifeKeeper for Linux for details.
  3. LifeKeeper for Linux has been installed on all three nodes with quorum/witness functionality enabled and all required recovery kits installed. On node-c (the witness node), no additional recovery kits beyond the core LifeKeeper installation are required. All necessary SIOS licenses have been installed on each node. See Install LifeKeeper for Linux for details.
  4. LifeKeeper communication paths have been defined between all pairs of cluster nodes. Note that this requires creation of three bi-directional communication paths (node-a ↔ node-b, node-a ↔ node-c, node-b ↔ node-c). See Login and Basic Configuration Tasks for details.
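The hostname resolution and communication path prerequisites can be sanity-checked from any cluster node. The snippet below is a sketch using the private IPs from the evaluation architecture; lcdstatus is LifeKeeper's command-line status utility, installed under /opt/LifeKeeper/bin.

```shell
# Each node's /etc/hosts should contain entries matching the
# architecture above (IPs shown are from this evaluation layout):
grep -E 'node-(a|b|c)' /etc/hosts
# 10.20.1.10  node-a
# 10.20.2.10  node-b
# 10.20.3.10  node-c

# Confirm each hostname resolves and the node is reachable
for h in node-a node-b node-c; do ping -c 1 -W 2 "$h"; done

# As root, list LifeKeeper resources and communication paths;
# all defined comm paths should report a state of ALIVE
/opt/LifeKeeper/bin/lcdstatus -q
```

If any communication path reports DEAD, revisit the firewall and /etc/hosts configuration from the prerequisite steps before continuing.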

After completing all of these tasks, the LifeKeeper GUI should resemble the following image.
