In this section we will deploy a highly available SAP HANA 2.0 SPS04 cluster. Our configuration will use SAP HANA System Replication to replicate data from the active database host to the standby database host and will use the LifeKeeper SAP HANA Recovery Kit to handle monitoring, recovery, and failover of the HDB database instance.

The details of the SAP HANA cluster nodes that will be created are given below. The instance provisioning described in this guide meets the minimum specifications required to run the applications in an evaluation context and is generally not sufficient for production workloads. Consult the documentation provided by SAP and your cloud provider for best practices when provisioning VMs in a production environment. For example, see Hardware Requirements for SAP System Hosts.

  • node-a (Primary database host)
    • Private IP: 10.20.1.10
    • vCPUs: 8
    • Memory: 32GB
    • SAP HANA 2.0 SPS04 database instance HDB00 installed under SID ‘SPS’
    • SAP HANA System Replication parameters (a command-line sketch follows this list):
      • Site name: SiteA
      • Replication mode: primary
      • Operation mode: primary
    • SIOS Protection Suite for Linux installed with quorum/witness functionality enabled, SAP HANA Recovery Kit installed
    • [GCP] GenLB Recovery Kit installed
  • node-b (Standby database host)
    • Private IP: 10.20.2.10
    • vCPUs: 8
    • Memory: 32GB
    • SAP HANA 2.0 SPS04 database instance HDB00 installed under SID ‘SPS’
    • SAP HANA System Replication parameters:
      • Site name: SiteB
      • Replication mode: sync
      • Operation mode: logreplay
    • SIOS Protection Suite for Linux installed with quorum/witness functionality enabled, SAP HANA Recovery Kit installed
    • [GCP] GenLB Recovery Kit installed
  • node-c (Quorum/Witness node)
    • Private IP: 10.20.3.10
    • vCPU’s: 1
    • Memory: 2GB
    • SIOS Protection Suite for Linux installed with quorum/witness functionality enabled
  • SAP HANA virtual hostname: sps-hana
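
For reference, the System Replication parameters above could be set up with commands along the following lines. This is a minimal sketch, assuming SAP HANA has been installed on both hosts, that a data backup has already been taken on node-a (a prerequisite for enabling replication), and that the commands are run as the spsadm OS user (the <sid>adm user for SID SPS). Refer to the SAP HANA administration documentation for the authoritative procedure.

    # On node-a: enable system replication and name the site
    hdbnsutil -sr_enable --name=SiteA

    # On node-b: stop the instance, register it against node-a with the
    # replication and operation modes listed above, then start it again
    HDB stop
    hdbnsutil -sr_register --remoteHost=node-a --remoteInstance=00 \
        --replicationMode=sync --operationMode=logreplay --name=SiteB
    HDB start

    # On node-a: verify that node-b is registered and replication is active
    hdbnsutil -sr_state

Once hdbnsutil -sr_state reports node-b as a connected site, the replication configuration matches the parameters listed above.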

This section assumes that the following prerequisite steps have already been completed:

  1. Three VM instances (node-a, node-b, and node-c) have been created using the networking conventions described earlier in the guide. Firewall rules are in place to allow inter-node communication as well as SSH connections. See Configuring Network Components and Creating Instances for details.
  2. All three VM instances have been configured to run SIOS Protection Suite for Linux. In particular, SELinux is disabled, the firewall is disabled, the /etc/hosts file on each node contains entries to resolve each node’s hostname to its private IP (a sample is shown after this list), and the root user has the same password on each node. See Configure Linux Nodes to Run SIOS Protection Suite for Linux for details.
  3. SIOS Protection Suite for Linux has been installed on all three nodes with quorum/witness functionality enabled. The SAP Recovery Kit has been installed on node-a and node-b. On node-c (the witness node), no additional recovery kits beyond the core LifeKeeper installation are required. All necessary SIOS licenses have been installed on each node. See Install SIOS Protection Suite for Linux for details.
  4. LifeKeeper communication paths have been defined between all pairs of cluster nodes. Note that this requires creating three bi-directional communication paths (node-a ↔ node-b, node-a ↔ node-c, node-b ↔ node-c). See Login and Basic Configuration Tasks for details.
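
As an illustration of the hostname resolution described in step 2, the /etc/hosts file on each node would contain entries along these lines, using the private IPs listed earlier. The mapping for the sps-hana virtual hostname depends on the virtual or frontend IP used in your environment, so it is only sketched here.

    10.20.1.10    node-a
    10.20.2.10    node-b
    10.20.3.10    node-c
    # sps-hana resolves to the cluster's virtual/frontend IP address,
    # which is environment-specific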

After completing all of these tasks, the LifeKeeper GUI should resemble the following image.
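
Communication path status can also be verified from the command line on any node. A minimal check, assuming LifeKeeper is installed under the default /opt/LifeKeeper path, is:

    # Display resource hierarchy and communication path status
    /opt/LifeKeeper/bin/lcdstatus -q

Each of the other two cluster nodes should appear with its communication paths in the ALIVE state before you proceed.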
