Overview
When configuring LifeKeeper on AWS, the Recovery Kit for EC2 or the Recovery Kit for Route 53 is typically used. Both kits use the AWS CLI to control AWS services; however, there are cases where they cannot be used:
- Security policies do not allow configuration changes to be made via the AWS CLI.
- You want clients to access the cluster by DNS name in an environment with no Internet connection (Route 53 does not yet support VPC endpoints (PrivateLink)).
In such cases, the combination of an AWS Network Load Balancer (NLB) and the LB Health Check Kit provides a switching mechanism similar to the one used on Azure and GCP. Clients access the NLB DNS name, and traffic is forwarded to the node that responds successfully to the NLB health check, which works together with the LB Health Check Kit.
AWS Load Balancer Specific Settings
Deploying a Network Load Balancer
The Network Load Balancer (NLB) forwards incoming traffic to the instances registered in its target group. An NLB is assigned one IP address per subnet, one in each selected Availability Zone (AZ), so redundancy can be ensured by selecting two Availability Zones. Because these IP addresses are tied to individual subnets, they should not be used for client access. Instead, clients access the NLB by its DNS name, which is resolved to one of the NLB's IP addresses; this allows clients to connect even when one of the Availability Zones is unavailable. By default, traffic that arrives at an NLB IP address is forwarded only to targets in the same subnet's Availability Zone. Enabling cross-zone load balancing allows it to be forwarded to targets in other subnets as well.
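For example, resolving the NLB DNS name returns one address per selected Availability Zone. The sketch below is only an illustration; the DNS name is a placeholder and must be replaced with the one shown in the AWS console.

```python
import socket

# Placeholder NLB DNS name; replace it with the DNS name shown in the AWS console.
nlb_dns_name = "XXXX-nlb1-YYYYY.elb.region.amazonaws.com"

# Resolve the NLB DNS name; each selected Availability Zone contributes one address.
addresses = sorted({info[4][0] for info in socket.getaddrinfo(nlb_dns_name, 1521, socket.AF_INET)})
print(addresses)  # e.g. ['10.0.1.151', '10.0.2.181'] for a two-AZ internal NLB
```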
Connecting from the client
- The client attempts to connect to the application (Listener in the figure above) using the NLB DNS name and the application's port number (XXXX-nlb1-YYYYY.elb.region.amazonaws.com and 1521 in the figure above). The DNS name is resolved by AWS-internal Route 53 to one of the NLB subnet IP addresses, which is then used to access the NLB (10.0.1.151 or 10.0.2.181 in the example above).
- The NLB has a target group registered for the specific protocol and port it should forward. The NLB checks on which node the health probe port (12345) responds, as illustrated in the sketch after this list.
- Only the active node responds to the health probe. With LifeKeeper, the LB Health Check (LBHC) resource is in service on only one node at a time, so the NLB health check always detects that only the active node is healthy. As a result, the NLB always routes requests to the active node (AWSNODE1 in the figure above).
- The NLB forwards connection requests from clients to the active node. The connection request reaches the active node with its destination address rewritten from the NLB address to the node's real address (10.0.1.10 in the figure above).
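The sketch below is not part of the product; it only illustrates what the NLB TCP health check effectively does: attempt a TCP connection to the probe port on each registered target and treat a successful connection as healthy. The node addresses follow the figure above, with 10.0.2.10 assumed for the standby node.

```python
import socket

# Example values; 10.0.1.10 is the active node from the figure above,
# 10.0.2.10 is an assumed address for the standby node.
nodes = {"AWSNODE1": "10.0.1.10", "AWSNODE2": "10.0.2.10"}
probe_port = 12345  # health probe port answered only on the active node

for name, address in nodes.items():
    try:
        # A TCP health check passes as soon as the connection is accepted.
        with socket.create_connection((address, probe_port), timeout=5):
            print(f"{name}: healthy (probe port open)")
    except OSError:
        print(f"{name}: unhealthy (probe port closed)")
```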
Requirements for using this configuration
To use this configuration, there are several requirements that must be met when preparing the environment. The following is a summary of the requirements for the AWS environment and the instances that will be created on it.
AWS Requirements
Create an underlying environment on AWS to provide the service. The requirements for using this configuration are as follows.
Amazon Virtual Private Cloud (VPC)
- The VPC where the cluster nodes will be located must be set up in AWS.
- The VPC where the clients are located can be in the same region as the VPC where the cluster nodes are located, or in a different region, as long as the cluster nodes are reachable by IP address.
- An application on the cluster nodes receives traffic on the node's real IP address, so the application must continue to work even though the real IP address changes when the active node switches. In other words, if a separate configuration file cannot be prepared for each cluster node, register a DNS name such as the following in the local hosts file (c:\Windows\System32\drivers\etc\hosts) of each cluster node, and use this DNS name (lkvip in this example) wherever a virtual address would normally be configured.
<Real IP> lkvip
- Use of this configuration together with the Recovery Kit for EC2 or the Recovery Kit for Route 53 is currently not supported; other Recovery Kits are supported.
AWS Load Balancer
- Use the Network Load Balancer (NLB) for the load balancer.
- If cluster nodes are located in different Availability Zones, cross-zone load balancing must be enabled in the load balancer attributes.
LifeKeeper Software Requirements
The same version of LifeKeeper software and patches must be installed on each server. The Application Recovery Kit (ARK) required for this configuration is as follows:
- LB Health Check Kit
Setup Procedure
This section describes a general procedure to setup the environment shown in the overview.
Creating a Network Load Balancer
Create a network load balancer and a target group according to the following tables (a configuration sketch using boto3 follows the tables).
| Network Load Balancer | |
| --- | --- |
| Load balancer name | Any |
| Scheme | Internal |
| IP address type | IPv4 |
| Network mapping | Select the subnets of the Availability Zones where the cluster nodes reside |
| Security group | Allow communication with the registered targets on both the listener port and the health check port |
| Listener | Select TCP as the protocol and the listener port (1521 for Oracle, 5432 for PostgreSQL, etc.), then select the target group to forward to |

| Target Group | |
| --- | --- |
| Target type | Instance |
| Protocol : Port | TCP, and the port to be forwarded (e.g. 1521 for Oracle, 5432 for PostgreSQL) |
| IP address type | IPv4 |
| Health check protocol | TCP |
| Health check details | Port: overwrite the default and specify the LBHC port. Healthy threshold: 2. Unhealthy threshold: 3. Timeout: 5 seconds. Interval: 10 seconds |
| Available instances | Select the cluster nodes to which traffic is forwarded |
| Ports for the selected instances | Specify the port to be forwarded (e.g. 1521 for Oracle, 5432 for PostgreSQL) |
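As a reference, the same settings can also be created with boto3 (the AWS SDK for Python) instead of the console. The sketch below only mirrors the values in the tables above; the VPC ID, subnet IDs, instance IDs, and resource names are placeholders, and the Oracle port 1521 is used as the example application port.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder IDs; replace with the VPC, subnets, and instances of your cluster nodes.
vpc_id = "vpc-0123456789abcdef0"
subnets = ["subnet-aaaa1111", "subnet-bbbb2222"]            # one subnet per Availability Zone
instances = ["i-0123456789abcdef0", "i-0fedcba9876543210"]  # cluster nodes

# Target group matching the table above: forward TCP 1521, TCP health check on the LBHC port.
tg = elbv2.create_target_group(
    Name="lk-oracle-tg",
    Protocol="TCP",
    Port=1521,
    VpcId=vpc_id,
    TargetType="instance",
    HealthCheckProtocol="TCP",
    HealthCheckPort="12345",     # LBHC Reply daemon port
    HealthyThresholdCount=2,
    UnhealthyThresholdCount=3,
    HealthCheckTimeoutSeconds=5,
    HealthCheckIntervalSeconds=10,
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": i, "Port": 1521} for i in instances],
)

# Internal NLB in the Availability Zones of the cluster nodes.
nlb = elbv2.create_load_balancer(
    Name="lk-nlb1",
    Type="network",
    Scheme="internal",
    IpAddressType="ipv4",
    Subnets=subnets,
)["LoadBalancers"][0]

# TCP listener on the application port, forwarding to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="TCP",
    Port=1521,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```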
Enabling cross-zone load balancing
Cross-zone load balancing is disabled by default when a load balancer is created, so enable it from “Edit load balancer attributes”.
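The same attribute can also be enabled with boto3 if you prefer not to use the console; the load balancer ARN below is a placeholder.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable cross-zone load balancing on the NLB (placeholder ARN).
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:region:account-id:loadbalancer/net/lk-nlb1/0123456789abcdef",
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)
```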
Tuning parameters configured in health check details
With the values used in the sample above, the NLB starts forwarding traffic to the new active node about 10 seconds ((healthy threshold - 1) × interval = (2 - 1) × 10) after the switchover completes. If you want forwarding to start a little earlier (e.g. after 5 seconds), set the interval to 5. Also, after a switchover starts, the NLB stops forwarding traffic to the node that went out of service after about 15 seconds (unhealthy threshold × timeout = 3 × 5). Adjust these values so that they fit within the time required for the switchover.
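For reference, the health check values can also be adjusted after the target group has been created, for example with boto3 as sketched below, assuming your environment allows editing the NLB health check settings; the target group ARN is a placeholder.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Shorten the interval so that the new active node starts receiving traffic sooner:
# (healthy threshold - 1) x interval = (2 - 1) x 5 = 5 seconds.
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:region:account-id:targetgroup/lk-oracle-tg/0123456789abcdef",
    HealthCheckIntervalSeconds=5,
    HealthCheckTimeoutSeconds=5,
    HealthyThresholdCount=2,
    UnhealthyThresholdCount=3,
)
```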
Preparing LifeKeeper
Create an environment that meets the “Requirements for using this configuration”. Install LifeKeeper on each instance and create a communication path between node1 and node2. If necessary, configure Quorum/Witness to prevent split brain.
Creating IP Resources
When using a resource that requires an IP resource, such as an Oracle resource, create an IP resource with a real IP address. Traffic is forwarded from the load balancer to the real IP address.
Creating resources for services to be protected
Create a resource for the service to be protected. If IP resources are required for resource creation, specify the resources created in “Creating IP Resources” above. Things to consider for using real IP addresses are described below.
When using Oracle resources
When Oracle Home is in a non-shared configuration, set the listener host on each node to that node's real IP address when setting up the listener during the Oracle installation.
When Oracle Home is in a shared configuration, the NLB DNS name that clients access cannot be used for the listener. However, you can configure the listener by registering a DNS name such as
<Real IP> oravip
in the local hosts file (c:\Windows\System32\drivers\etc\hosts) on each cluster node and configuring the listener with this DNS name (oravip in this example).
Refer to Installing Oracle
When using IIS resources
When configuring IIS, refer to IIS Configuration Considerations and configure it with the real IP address.
Creating an LB Health Check Resource
Create an LB Health Check resource, and set its Reply daemon port to the port specified in the Network Load Balancer health check details. Set resource dependencies so that the LB Health Check resource is the parent resource.
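In this configuration, the LB Health Check Kit's Reply daemon answers the NLB health check on the active node. The sketch below is not the kit's implementation and is not needed when the kit is used; it only illustrates what a responder has to do for a TCP health check, namely accept connections on the probe port.

```python
import socket

PROBE_PORT = 12345  # must match the health check port configured in the target group

# Listen on the probe port; an NLB TCP health check succeeds as soon as the
# connection is accepted, so no application payload needs to be sent.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", PROBE_PORT))
    server.listen()
    while True:
        conn, _ = server.accept()
        conn.close()  # the completed TCP handshake is what marks the target healthy
```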