Here we will discuss how to switch between nodes using Google Cloud’s Internal Load Balancer by creating an Internal Load Balancer that routes traffic to the active node. Clients connect to the frontend IP address provided by the Internal Load Balancer, which regularly checks the health of each VM in the backend pool using a user-defined “Health Check Probe” function and routes client requests to the active node.

Create Instance Groups

In order to use an Internal Load Balancer, the first step is to create Instance Groups, one containing each of our nodes. Create the Instance Groups with the following parameters:

Name    | Region              | Zone | Network | Subnetwork | VM Instance
lk-ig01 | <Deployment region> | a    | lk-vpc  | lk-subnet  | node-a
lk-ig02 | <Deployment region> | b    | lk-vpc  | lk-subnet  | node-b
  1. Select “Compute Engine” > “Instance Groups” from the navigation menu.

  2. Select “CREATE INSTANCE GROUP”, then “New unmanaged instance group”, and enter the parameters listed in the table for lk-ig01.

  3. Create lk-ig02 following the same steps.

  4. Two Instance Groups are now created. (An equivalent gcloud command sequence is sketched below for reference.)
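
For reference, the same Instance Groups can also be created from the command line with the gcloud CLI. The following is a minimal sketch, assuming zones a and b of the deployment region (replace <region> with the actual region name); the first pair of commands creates the unmanaged instance groups and the second pair adds each node to its group.

# gcloud compute instance-groups unmanaged create lk-ig01 --zone=<region>-a
# gcloud compute instance-groups unmanaged create lk-ig02 --zone=<region>-b
# gcloud compute instance-groups unmanaged add-instances lk-ig01 --zone=<region>-a --instances=node-a
# gcloud compute instance-groups unmanaged add-instances lk-ig02 --zone=<region>-b --instances=node-b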

Create an Internal Load Balancer

An internal load balancer distributes traffic to the active node and can be created using the following steps. Before configuring the load balancer, start the application on node-a so that the load balancer can be verified as working before it is configured through LifeKeeper.
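
For example, if the application to be protected is Apache httpd (as in the example later in this section), a minimal way to start it and confirm that it responds locally on node-a before configuring the load balancer is:

# systemctl start httpd
# curl -s -o /dev/null -w '%{http_code}\n' http://localhost/

An HTTP response code of 200 indicates that the web server is answering on port 80.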

  1. On the navigation menu, select “Network Services” > “Load Balancing”.

  2. Click “Start Configuration” under “TCP Load Balancing”.

  3. Select “Only between my VMs” (meaning an Internal Load Balancer) and “Single region only”, then click “Continue”.

  4. Specify the name as lk-lb, then configure the backend parameters. Select the Region, set Network to lk-vpc, then select the two instance groups (lk-ig01 and lk-ig02) from the dropdown menu to add them to the backend pool.

  5. Move on to “Health check” located at the bottom of the Backend configuration page. Select “Create a health check”.

  6. The health check configuration screen appears. Enter the name lk-health-check, select Port, and specify the port used by the LB Health Check resource (e.g. 54321). Check the other parameters and click Save and Continue.

  7. Move to the “Frontend configuration” tab. Provide the name and subnetwork for the frontend. In this example we will use the name lk-frontend and select the lk-subnet subnetwork. Under “IP address”, choose “Ephemeral (Custom)” and specify the custom ephemeral IP address to be used by the load balancer frontend. In this example we will use IP address 10.20.0.10. Under “Ports”, select either “Single”, “Multiple”, or “All” as appropriate, based on the application that traffic is being routed to, and enter any required ports. In this example we will forward traffic on TCP port 80.

  8. Move on to the “Review and Finalize” tab and click “Create”.

  9. Once the Load Balancer is created, its status will be displayed. Since the application (such as httpd) runs on only one of these nodes, a warning icon may be shown for the backend that is not currently running the application; this behavior is expected.

  10. Now the Internal Load Balancer is configured. Once you install the application to protect on node-a, you can connect to it through the frontend IP (10.20.0.10) of the ILB you have just configured. The following example shows how to check the current target for HTTP traffic via the ILB.
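
For example, assuming each node serves a simple page on TCP port 80 that identifies its hostname, a request to the frontend IP shows which node is currently receiving traffic, and the gcloud command reports the health of each backend as seen by the load balancer (replace <region> with the actual region name):

# curl http://10.20.0.10/
# gcloud compute backend-services get-health lk-lb --region=<region>

Only the active node (the one currently responding to the health check) should be reported as healthy.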

Disable IP Forwarding

In certain configurations, two applications which are hosted on different backend servers of an internal load balancer may need to communicate through the frontend IP of the load balancer itself. This happens, for example, when using internal load balancers to manage floating IP failover for ASCS and ERS instances in an SAP AS ABAP deployment.

As a consequence of the route-based load balancer implementation in Google Cloud, traffic that is sent from a backend server of a load balancer to the frontend IP of the load balancer will by default be routed back to the same backend server that it originated from, regardless of whether or not it is considered healthy by the load balancer’s health checks. See the Traffic is sent to unexpected backend VMs section of Troubleshooting Internal TCP/UDP Load Balancing for more details.

The following steps provide a method for disabling IP forwarding on Google Cloud VMs, which also resolves the intra-backend traffic issue described above. These steps must be performed on all VMs which will be added as backend targets for the load balancer.

  1. Set ip_forwarding = false in the [NetworkInterfaces] section of the /etc/default/instance_configs.cfg file.
# vi /etc/default/instance_configs.cfg
# cat /etc/default/instance_configs.cfg
[...]
[NetworkInterfaces]
[...]
ip_forwarding = false
[...]

  2. Once the changes have been saved successfully, reboot the VM to allow the changes to take effect. (A quick way to verify the change is sketched below.)
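
Depending on the version of the Google Cloud guest environment, this setting typically corresponds to the kernel flag net.ipv4.ip_forward. As a sketch, assuming no other component on the VM enables forwarding, the flag can be checked after the reboot:

# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0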

When IP forwarding is disabled, the LifeKeeper LB Health Check resource that responds to the load balancer’s health checks must have a child IP resource protecting the frontend IP address of the load balancer and using net mask /32 (equivalently, 255.255.255.255). Without this child IP resource, the LB Health Check resource will fail to come in-service. See the Create Frontend IP Resource(s) and Add Frontend IP Resources as Dependencies of LB Health Check Resource sections of Responding to Load Balancer Health Checks for details on creating these resources and adding them as dependencies of the LB Health Check Resource.
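
When the LB Health Check resource and its child IP resource are in service on the active node, the frontend IP should be visible on that node as a /32 address. A quick way to check (a sketch; the interface on which the address appears depends on the configuration) is:

# ip -o addr show | grep 10.20.0.10

A line containing 10.20.0.10/32 indicates that the frontend IP resource is in service on that node.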

See the Test LB Health Check Resource Switchover and Failover section of Responding to Load Balancer Health Checks for details on how to test for successful operation of the load balancer and corresponding LB Health Check resource hierarchy.
