DataKeeper Volume cluster resources can be extended to a non-clustered node for Disaster Recovery purposes. In the event of a complete failure of all systems in the cluster, data will remain accessible on this node (referred to as the “DR Node”). This topic describes how to set up this configuration, how to access your data on the DR Node, and how to bring your data back into service in the cluster after the cluster nodes have been restored.

Configuration Tasks

Configuring a non-clustered DataKeeper target node

Recommended configuration for the DR Node

  • Firewalls (Windows as well as any other firewall devices / software on the DR or cluster site) must allow access to DataKeeper-specified ports on the DR Node from all cluster nodes, and vice-versa. See Firewall Configurations for more information.
  • Configure a volume on the DR Node for each clustered DataKeeper Volume that is to be extended to the DR Node. The volume should be at least as big as the clustered volume.

Scenario 1 – Extending existing DataKeeper Volume resources

If you have already configured DataKeeper Volume resources in your cluster, you can extend these volumes to a DR Node using the DataKeeper MMC GUI by following these steps:

  1. Connect the DataKeeper GUI to the DR Node using the “Action / Connect To Server” option.
  2. Connect the DataKeeper GUI to the cluster node where the DataKeeper Volume resource is online.
  3. For each DataKeeper volume that you are extending to the DR Node:

a. In the Jobs view, choose the job that contains the volume to be extended.

b. Choose “Create a Mirror”.

c. Select the mirror Source Node, Volume, and source IP address.

d. Select the DR Node as Target, along with the Volume and IP Address.

e. Choose the mirror parameters and click “OK” to create the mirror.

f. Configure any additional mirror information needed.

See Creating Mirrors with Multiple Targets for more information.

Scenario 2 – Creating a new DataKeeper Volume resource and extending it to the DR Node

If you do not have a DataKeeper Volume resource in your cluster that represents the volume you want to extend to the DR Node, first create the clustered resource, then use the steps in “Scenario 1” above to extend it to the DR Node.

Scenario 3 – Extending a traditional shared-volume cluster to a DR Node using DataKeeper

See Extending a Traditional 2-node WSFC cluster to a third node using DataKeeper for detailed steps that will guide you through extending a shared-volume Microsoft clustered volume to another cluster node.

In this case, we are extending to a non-clustered node. This makes it unnecessary to perform step 2 (configure cluster Quorum settings) and step 7 (add the node to the cluster). Otherwise, the steps remain the same.

Configuration Summary

After you have extended your clustered volume(s) to a DR Node, you will be able to bring the volumes Online and Offline in the cluster as before. The DR Node will remain the mirror target under normal operating conditions.

If you wish, you can check the data on the non-mirrored system by using the “Pause and Unlock Target” option for the mirror whose target is the DR Node. See Pause and Unlock for more information.

Accessing Data on the Non-Clustered Disaster Recovery Node

In the event that all of your clustered nodes are unavailable (possibly due to a disaster of some sort at your primary cluster site), you may need to be able to access the data that has been replicated to a DR Node. Use the following procedure to accomplish this.

Note: Please review the Switching Over a Mirror guidelines before proceeding.

Option 1 – using the DataKeeper GUI

  1. Start the DataKeeper GUI and connect to the DR Node.
  2. Choose the Job that contains the mirror to be made accessible on the DR Node.
  3. Choose “Switchover Mirror” to make the DR Node the mirror source and to make the data on that node accessible.

Option 2 – using EMCMD

On the DR Node, open a command prompt and run the following commands:

  1. cd %ExtMirrBase%
  2. EMCMD . SWITCHOVERVOLUME <volume letter>

Repeat these actions for all volumes that you need to access on the DR Node. DataKeeper will keep track of all changes that occur while the volume is accessible on this node, and will automatically resync these changes to the cluster nodes when they are brought back up and are accessible from the DR Node. However, the volume resources will not automatically come Online in the cluster – manual steps as outlined in the next section must be taken to move the DataKeeper volumes back into the cluster.
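As a minimal sketch, the commands above can be repeated once per volume in a single session on the DR Node. The volume letters E and F below are placeholders for your own clustered volume letters:

```bat
rem Run on the DR Node. Volume letters E and F are examples only.
cd /d %ExtMirrBase%
EMCMD . SWITCHOVERVOLUME E
EMCMD . SWITCHOVERVOLUME F
```

After each switchover, the corresponding volume becomes writable on the DR Node and DataKeeper begins tracking changes for later resynchronization.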

Restoring Data Access to the Cluster

When a cluster node is powered back up after a failure, there are several states that its mirrors can be put into, depending on the mirror state at the time of the original failure, the current network conditions, and the state of other nodes in the cluster. The volume may be in the Source, Target, or None role after all cluster nodes have been restored. You should use the DataKeeper GUI on any of the cluster nodes to determine the mirror role, and to resolve any possible Split-Brain conditions that may exist. See Split Brain Issue and Recovery for more information. If you are resolving Split-Brain, you should choose the DR Node to be the node that remains source, since it is the one that has the most up-to-date data.

As long as the DR Node is accessible from the cluster node and the DR Node’s mirror is in the Source role, any Online request on that cluster node will fail.

Steps to bring the clustered DataKeeper Volume resource back Online

In order to bring the DataKeeper Volume resource Online on a cluster node, the mirror must be switched over to the cluster node where that volume was last Online before the outage (the “Last Source” node for that volume), and the DR Node’s volume must be made a Target of the clustered volume. At that point, the DataKeeper Volume resource can be brought Online on the cluster node.

To determine which cluster node is the Last Source node for a particular volume, run either of the following commands on any of the cluster nodes:

  • (using cluster.exe) – cluster res "<DataKeeper Volume Resource name>" -priv
  • (using PowerShell) – Get-ClusterResource -Name "<DataKeeper Volume Resource name>" | Get-ClusterParameter
    The output should include a line containing the value “LastSource”; the name of the Last Source node appears on that line.
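For example, for a hypothetical resource named “DataKeeper Volume E”, either of the commands above can be run from an elevated prompt on any cluster node. The resource name here is a placeholder for your own:

```bat
rem Resource name "DataKeeper Volume E" is an example only.
cluster res "DataKeeper Volume E" -priv

rem PowerShell equivalent, filtered to just the LastSource parameter:
powershell -Command "Get-ClusterResource -Name 'DataKeeper Volume E' | Get-ClusterParameter -Name LastSource"
```

Note the node name reported as LastSource; that is the cluster node the mirror must be switched back to in the steps below.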

Follow these steps to bring the resource Online:

  1. If any DataKeeper Volume resources are already Online in the cluster, take them Offline. This is necessary in order to resolve Split Brain conditions in the next step.
  2. Start the DataKeeper GUI on one of the cluster nodes. Resolve any Split Brain conditions, choosing the DR Node as the mirror source.
  3. Monitor the state of the mirrors that have been created from the DR Node (Mirror Source) to the clustered nodes (Targets). If any clustered nodes are shared, only one of them will be a mirror target.
  4. Once the mirror from the DR Node (Mirror Source) to the LastSource cluster node has reached the Mirroring state, the LastSource node can be made Source.

a. Open a command prompt on that cluster node.

b. Run the command cd %ExtMirrBase%

c. Run the command EMCMD . SWITCHOVERVOLUME <volume letter>

Repeat these steps for each volume. If multiple volumes are part of the same resource group, be sure to switch each of them over to their Last Source node.

Then, using Failover Cluster Manager, bring the volumes and associated applications / roles Online.
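If you prefer the command line to Failover Cluster Manager, the clustered role can also be brought Online with the FailoverClusters PowerShell module. The role name below is a placeholder for your own application role or group:

```bat
rem Role name "MyAppRole" is an example only.
powershell -Command "Start-ClusterGroup -Name 'MyAppRole'"
```

Once the group is Online, verify in the DataKeeper GUI that all mirrors return to the Mirroring state with the cluster node as source and the DR Node as target.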

