This operation can be started from the Edit menu on the Primary Server to extend the hierarchy to the Secondary Server, or it is initiated automatically upon completion of the Create Resource Hierarchy option, in which case you should refer to Step 2 below.
- On the Edit menu, select Resource then Extend Resource Hierarchy. The Pre-Extend Wizard appears. If you are unfamiliar with the Extend operation, click Next. If you are familiar with the LifeKeeper Extend Resource Hierarchy defaults and want to bypass the prompts for input/confirmation, click Accept Defaults.
- The Pre-Extend Wizard will prompt you to enter the following information.
|Template Server||Select the Template Server where your DataKeeper resource hierarchy is currently in service. It is important to remember that the Template Server you select now and the Tag to Extend that you select in the next dialog box represent an in-service (activated) resource hierarchy. An error message will appear if you select a resource tag that is not in service on the template server you have selected. The drop down box in this dialog provides the names of all the servers in your cluster.|
|Tag to Extend||This is the name of the DataKeeper instance you wish to extend from the template server to the target server. The drop down box will list all the resources that you have created on the template server.|
|Target Server||Enter or select the server you are extending to.|
|Switchback Type||You must select intelligent switchback. This means that after a failover to the backup server, an administrator must manually switch the DataKeeper resource back to the primary server.|
CAUTION: This release of SIOS DataKeeper does not support Automatic Switchback for DataKeeper resources. Additionally, the Automatic Switchback restriction is applicable for any other LifeKeeper resource sitting on top of a DataKeeper resource.
|Template Priority||Select or enter a Template Priority. This is the priority for the DataKeeper hierarchy on the server where it is currently in service. Any unused priority value from 1 to 999 is valid, where a lower number means a higher priority (1=highest). The extend process will reject any priority for this hierarchy that is already in use by another system. The default value is recommended.|
Note: This selection will appear only for the initial extend of the hierarchy.
|Target Priority||Select or enter the Target Priority. This is the priority for the new extended DataKeeper hierarchy relative to equivalent hierarchies on other servers. Any unused priority value from 1 to 999 is valid, indicating a server’s priority in the cascading failover sequence for the resource. A lower number means a higher priority (1=highest). Note that LifeKeeper assigns the number “1” to the server on which the hierarchy is created by default. The priorities need not be consecutive, but no two servers can have the same priority for a given resource.|
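The priority rules above (values 1 to 999, lower number means higher priority, no duplicates among servers) can be sketched as follows. This is an illustrative model only; the function name and structure are not part of the LifeKeeper API.

```python
# Hypothetical sketch of the priority rules for one resource hierarchy:
# each server gets a unique value from 1 to 999, and failover cascades
# from the lowest value (highest priority) downward.

def validate_priorities(priorities):
    """priorities: dict mapping server name -> priority for one hierarchy.

    Returns the servers in cascading failover order, or raises ValueError
    if a rule is violated.
    """
    values = list(priorities.values())
    if any(not (1 <= p <= 999) for p in values):
        raise ValueError("priorities must be between 1 and 999")
    if len(set(values)) != len(values):
        raise ValueError("no two servers can share a priority")
    # Sort servers by priority value, ascending: 1 = highest priority.
    return sorted(priorities, key=priorities.get)

order = validate_priorities({"primary": 1, "backup": 10})
print(order)  # ['primary', 'backup'] — primary is tried first
```

Note that, as the text says, the values need not be consecutive; only uniqueness and the 1 to 999 range are enforced.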
After receiving the message that the pre-extend checks were successful, click Next.
Depending upon the hierarchy being extended, LifeKeeper will display a series of information boxes showing the Resource Tags to be extended, some of which cannot be edited.
- Click Next to launch the Extend Resource Hierarchy configuration task.
- The next section lists the steps required to complete the extension of a DataKeeper resource to another server.
Extending a DataKeeper Resource
- After you have been notified that your pre-extend script has executed successfully, you will be prompted for the following information:
|Mount Point||Enter the name of the file system mount point on the target server. (This dialog will not appear if there is no LifeKeeper-protected filesystem associated with the DataKeeper Resource.)|
|Root Tag||Select or enter the Root Tag. This is a unique name for the filesystem resource instance on the target server. (This dialog will not appear if there is no LifeKeeper-protected filesystem associated with the DataKeeper Resource.)|
|Target Disk or Partition||Select the disk or partition where the replicated file system will be located on the target server. The drop down box lists all the available disks or partitions on the target server, filtering out special disks or partitions, for example, root (/), boot (/boot), /proc, floppy and cdrom.
Note: The size of the target disk or partition must be greater than or equal to that of the source disk or partition.|
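The eligibility rules just described (special mounts filtered out, target at least as large as the source) can be sketched as below. The candidate data, device names, and function are illustrative assumptions; on a real system the byte counts could come from a tool such as `blockdev --getsize64 <device>`.

```python
# Conceptual sketch of target disk/partition eligibility — not LifeKeeper
# code. Special mount points are filtered out, and only candidates at
# least as large as the source qualify.

SPECIAL_MOUNTS = {"/", "/boot", "/proc"}  # floppy/cdrom devices also excluded

def eligible_targets(candidates, source_bytes):
    """candidates: list of (device, mount_point_or_None, size_bytes)."""
    return [dev for dev, mnt, size in candidates
            if mnt not in SPECIAL_MOUNTS and size >= source_bytes]

GIB = 1024 ** 3
candidates = [
    ("/dev/sdb1", None, 30 * GIB),   # unmounted and large enough -> eligible
    ("/dev/sda1", "/", 100 * GIB),   # root filesystem -> filtered out
    ("/dev/sdc1", None, 5 * GIB),    # smaller than the source -> rejected
]
print(eligible_targets(candidates, 20 * GIB))  # ['/dev/sdb1']
```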
|DataKeeper Resource Tag||Select or enter the DataKeeper Resource Tag name.|
|Bitmap File||Select the name of the bitmap file used for intent logging. If you choose None, then an intent log will not be used and every resynchronization will be a full resync instead of a partial resync.|
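The partial-versus-full resync distinction can be sketched conceptually: an intent-log bitmap marks which chunks changed while the mirror was out of sync, so only those chunks need to be copied. This is a simplified model, not DataKeeper's implementation; the chunk granularity and data structures are assumptions.

```python
# Conceptual sketch of why an intent-log bitmap enables partial resync.
# Each entry marks a chunk that was written while replication was broken.

def chunks_to_resync(dirty_bitmap, total_chunks):
    """dirty_bitmap: list of booleans, one per chunk, or None if no bitmap."""
    if dirty_bitmap is None:
        # No intent log: every chunk must be copied (full resync).
        return list(range(total_chunks))
    # Intent log present: copy only the chunks marked dirty (partial resync).
    return [i for i, dirty in enumerate(dirty_bitmap) if dirty]

bitmap = [False] * 10
bitmap[2] = bitmap[7] = True
print(chunks_to_resync(bitmap, 10))  # [2, 7] — only two chunks copied
print(chunks_to_resync(None, 10))    # [0, 1, ..., 9] — full resync
```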
|Replication Path||Select the pair of local and remote IP addresses to use for replication between the target server and the other indicated server in the cluster. The valid paths and their associated IP addresses are derived from the set of LifeKeeper communication paths that have been defined for this same pair of servers. Due to the nature of DataKeeper, it is strongly recommended that you use a private (dedicated) network.
If the DataKeeper Resource has previously been extended to one or more target servers, the extension to an additional server will loop through each of the pairings of the new target server with existing servers, prompting for a Replication Path for each pair.|
|Replication Type||Choose “synchronous” or “asynchronous” to indicate the type of replication that should be used between the indicated pair of servers.
As for the Replication Path field, if the DataKeeper Resource has previously been extended to one or more target servers, the extension to an additional server will loop through each of the pairings of the new target server with existing servers, prompting for a Replication Type for each pair.|
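The pairing loop described above can be sketched as follows: extending to a new target produces one prompt per pairing of the new target with each server already in the hierarchy. The server names and function are illustrative only.

```python
# Sketch of the extend loop: when a hierarchy already spans several
# servers, extending to a new target prompts once per pairing of the
# new target with each existing server (for both the Replication Path
# and the Replication Type).

def extend_pairings(existing_servers, new_target):
    return [(new_target, existing) for existing in existing_servers]

pairs = extend_pairings(["serverA", "serverB"], "serverC")
print(pairs)  # [('serverC', 'serverA'), ('serverC', 'serverB')]
```

So a hierarchy already on two servers yields two Replication Path prompts and two Replication Type prompts when extended to a third.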
- Click Next to continue. An information box will appear verifying that the extension is being performed.
- Click Finish to confirm the successful extension of your DataKeeper resource instance.
- Click Done to exit the Extend Resource Hierarchy menu selection.
During resynchronization, the DataKeeper resource and any resource that depends on it will not be able to fail over. This is to avoid data corruption.