After you have created a DRBD hierarchy, you should extend that hierarchy to another server in the cluster.

  1. There are three ways to begin:
    a. After you successfully create your DRBD resource hierarchy, click Next to proceed with extending your resource hierarchy to your backup server, beginning with step 2.
    b. Right-click on an in-service resource in either the left or right pane of the LifeKeeper GUI and select Extend Resource Hierarchy, beginning with step 2.
    c. Select Edit > Resource > Extend Resource Hierarchy.
      i. The Template Server dialog will appear.

Select the name of the server where the DRBD resource that you want to extend exists. All servers in your cluster are included in the dropdown list box.

Click Next to continue.

ii. The Tag to Extend dialog will appear.

Select the name of the tag on the template server you wish to extend. All resources configured on the template server are included in the dropdown list box.

Click Next to continue with step 2.

  2. The Target Server dialog will appear.

The Accept Defaults button is intended for users who are familiar with the LifeKeeper Pre-Extend Resource Hierarchy defaults and want to quickly extend a LifeKeeper resource hierarchy without being prompted for input or confirmation. Users who prefer to extend a LifeKeeper resource hierarchy using the interactive, step-by-step interface of the GUI dialogs should use the Next button.

  3. The Switchback Type dialog will appear.

Select either intelligent or automatic. This dictates how the DRBD resource will be switched back to this server when the server comes back up after a failover. The switchback type can be changed later from the General tab of the Resource Properties dialog box. An intelligent switchback (recommended) means that after a failover to the backup server, an administrator must manually switch the DRBD resource back to the primary server. An automatic switchback means that after a failover to the backup server the DRBD resource will automatically switch back to the primary server.

Click Next to continue.

  4. The Template Priority dialog will appear.

Select or enter a Template Priority. This is the priority for the DRBD hierarchy on the server where it is currently in service. Any unused priority value from 1 to 999 is valid, where a lower number means a higher priority (1=highest). The extend process will reject any priority for this hierarchy that is already in use by another system. The default value is recommended.

Click Next to continue.

  5. The Target Priority dialog will appear.

This is the priority for the new extended DRBD hierarchy relative to equivalent hierarchies on other servers. Any unused priority value from 1 to 999 is valid, indicating a server’s priority in the cascading failover sequence for the resource. A lower number means a higher priority (1=highest). Note that LifeKeeper assigns the number “1” to the server on which the hierarchy is created by default. The priorities need not be consecutive, but no two servers can have the same priority for a given resource.

Click Next to continue.

  6. This completes the information required for the Pre-Extend Wizard. The data entered will be verified to determine whether the extend can proceed.

The Accept Defaults button is intended for users who are familiar with the File System and DRBD Extend Resource Hierarchy defaults and want to quickly extend a LifeKeeper resource hierarchy without being prompted for input or confirmation. Users who prefer to extend a LifeKeeper resource hierarchy using the interactive, step-by-step interface of the GUI dialogs should use the Next button.

  7. The Mount Point on the target server dialog will appear.

Enter the name of the file system mount point on the target server.

Click Next to continue.
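Before this step, you can check from a shell on the target server that the intended mount point exists and is not already in use. A minimal sketch, where /mnt/data is a hypothetical path standing in for your actual mount point:

```shell
# /mnt/data is a hypothetical mount point -- substitute your own.
MNT=/mnt/data

# Create the directory if it does not exist yet.
mkdir -p "$MNT"

# findmnt exits non-zero when nothing is mounted at the path, which is
# the desired state before extending the file system resource.
if findmnt "$MNT" >/dev/null 2>&1; then
    echo "WARNING: $MNT already has a file system mounted"
else
    echo "$MNT is available"
fi
```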

  8. The Root Tag dialog will appear (the file system resource tag).

This is a unique name for the file system resource on the target server.

Click Next to continue.

  9. The Target Disk dialog will appear.

Select from the dropdown list of disks, partitions, or logical volumes that are not:

• Currently mounted

• LifeKeeper-protected

• Special disks or partitions, for example, swap, root (/), boot (/boot), /proc, and cdrom

The list includes only the disks, partitions and logical volumes that are the same size or larger than the source disk.

The selected disk or partition is required to have a unique ID (GUID) found in /dev/disk/by-partuuid, /dev/disk/by-id or /dev/disk/by-uuid.

Click Next to continue.
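You can preview which devices on the target server satisfy these criteria using standard Linux tools. A minimal sketch (the device names shown are whatever your system exposes, not values produced by LifeKeeper):

```shell
# Persistent unique IDs: a valid Target Disk must appear under one of
# these directories (not every directory exists on every system).
ls -l /dev/disk/by-partuuid /dev/disk/by-id /dev/disk/by-uuid 2>/dev/null || true

# Size and mount state for every block device: mounted devices and
# devices smaller than the source disk will not appear in the dialog.
lsblk -o NAME,SIZE,MOUNTPOINT 2>/dev/null || true
```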

  10. The Replication Path dialog will appear.

Select the pair of local and remote IP addresses to use for replication between the target server and the other indicated server in the cluster. The valid paths and their associated IP addresses are derived from the set of LifeKeeper communication paths that have been defined for this same pair of servers.

Click Next to continue.

  11. The Replication Type dialog will appear.

Select from the list of DRBD supported replication types:

a. Asynchronous replication protocol (A). Local write operations on the primary server are considered completed as soon as the local disk write has finished and the replication packet has been placed in the local TCP send buffer. In the event of a forced failover, data loss may occur.

b. Memory Synchronous replication protocol (B). Local write operations on the primary server are considered completed as soon as the local disk write has occurred and the replication packet has reached the peer server. Normally, no writes are lost in case of forced failover.

c. Synchronous replication protocol (C). Local write operations on the primary server are considered completed only after both the local and the remote disk write(s) have been confirmed. As a result, loss of a single server is guaranteed not to lead to any data loss.
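For reference, the protocol selected here corresponds to the protocol setting in the DRBD resource configuration. A minimal hand-written equivalent is sketched below; the resource name r0, host names, device paths, and addresses are illustrative assumptions, not values generated by the Recovery Kit:

```
resource r0 {
    net {
        protocol C;    # synchronous replication (option c above); use A or B
                       # for the asynchronous or memory-synchronous protocols
    }
    on server1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.10:7789;
        meta-disk internal;
    }
    on server2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.11:7789;
        meta-disk internal;
    }
}
```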

This is the last dialog to gather information. Select Next to extend. An information box will appear showing the steps being performed to extend the hierarchy.

During the initial extend of the DRBD resource, a full resync is not necessary because the internal metadata tracks all changes from the initial UUID. Note that if a hierarchy is unextended and then re-extended, the UUID is cleared, forcing a full resync.
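After the extend completes, you can confirm the replication and resync state from a shell on either server. A minimal sketch (`drbdadm status` is the DRBD 9 command; /proc/drbd is the DRBD 8.x interface):

```shell
# Show the current replication/resync state of all DRBD resources,
# falling back from the DRBD 9 command to the 8.x interface.
drbdadm status 2>/dev/null \
  || cat /proc/drbd 2>/dev/null \
  || echo "DRBD is not loaded on this host"
```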

The DRBD Recovery Kit supports extending to two servers. Selecting Next Server will prompt for the Pre-Extend information (Target Server, Switchback Type, and Target Priority), but verification will fail if the extend targets a third server.

Select Finish to confirm the successful extension of your hierarchy.

Select Done to exit.
