This document will guide you through the installation of the SIOS Protection Suite for Linux (SPS) and assumes the user has basic knowledge of the Linux operating system. Please refer to the SIOS Protection Suite for Linux product documentation for more information.

Pre-Installation Requirements

Before installing SPS for Linux, please check the following:

  • SPS for Linux Release Notes – The Release Notes list the supported platforms, operating systems, applications, and storage, along with the latest features and bug fixes.
  • TCP/IP Connection and Name Resolution – In order to use the GUI, both cluster nodes must be able to resolve each other's host names. Use the DNS service or /etc/hosts for name resolution. Also, localhost must resolve to 127.0.0.1.
  • Firewall – The following ports are used:
    • Communication Path (TCP): 7365/tcp
    • Communication for the GUI Server: 81/tcp, 82/tcp
    • RMI Communication between the GUI Server and Client: ports 1024/tcp and above
    • Synchronization of DataKeeper (when using DataKeeper): 10001 + <mirror number> + <256 * i>
More Firewall Information
  • The ports used for communication between the GUI server and a client need to be open on the cluster node where SPS is installed and on all systems where the GUI client runs.
  • The ports used by DataKeeper can be calculated using the formula above. The value of i starts at 0 and is incremented until an unused port is found. For example, in an environment where a DataKeeper resource with mirror number 0 exists, if port 10001 is being used by another application, port 10257 will be used.
  • For communication between the GUI server and a client, Java RMI (Remote Method Invocation) randomly uses ports 1024 and above. When applying access control to a cluster system, packet filtering needs to take these ports into account. If this behavior is an issue from a security standpoint, you can use SSH X forwarding instead. Please refer to the Technical Documentation for the setting details.
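The DataKeeper port formula above can be checked with a quick shell calculation. The mirror number and i values here match the example above (mirror number 0, base port 10001 already in use):

```shell
# DataKeeper sync port = 10001 + <mirror number> + 256 * i
# i starts at 0 and is incremented until an unused port is found.
mirror=0   # mirror number from the example above
i=1        # i=0 gives base port 10001, assumed busy, so try the next candidate
port=$((10001 + mirror + 256 * i))
echo "$port"   # prints 10257
```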
    • Check the SELinux Setting – SPS for Linux cannot be installed while SELinux is enabled. Please refer to your OS distribution's documentation on how to disable SELinux. Using SELinux permissive mode is not recommended unless it is required, as in an SAP environment; make sure that the application to be run on the cluster supports permissive mode. SELinux permissive mode has been tested for the following ARKs: SAP / SAP MaxDB / Sybase / Oracle / DB2 / NFS / DataKeeper / NAS / EC2 / IP / FileSystem / MQ
      Refer to Linux Dependencies for required packages.
      • Install the appropriate package provided by your distribution.
      • The sg3_utils package is required only in environments that use Multipath recovery kits such as the DMMP Recovery Kit and the PowerPath Recovery Kit.
    • Check Known Issues – Please make sure that there are no known issues for your environment.
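The name-resolution and SELinux checks above can be sketched as a small pre-flight script. The node names below are hypothetical placeholders; adjust them to your cluster:

```shell
#!/bin/sh
# Pre-install sanity checks (a sketch; node names are placeholders)
for host in node1 node2; do
  getent hosts "$host" >/dev/null || echo "WARN: $host does not resolve (fix DNS or /etc/hosts)"
done

# localhost must resolve to 127.0.0.1
getent hosts localhost | grep -q '127\.0\.0\.1' || echo "WARN: localhost does not resolve to 127.0.0.1"

# SELinux must be disabled (permissive mode only for the tested ARKs)
if command -v getenforce >/dev/null 2>&1; then
  echo "SELinux mode: $(getenforce)"
fi
```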

Installing SPS for Linux

Install the SPS software on each server in the SPS configuration.

Packages that LifeKeeper depends on are installed automatically: the LifeKeeper installation setup script uses the distribution's package manager (yum or zypper) to ensure that all dependent packages are installed.

The SPS for Linux image file (sps.img) provides a set of installation scripts designed to perform the user interactive system setup tasks that are necessary when installing SPS on your system.

A licensing utilities package is also installed providing utilities for obtaining and displaying the Host ID or Entitlement ID of your server. Host IDs and/or Entitlement IDs are used to obtain valid licenses for running SPS.

Obtaining and Installing the License

SPS for Linux requires a unique license for each server. The license is a run-time license, which means that you can install SPS without it, but the license must be installed before you can successfully start and run the product.


Note: If using newer hardware with RHEL 6.1, please see the IP Licensing Known Issues in the SPS for Linux Troubleshooting Section.


The installation script installs the Licensing Utilities package, which obtains and displays all of the available Host IDs for your server during the initial install of your SIOS Protection Suite software. Once your licenses have been installed, the utility returns the Entitlement ID if it is available, or the Host IDs if it is not.


Note: Host IDs, if displayed, are always based on the MAC addresses of the NICs.


The new licenses obtained from the SIOS Technology Corp. Licensing Operations Portal will contain your Entitlement ID and will be locked to a specific node or IP address in the cluster. The Entitlement ID (Authorization Code) which was provided with your SIOS Protection Suite Software, is used to obtain the permanent license required to run the SIOS Protection Suite Software. The process is illustrated below.



Note: Each software package requires a license for each server.


Perform the following steps to obtain and install your license(s) for each server in the SPS cluster:

  1. Ensure you have your LifeKeeper Entitlement ID (Authorization Code). You should have received an email with your software containing the Entitlement ID needed to obtain the license.
  2. Obtain your licenses from the SIOS Technology Corp. Licensing Operations Portal.

    a. Using a system that has internet access, log in to the SIOS Technology Corp. Licensing Operations Portal.
    b. From the Activation & Entitlements dropdown list, select List Entitlements.
       Note: To change your password, use the Profile button in the upper right corner of the display.
    c. Find your Entitlement ID and select each Activation ID associated with that Entitlement ID by checking the box to the left of the line item.
    d. From the Action dropdown list, select Activate.
    e. Define the required fields and select Next.
    f. Click on the green plus sign to add a new host.
    g. Select and define the required fields and click Okay. (Note: Internet = IP address, Ethernet = MAC address)
    h. Check the box to the left of the Host ID or IP address and select Generate. The Fulfillment ID will display on the License Summary screen.
    i. Select Complete.
    j. Check the box to the left of the Fulfillment ID and select Email from the View dropdown list.
    k. Enter a valid email address to send the license to and select Send.
    l. Retrieve the email(s).
    m. Copy the file(s) to a temporary directory on each node, and make sure that each license matches the node's MAC address. This path and file name(s) will be used during the ‘Install License Key’ portion of the setup script.

How to Install / Upgrade SPS Using the Setup Script

To install or upgrade SPS, follow the steps below.

Interactive Mode

  1. After logging in as the root user, use the following command to mount the sps.img file:

mount <PATH/IMAGE_NAME> <MOUNT_POINT> -t iso9660 -o loop

where PATH is the path to the image, IMAGE_NAME is the name of the image, and MOUNT_POINT is the path to the mount location.
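For example, if sps.img was downloaded to /tmp and /mnt/sps is used as the mount point (both paths are hypothetical; substitute your own), the commands would look like this:

```shell
# Loop-mount the SPS install image (example paths; run as root)
mkdir -p /mnt/sps
mount /tmp/sps.img /mnt/sps -t iso9660 -o loop
```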

  2. Change to the directory where sps.img is mounted and enter the following:

./setup

  3. The script collects information about the system environment and determines what you need to do to install SPS.

If the system requirements for installation or upgrade are not satisfied, then an error message is displayed and the installation / upgrade is cancelled.

Also, if some restrictions arise or a configuration change is required, a warning message will be displayed requiring the user to decide whether to continue or abort the installation.

  4. Select the SPS features and Application Recovery Kits (ARKs) to install via the main dialog screen.

How to Use the Dialog Screen

Note: If the kernel version is not supported for asynchronous mirroring, a dialog stating so will appear.

The dialog screen is displayed below.

Use the following keys to navigate the menu.

↑ ↓ Navigate between menu items
← → Navigate between the menu buttons at the bottom of the screen
ENTER Open the selected sub menu
Y / N / SPACE Turn on, turn off or invert the selected item

The menu buttons at the bottom of the screen are used for the following operations.

Select Opens a sub menu dialog screen
Done Closes the current screen and returns to the previous screen. Selecting this button on the main screen completes the configuration.
Help Displays help for the highlighted item
Save Saves the current settings in a configuration file. The saved configuration file can be used for non-interactive installations.
Load Loads a saved configuration file

The “Save” and “Load” menu buttons display a dialog screen asking for a configuration file name for use in saving the current configuration or for loading a saved configuration. If you want to change the default file name provided, move to the file name field using the [TAB] key, and enter a new name. Note: The “Save” operation will prompt for confirmation before overwriting a file with the same name.

The items listed below are configurable during installation. During an upgrade, only the items that can still be configured are listed. Using the hotkey <Z> will show the items that will remain unchanged during the upgrade.

  • Install Java Runtime (JRE)

Install the Java runtime environment used by the LifeKeeper GUI.

  • Restart NFS Service

When configuring High Availability NFS, restarting the NFS services is required. When this is selected, the services are restarted automatically after the configuration is completed.

Note: If you do not want to restart the NFS services automatically, a restart will need to be done to pick up the configuration changes before using the NFS Recovery Kit.

  • Use Quorum / Witness Functions

Use Quorum / Witness for I/O fencing. For details, please refer to Quorum/Witness in the technical documentation.

The Quorum/Witness Server Support Package for LifeKeeper must be installed on every node in the cluster that uses quorum/witness functionality, including witness-only nodes. The only configuration requirement for the witness node is to create the appropriate comm paths. When using a quorum mode of tcp_remote, LifeKeeper does not need to be installed on the hosts that are set for quorum checking in the /etc/default/LifeKeeper configuration file.

The general process for setting up quorum/witness functionality will involve the following steps:


  1. Set up the server and make sure that it can communicate with other servers.
  2. Install LifeKeeper on the server. During the installation, enable “Use Quorum / Witness functions” with the setup command and install the quorum/witness package as well.
  3. Create appropriate communication paths between the nodes including witness-only nodes.
  4. Configure quorum/witness.

When the above steps are completed, the quorum/witness functions will be activated in the cluster and quorum checking and witness checking will be performed before failovers are allowed.

  • LifeKeeper Authentication

Specify the users allowed to log in to the SPS for Linux GUI along with their privilege levels. Multiple user accounts can be specified by separating them with spaces. For details, refer to GUI User Settings.

  • Install License Key File(s)

Install the licenses required to start SPS for Linux by entering the path name of the license file to install. Multiple files can be specified by separating them with spaces.

  • Recovery Kit Selection

Select the Application Recovery Kits to install.

Application Recovery Kits are broken into several categories based on common functionality.

Please refer to Categories for Application Recovery Kits for details.

  • LifeKeeper Startup After Install

When selected, SPS for Linux will be started when the installation is completed.

Categories for Application Recovery Kits

  • Application Suite – A group of recovery kits that protect applications such as SAP and IBM MQ.
  • Networking – A group of recovery kits that protect network services in the cloud such as EC2 and Route53.
  • Database – A group of recovery kits that protect database applications, including, but not limited to, Oracle, PostgreSQL, and MaxDB.
  • File Sharing – A group of recovery kits that protect file sharing services such as NFS and Samba.
  • Mail Server – A group of recovery kits that protect email services such as Postfix.
  • Storage – A group of recovery kits that protect data storage methods, including, but not limited to, DataKeeper (replication), Device Mapper (DM) Multipath (DMMP), and Network Attached Storage (NAS).
  • Web Server – A group of recovery kits that protect web services such as Apache.
  5. Once all the required SPS features and ARKs have been selected, select <Done> to begin the installation.

If any notifications are output when the installation completes, please take the necessary actions to correct them.

Creating a Cluster

To create a cluster system, first set up a “communication path” between the nodes that make up the HA cluster. Then create “resources” to define what to protect.

Connecting with LifeKeeper GUI Client

Configure LifeKeeper using the GUI.

After starting LifeKeeper, start the LifeKeeper GUI client with the lkGUIapp command:

# /opt/LifeKeeper/bin/lkGUIapp

After executing the command, the GUI client is started and the login screen is launched. Server Name is the name of the server you are running. For login username and password, enter the LifeKeeper admin user name and password. By default, the operating system super user (root) and its password are used for the admin user.

After successfully logging in the following screen is displayed.

Note: You can also access the GUI from a remote host via a web browser. When using a web browser, host names must be resolvable between the remote host and the cluster server. Use port 81 for browser access: from the remote host, enter http://<host name>:81 or http://<IP address>:81. Please note that there are several requirements for using the LifeKeeper GUI with a browser; refer to the Release Notes and the Technical Documentation for details. Because of these requirements, browser operation may be difficult depending on the network environment. In such cases, consider using remote desktop or SSH X forwarding.

Creating a Communication Path

To create a communication path between a pair of servers, you must define the path individually on both servers. LifeKeeper allows you to create both TCP (TCP/IP) and TTY communication paths between a pair of servers. Only one TTY path can be created between a given pair. However, you can create multiple TCP communication paths between a pair of servers by specifying the local and remote addresses that are to be the end-points of the path. A priority value is used to tell LifeKeeper the order in which TCP paths to a given remote server should be used.

  1. On the global toolbar, click the Create Comm Path button.
  2. A dialog entitled Create Comm Path will appear. For each of the options that follow, click Help for an explanation of each choice.
  3. Select the Local Server from the list box and click Next.
  4. Select one or more Remote Servers in the list box. If a remote server is not listed in the list box (i.e. it is not yet connected to the cluster), you may enter it using Add. You must make sure that the network addresses for both the local and remote servers are resolvable (for example, with DNS or added to the /etc/hosts file). Click Next.
  5. Select either TCP or TTY for Device Type and click Next.
  6. Select one or more Local IP Addresses if the Device Type was set to TCP. Select the Local TTY Device if the Device Type was set to TTY. Click Next.
  7. Select the Remote IP Address if the Device Type was set to TCP. Select the Remote TTY Device if the Device Type was set to TTY. Click Next.
  8. Enter or select the Priority for this comm path if the Device Type was set to TCP. Enter or select the Baud Rate for this comm path if the Device Type was set to TTY. Click Next.
  9. Click Create. A message should be displayed indicating the network connection is successfully created. Click Next.
  10. If you selected multiple Local IP Addresses or multiple Remote Servers and the Device Type was set to TCP, then you will be taken back to Step 6 to continue with the next comm path. If you selected multiple Remote Servers and the Device Type was set to TTY, then you will be taken back to Step 5 to continue with the next comm path.
  11. Click Done when presented with the concluding message.

You can verify the comm path by viewing the Server Properties Dialog or by entering the command lcdstatus -q. See the LCD man page for information on using lcdstatus. You should see an ALIVE status.

In addition, check the server icon in the right pane of the GUI. If this is the first comm path that has been created, the server icon shows a yellow heartbeat, indicating that one comm path is ALIVE, but there is no redundant comm path.

The server icon will display a green heartbeat when there are at least two comm paths ALIVE.

Creating Resource Hierarchies

Create resources for the services and applications you want to protect.

  1. On the global toolbar, click on the Create Resource Hierarchy button.
  2. A dialog entitled Create Resource Hierarchy will appear with a list of all recognized recovery kits installed within the cluster. Select the Recovery Kit that builds resource hierarchies to protect your application and click Next.
  3. Select the Switchback Type and click Next.
  4. Select the Server and click Next. Note: If you began from the server context menu, the server will be determined automatically from the server icon that you clicked on, and this step will be skipped.
  5. Continue through the succeeding dialogs, entering whatever data is needed for the type of resource hierarchy that you are creating.

Recovery Kit Options

Each optional recovery kit that you install adds entries to the Select Recovery Kit list; for example, you may see Oracle, Apache, and NFS Recovery Kits. Refer to the Administration Guide that accompanies each recovery kit for directions on creating the required resource hierarchies.

Creating a File System Resource Hierarchy

  1. On the global toolbar, click on the Create Resource Hierarchy button.
  2. A dialog entitled Create Resource Wizard will appear with a Recovery Kit list. Select File System Resource and click Next.
  3. Select the Switchback Type and click Next.
  4. Select the Server and click Next. Note: If you began from the server context menu, the server will be determined automatically from the server icon that you clicked on, and this step will be skipped.
  5. The Create gen/filesys Resource dialog will now appear. Select the Mount Point for the file system resource hierarchy and click Next. The selected mount point will be checked to see that it is shared with another server in the cluster by checking each storage kit to see if it recognizes the mounted device as shared. If no storage kit recognizes the mounted device, then an error dialog will be presented:

<file system> is not a shared file system

Selecting OK will return to the Create gen/filesys Resource dialog.

Notes:

    • In order for a mount point to appear in the choice list, the mount point must be currently mounted. If an entry for the mount point exists in the /etc/fstab file, LifeKeeper will remove this entry during the creation and extension of the hierarchy. It is advisable to make a backup of /etc/fstab prior to using the NAS Recovery Kit, especially if you have complex mount settings. You can direct that entries are re-populated back into /etc/fstab on deletion by setting the /etc/default/LifeKeeper tunable REPLACEFSTAB=true|TRUE.
    • Many of these resources (SIOS DataKeeper, LVM, Device Mapper Multipath, etc.) require LifeKeeper recovery kits on each server in the cluster in order for the file system resource to be created. If these kits are not properly installed, then the file system will not appear to be shared in the cluster.
  6. LifeKeeper creates a default Root Tag for the file system resource hierarchy. (This is the label used for this resource in the status display.) You can select this root tag or create your own, then click Next.
  7. Click Create Instance. A window will display a message indicating the status of the instance creation.
  8. Click Next. A window will display a message that the file system hierarchy has been created successfully.
  9. At this point, you can click Continue to move on to extending the file system resource hierarchy, or you can click Cancel to return to the GUI. If you click Cancel, you will receive a warning message that your hierarchy exists on only one server, and it is not protected at this point.

Frequently Used Commands

  • Starting the LifeKeeper GUI client
    # /opt/LifeKeeper/bin/lkGUIapp
  • Starting LifeKeeper
    # /opt/LifeKeeper/bin/lkstart
  • Stopping LifeKeeper (stopping resources)
    # /opt/LifeKeeper/bin/lkstop
  • Stopping LifeKeeper (without stopping resources)
    # /opt/LifeKeeper/bin/lkstop -f
  • Checking the status of LifeKeeper
    Specify the "-e" option to display a simplified status
    # /opt/LifeKeeper/bin/lcdstatus (or lcdstatus -e)
  • Checking the LifeKeeper log
    Refer to /var/log/lifekeeper.log. To check the log output in real time, you can use the tail command as follows.
    # tail -f /var/log/lifekeeper.log
  • Collect LifeKeeper Configuration Information and Logs together
    # /opt/LifeKeeper/bin/lksupport
  • Backup/Restore of LifeKeeper Configuration Information
    Taking a backup of the LifeKeeper configuration information
    # /opt/LifeKeeper/bin/lkbackup -c
  • Restoring the LifeKeeper configuration information
    # /opt/LifeKeeper/bin/lkbackup -x -f archive..tar.gz

Support for SPS for Linux

Contact SIOS Technology Corp. Support at:

  • 1-877-457-5113 (Toll Free)
  • 1-803-808-4270 (International)
  • Email: support@us.sios.com
