To ensure proper operation of the DB2 Recovery Kit in a multiple partition environment, LifeKeeper requires the following:

  1. If you cannot use an additional cluster for your NFS hierarchy, be aware that the LifeKeeper for Linux DB2 Recovery Kit restricts active inodes on an underlying NFS-protected file system. To prevent this condition, we recommend that you protect the top-level directory and export the instance home directory using the fully qualified directory name. The top-level directory is protected to prohibit users from changing directories directly into it (i.e., cd <top level dir>).
  1. Verify the installation of IBM’s latest Fix Pack (for EEE deployments) as described in the Software Requirements section of this document.
  1. Ensure that the hostname value in your db2nodes.cfg file is the same as the value returned from issuing the hostname command.

Example:

db2nodes.cfg file:

0 server1.sc.steeleye.com 0
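In this entry, the first column is the database partition (node) number, the second is the hostname of the partition server, and the third is the logical port number.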

Additionally, the hostname value in your server’s /etc/hosts file must be the same as the hostname value in your db2nodes.cfg file.

You must also verify that your server’s /etc/hosts file contains both the local hostname and the fully qualified hostname for each server entry included in the file.

Example:

/etc/hosts file

127.0.0.1 localhost localhost.localdomain

9.21.55.53 server1.sc.steeleye.com server1
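In this example, issuing the hostname command on the server should return the same value used in the db2nodes.cfg file:

server1.sc.steeleye.com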

  1. During execution of the db2setup script, do not opt to create the DB2 Warehouse Control Database (DWCNTRL) or the DB2 Sample Database. These databases must be created on a shared file system to ensure successful creation of the DB2 resource hierarchy; electing to create either of them during execution of the db2setup script places the database in the home directory rather than on a shared file system. Users wishing to create these databases should do so outside of the db2setup script so that a shared file system can be specified.

In versions later than 8.1, the DB2 Tools Catalog should not be created during execution of the db2setup script. This database must be placed on a shared file system and, if needed, should be created after setup has completed and prior to hierarchy creation.
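If the DB2 Tools Catalog is needed, one way to place it on a shared file system is to first create the database on that file system and then create the tools catalog in it. A representative sequence of commands, issued as the instance owner, might look like the following (the database name toolsdb, the catalog name systools, and the path are illustrative only):

db2 create database toolsdb on <shared file system path>

db2 create tools catalog systools use existing database toolsdb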

  1. In Active/Active or multiple partition server environments, each server in the configuration must be capable of running all database instances in a failover scenario. Please see the IBM Getting Started Guide for help in determining the maximum number of DB2 instances or partition servers feasible for a given set of system resources.
  1. Select or create a shared file system and export it (e.g., /export/db2home). This file system will be used as the DB2 instance home. (Note: the one exception to this requirement is for partitions that will all run on the same server at all times; in that case, no NFS export is necessary, and the instance homes can simply be located on shared storage.)
  1. Protect your exported file system by creating a LifeKeeper NFS resource hierarchy. The file system should be included as a dependent resource in your NFS hierarchy.
  1. NFS mount the shared file system on each server in the cluster, including the server from which it is exported. See the DB2 Quickstart Guide for mount options. When creating the DB2 instance, the home directory of the instance must be located on the NFS-mounted file system. Make certain that the file system is mounted using the LifeKeeper-protected switchable IP address used when creating the NFS hierarchy. Additionally, the mount point of the home directory must be specified in the /etc/fstab file on all servers in your LifeKeeper cluster. Each server in your configuration must have the file system mounted on identical mount points (e.g., /db2/home).

Note: We recommend that you create and test your NFS hierarchy prior to creating your DB2 resource hierarchy. Please see the NFS Recovery Kit Administration Guide for complete instructions on creating and testing an NFS hierarchy.
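For example, an /etc/fstab entry for the NFS-mounted instance home might look like the following, where 172.17.100.50 represents an illustrative LifeKeeper-protected switchable IP address and the mount options shown are placeholders (consult the DB2 Quickstart Guide for the options appropriate to your environment):

172.17.100.50:/export/db2home /db2/home nfs rw,hard,intr 0 0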

  1. For all servers in your configuration, set the following DB2 registry variable to the total number of partitions in the instance. To set this variable, log on to the server as the instance owner and issue the db2set command shown below. Adjusting this variable accommodates all conceivable failover scenarios.

db2set DB2_NUM_FAILOVER_NODES=<partitions in the instance>
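For example, for an instance with four partitions (the value 4 is illustrative):

db2set DB2_NUM_FAILOVER_NODES=4

Issuing db2set with no parameters afterwards lists the registry variables currently set for the instance and can be used to confirm the change.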

  1. Update your existing DB2 instances and your DB2 Administration servers using the following DB2 utilities:

db2iupdt and dasiupdt
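For example, assuming an instance named db2inst1 and a DB2 Administration Server user named dasusr1 (both names are illustrative), the updates would typically be run as root from the instance subdirectory of your DB2 installation path:

db2iupdt db2inst1

dasiupdt dasusr1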

  1. A LifeKeeper DB2 hierarchy must be created on each server in the cluster that has a database partition server managing data for the instance. The databases and tablespaces must be on a shared file system. A separate LUN is required for each database partition server and for the NFS exported home directory. Dependent resources include the file systems where actual databases and tablespaces are located.
  1. If you create a database on a file system that is not protected by LifeKeeper after the creation of your DB2 hierarchy, you will need to create a resource hierarchy for that file system and make it a dependency of your DB2 resource hierarchy. The hierarchy will protect all of the partition servers that the db2nodes.cfg file indicates should run on the server.
  1. To ensure proper execution of a failover, it is imperative that the file system of each database partition server is mounted at a uniquely numbered mount point.

Example:

The mount point for your database partition server node0 should be:

/<FSROOT>/<db2instancename>/NODE0000

The mount point for your database partition server node1 should be:

/<FSROOT>/<db2instancename>/NODE0001

Note: In this example there are two partition servers, and the file system for each is mounted on a separate LUN.

  1. All database partition servers for a given machine must be running in order to assure the successful creation of your DB2 hierarchy (see the verification example following this list).
  1. When a database partition server becomes inoperable on the primary system, the service fails over to a previously defined backup system. The database service on the backup system becomes available immediately after the dependent resources fail over and the database partition server(s) have been brought into service. Previously connected DB2 clients are disconnected and must reconnect to the functioning server. Any uncommitted SQL statements are rolled back and should be re-entered.
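As noted above, all database partition servers on a machine must be running before the DB2 hierarchy is created. One simple way to confirm this is to check for the db2sysc engine processes on that server; one process should be listed for each local database partition server:

ps -ef | grep db2sysc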
