The section Configuring WebSphere MQ for Use with LifeKeeper describes the process for protecting a queue manager with LifeKeeper. In general, the following requirements must be met to successfully configure a WebSphere MQ queue manager with LifeKeeper:

  1. Configure Kernel Parameters. Please refer to the WebSphere MQ documentation for information on how Linux kernel parameters such as shared memory and other kernel resources should be configured.
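For example, a minimal sketch of checking and persisting a shared memory setting (the parameter and value shown are purely illustrative; take the actual values from the WebSphere MQ documentation):

# Check the current shared memory limits
sysctl kernel.shmmax kernel.shmall

# Illustrative value only; choose values per the WebSphere MQ documentation
echo "kernel.shmmax = 268435456" >> /etc/sysctl.conf
sysctl -p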
  1. MQUSER and MQGROUP. The MQUSER and MQGROUP must exist on all servers in the LifeKeeper cluster. The WebSphere MQ software requires that the MQGROUP mqm exist and that the MQUSER mqm be defined with its primary group membership set to the MQGROUP mqm. If the mqm user and mqm group do not exist when the WebSphere MQ software is installed, they are created automatically. When the WebSphere MQ software is installed, most of its files and directories have their user and group ownership set to the mqm user and mqm group, as do the files and directories in the queue manager data and log directories. Additionally, a queue manager runs as the mqm user when started. Therefore, the MQUSER user id (uid) and the MQGROUP group id (gid) must be the same on all servers in the cluster. The MQ Recovery Kit verifies this when attempting to extend the resource; if they do not match, the resource extension will fail. Note: If you are using NIS, LDAP or another authentication tool besides the local password and group files, you must set up the MQUSER and MQGROUP prior to the installation of the WebSphere MQ and LifeKeeper software. You may also need to create a home directory. If this is an upgrade from a prior release of the WebSphere MQ Recovery Kit, the MQUSER PATH environment variable setting may need to be modified. In prior releases of the Recovery Kit, the MQUSER PATH environment variable had to be modified to include the default install location of the WebSphere MQ software, /opt/mqm. If that change was made in a prior release, it must be unset for this version of the Recovery Kit to function correctly.
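A quick way to verify matching ids is to run the same check on every node; if the user and group must be created manually (for example, before installing in an NIS/LDAP environment), a sketch with an illustrative uid/gid of 1002 follows:

# Run on every server in the cluster; uid and gid must be identical everywhere
id mqm

# Manual creation sketch (illustrative ids; /var/mqm as the home directory)
groupadd -g 1002 mqm
useradd -u 1002 -g mqm -m -d /var/mqm mqm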
  1. Alternate MQ user support. Although the WebSphere MQ software always runs as the mqm user, an alternate user name can be specified for running all MQ commands, provided the alternate user has primary or secondary membership in the mqm group. An alternate user name for starting WebSphere MQ may be required when integrating with other MQ tools. To change to an alternate user, see the MQS_ALT_USER_NAME tunable in the “Changing LifeKeeper WebSphere MQ Recovery Kit Defaults” section of this document.
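For example, granting a hypothetical alternate user mqadmin secondary membership in the mqm group:

# mqadmin is a hypothetical user name; replace with your alternate MQ user
usermod -a -G mqm mqadmin

# Verify the membership
id mqadmin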
  1. Manual command server startup. By default, the command server is started automatically when the queue manager is started. If you want LifeKeeper to start the command server instead, disable automatic command server startup using the following command on the primary server:

runmqsc QUEUE.MANAGER.NAME

ALTER QMGR SCMDSERV(MANUAL)
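
To confirm the change, display the command server control setting from the same runmqsc session:

DISPLAY QMGR SCMDSERV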

  1. QMDIR and QMLOGDIR must be located on shared storage. The queue manager directory QMDIR and the queue manager log directory QMLOGDIR must be located on LifeKeeper-supported shared storage so that WebSphere MQ on the backup server can access the data. See Supported File System Layouts for further details.
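One way to confirm that both directories reside on shared storage is to check their mount points. The paths below are illustrative defaults; note that WebSphere MQ transforms queue manager names for use as directory names (for example, “.” becomes “!”):

df -h '/var/mqm/qmgrs/QUEUE!MANAGER!NAME'
df -h '/var/mqm/log/QUEUE!MANAGER!NAME'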
  1. QMDIR and QMLOGDIR permissions. The QMDIR and QMLOGDIR directories must be owned by the user MQUSER and the group MQGROUP. The Recovery Kit (ARK) dynamically determines the MQUSER by looking at the owner of this directory; it also detects symbolic links and follows them to their final targets. Use the system command chown to change the owner of these directories if required.
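For example, assuming the illustrative default paths above and the default mqm user and group:

chown -R mqm:mqm '/var/mqm/qmgrs/QUEUE!MANAGER!NAME'
chown -R mqm:mqm '/var/mqm/log/QUEUE!MANAGER!NAME'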
  1. Disable Automatic Startup of Queue Manager. If you are using an init script to start and stop WebSphere MQ, disable it for the queue manager(s) protected by LifeKeeper. To disable the init script, use the tools provided by the operating system, such as insserv on SUSE or chkconfig on Red Hat.
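A sketch, assuming a hypothetical init script named websphere-mq (the actual script name depends on your installation):

# Red Hat
chkconfig websphere-mq off

# SUSE
insserv -r websphere-mq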
  1. Server Connection Channel Authorization. Beginning with WebSphere MQ version 7.1, changes were made to channel authorization. By default, the MQADMIN user (mqm) is unable to authenticate anonymously (with no password), causing the resource hierarchy create to fail (authorization for queue managers created with a WebSphere MQ release prior to 7.1 should continue to work). Starting with WebSphere MQ 7.1, one method of allowing authorization for the MQADMIN user is to disable channel authorization. For WebSphere MQ 8.0, additional changes are required to the authinfo for SYSTEM.DEFAULT.AUTHINFO.IDPWOS (in runmqsc, run 'display authinfo(SYSTEM.DEFAULT.AUTHINFO.IDPWOS)' to retrieve the current settings). The chckclnt setting of 'reqdadm' must be altered and set to 'optional'. Failure to allow the MQADMIN user anonymous authorization will result in the error 'MQCONNX ended with reason code 2035' during resource creation. See the WebSphere MQ documentation for details on how to create channels.
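A minimal sketch of the runmqsc commands involved, assuming channel authorization is to be disabled (review the security implications for your environment before applying):

runmqsc QUEUE.MANAGER.NAME

ALTER QMGR CHLAUTH(DISABLED)
ALTER AUTHINFO(SYSTEM.DEFAULT.AUTHINFO.IDPWOS) AUTHTYPE(IDPWOS) CHCKCLNT(OPTIONAL)
REFRESH SECURITY TYPE(CONNAUTH)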
  1. MQSeriesSamples, MQSeriesSDK and MQSeriesClient Packages. LifeKeeper uses a client connection to WebSphere MQ to verify that the listener and the channel initiator are fully functional. This is a requirement for remote queue managers and clients to connect to the queue manager. Therefore, the MQSeriesClient package must be installed on all LifeKeeper cluster nodes running WebSphere MQ. The MQSeriesSDK and MQSeriesSamples packages must also be installed to perform the client connect and PUT/GET tests.
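The installed packages can be checked with rpm, for example:

rpm -qa | grep -i MQSeries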
  1. Optional C Compiler. For the optional PUT/GET tests to run, a C compiler must be installed on the machine. If one is not installed, a warning is issued during installation.
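A quick check for a usable C compiler:

which cc gcc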
  1. LifeKeeper Test Queue. The WebSphere MQ Recovery Kit optionally performs a PUT/GET test to verify queue manager operation. A dedicated test queue must be created because the Recovery Kit retrieves all messages from this queue and discards them. The default persistence setting for this queue should be set to “yes” (DEFPSIST=yes). When you protect a queue manager in LifeKeeper, a test queue named “LIFEKEEPER.TESTQUEUE” is created automatically. You can also use the following commands to create the test queue manually before protecting the queue manager:

su - MQUSER
runmqsc QUEUE.MANAGER.NAME

define qlocal(LIFEKEEPER.TESTQUEUE) DEFPSIST(YES) DESCR('LifeKeeper test queue')
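
The queue definition can be verified from the same runmqsc session:

display qlocal(LIFEKEEPER.TESTQUEUE) DEFPSIST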

Note: If you want to use a name for the LifeKeeper test queue other than the default “LIFEKEEPER.TESTQUEUE”, the name of this test queue must be configured. See Editing Configuration Resource Properties for details.

  1. TCP Port for Listener Object. Alter the Listener object via runmqsc to reflect the TCP port in use. Use the following command to change the TCP port of the default Listener:

su - MQUSER
runmqsc QUEUE.MANAGER.NAME

alter LISTENER(SYSTEM.DEFAULT.LISTENER.TCP) TRPTYPE(TCP) PORT(1414) IPADDR(192.168.1.100)

Note: The listener object must be altered even if using the default MQ listener TCP port 1414, but it is not necessary to set a specific IP address (IPADDR). If you skip the IPADDR setting, the listener will bind to all interfaces on the server. If you do set IPADDR, it is strongly recommended that a virtual IP resource be created in LifeKeeper using the IPADDR defined address. This ensures the IP address is available when the MQ listener is started.
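
To confirm the change, display the listener attributes from runmqsc:

display LISTENER(SYSTEM.DEFAULT.LISTENER.TCP) PORT IPADDR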

  1. TCP Port Number. Each WebSphere MQ listener must use a different port (default 1414) or bind to a different virtual IP, with no listener binding to all interfaces. This applies to both protected and unprotected queue managers within the cluster.
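A quick way to see which TCP ports listeners are already bound to on a server (1414 shown as an example):

ss -ltn | grep 1414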
  1. Queue Manager configured in mqs.ini. In Active/Active configurations, each server holds its own copy of the global queue manager configuration file mqs.ini. To run the protected queue manager on all servers in the cluster, the queue manager must be configured in the mqs.ini configuration file of every server in the cluster. Copy the appropriate QueueManager: stanza from the primary server and add it to the mqs.ini configuration files on all backup servers, as in the example below.
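An illustrative QueueManager: stanza (values depend on your installation; note the transformed directory name, where “.” becomes “!”):

QueueManager:
   Name=QUEUE.MANAGER.NAME
   Prefix=/var/mqm
   Directory=QUEUE!MANAGER!NAME
   DataPath=/var/mqm/qmgrs/QUEUE!MANAGER!NAME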
