There are a number of WebSphere MQ configuration tasks that must be completed before attempting to create LifeKeeper for Linux WebSphere MQ resource hierarchies. These changes are required to enable the Recovery Kit to perform PUT/GET tests and to make the path to the WebSphere MQ persistent data highly available. If the WebSphere MQ queue manager handles remote client requests via TCP/IP, a virtual IP resource must be created prior to creating the WebSphere MQ resource hierarchy. Perform the following actions to enable LifeKeeper WebSphere MQ resource creation:

  1. Plan your installation (see Appendix C).

Before installing WebSphere MQ, you must plan your installation. This includes choosing an MQUSER, MQUSER UID and MQGROUP GID. You must also decide which file system layout you want to use (see Supported File System Layouts). To ease this process, SIOS Technology Corp. provides a form that contains fields for all required information. See Appendix C – WebSphere MQ Configuration Sheet. Fill out this form to be prepared for the installation process.

  2. Configure kernel parameters on each server.

WebSphere MQ may require special Linux kernel parameter settings, such as for shared memory. See the WebSphere MQ documentation for your release of WebSphere MQ for the minimum requirements to run WebSphere MQ. To make kernel parameter changes persistent across reboots, you can use the /etc/sysctl.conf configuration file. It may be necessary to add the command sysctl -p to your startup scripts (for example, boot.local). On SuSE, you can run insserv boot.sysctl to enable the automatic setting of the parameters in the sysctl.conf file. A sample sysctl.conf fragment is sketched below.
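For illustration, a minimal /etc/sysctl.conf fragment follows; the values are assumptions for a small test system, not IBM's documented minimums, so check the WebSphere MQ system requirements for your release before applying any of them:

# /etc/sysctl.conf -- illustrative values only; consult the WebSphere MQ
# documentation for the minimums required by your release
kernel.shmmax = 268435456
kernel.shmall = 2097152
kernel.msgmni = 1024
kernel.sem = 500 256000 250 1024

Apply the settings immediately without a reboot:

sysctl -p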

  3. Create the MQUSER and MQGROUP on each server.

Use the operating system commands groupadd and useradd to create the MQGROUP and MQUSER with the GID and UID from the “WebSphere MQ Configuration Sheet” you completed in Step 1.

If the MQUSER you have chosen is named mqm with UID 1002 and the MQGROUP GID is 1000, you can run the following commands on each server of the cluster (change the MQUSER, UID and GID values to reflect your settings):

groupadd -g 1000 mqm
useradd -m -u 1002 -g mqm mqm

Note: These settings must be the same on all nodes in the cluster. If you are running NIS or LDAP, create the user and group only once. You may need to create home directories manually if you have no central home directory server. You can verify the result as shown below.
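To confirm that the UID and GID match on every node, you can run the standard id command on each server (the output shown assumes the example values above):

node1:~ # id mqm
uid=1002(mqm) gid=1000(mqm) groups=1000(mqm)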

  4. Unconfigure the PATH environment variable (upgrade only).

If this is an upgrade from a prior release of the WebSphere MQ Recovery Kit, the MQUSER PATH environment variable setting may need to be modified. In prior releases of the Recovery Kit, the MQUSER PATH environment variable needed to be modified to include the default install location of the WebSphere MQ software, /opt/mqm. If that change was made in a prior release, it must be unset for this version of the Recovery Kit to function correctly.
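For example, if the MQUSER's shell profile was modified for a prior release, remove or comment out the corresponding line (~/.profile is an assumption here; the actual file depends on the MQUSER's login shell):

# in the MQUSER's ~/.profile (or equivalent), remove or comment out:
# export PATH=$PATH:/opt/mqm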

  5. Install the packages required to install WebSphere MQ on each server.

The WebSphere MQ installation requires X11 libraries and Java for license activation (mqlicense_lnx.sh). Install the required software packages before installing WebSphere MQ.
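As an illustration only, on a Red Hat-based system the prerequisites might be installed as follows; the package names are assumptions that vary by distribution and release, so check the WebSphere MQ system requirements:

yum install libX11 libXext libXtst java-1.8.0-openjdk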

  6. Install the WebSphere MQ software and WebSphere MQ fix packs on each server.

Follow the steps described in the WebSphere MQ documentation for your release of WebSphere MQ.

  7. Server Connection Channel Authorization. Beginning with WebSphere MQ version 7.1, changes were made to channel authorization. By default, the MQADMIN user (mqm) is unable to authenticate anonymously (no password), causing the resource hierarchy create to fail (authorization for queue managers created with a WebSphere MQ release prior to 7.1 should continue to work). Starting with WebSphere MQ 7.1, one method to allow authorization for the MQADMIN user is to disable channel authentication records. For WebSphere MQ 8.0, additional changes are required to the authinfo SYSTEM.DEFAULT.AUTHINFO.IDPWOS (in runmqsc, run 'display authinfo(system.default.authinfo.idpwos)' to retrieve the current settings): the chckclnt setting of 'reqdadm' must be altered to 'optional'. Failure to allow the MQADMIN user anonymous authorization will result in the error 'MQCONNX ended with reason code 2035' during resource creation. See the WebSphere MQ documentation for details on how to authorize channels and set access permissions. An example of both changes is sketched after the next step.
  8. If MQ version 7.1 or later is being used, enable the MQADMIN user for the specified channel within MQ for the queue manager being used.
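The following runmqsc sketch shows one way to make both changes for an example queue manager named TEST.QM. Disabling channel authentication records relaxes security for every channel on the queue manager, so review the implications in the WebSphere MQ documentation before using this in production:

su - mqm
runmqsc TEST.QM
ALTER QMGR CHLAUTH(DISABLED)
ALTER AUTHINFO(SYSTEM.DEFAULT.AUTHINFO.IDPWOS) AUTHTYPE(IDPWOS) CHCKCLNT(OPTIONAL)
REFRESH SECURITY TYPE(CONNAUTH)
END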
  9. Install LifeKeeper and the WebSphere MQ Recovery Kit on each server.

See the LifeKeeper Installation Guide for details on how to install LifeKeeper.

  10. Prepare and mount the shared storage.

See section Supported File System Layouts for the supported file system layouts. Depending on the file system layout and the storage type, this involves creating volume groups, logical volumes and file systems, or mounting NFS shares.

Here is an example of file system layout 2 with NAS storage:

node1:/var/mqm/qmgrs # mkdir TEST\!QM
node1:/var/mqm/qmgrs # mkdir ../log/TEST\!QM
node1:/var/mqm/qmgrs # mount 192.168.1.30:/raid5/vmware/shared_NFS/TEST.QM/qmgrs ./TEST\!QM/
node1:/var/mqm/qmgrs # mount 192.168.1.30:/raid5/vmware/shared_NFS/TEST.QM/log ../log/TEST\!QM/

  11. Set the owner and group of QMDIR and QMLOGDIR to MQUSER and MQGROUP.

The QMDIR and QMLOGDIR must be owned by MQUSER and MQGROUP. Use the following commands to set the file system rights accordingly:

chown MQUSER QMDIR
chgrp MQGROUP QMDIR
chown MQUSER QMLOGDIR
chgrp MQGROUP QMLOGDIR

The values of MQUSER, MQGROUP, QMDIR and QMLOGDIR depend on your file system layout and the user name of your MQUSER. Use the sheet from Step 1 to determine the correct values for these fields.

Here is an example for MQUSER mqm and queue manager TEST.QM with the default QMDIR and QMLOGDIR destinations:

node1:/var/mqm/qmgrs # chown mqm TEST\!QM/
node1:/var/mqm/qmgrs # chgrp mqm TEST\!QM/
node1:/var/mqm/qmgrs # chown mqm ../log/TEST\!QM/
node1:/var/mqm/qmgrs # chgrp mqm ../log/TEST\!QM/

  12. Create the queue manager on the primary server.

Follow the steps described in the WebSphere MQ documentation for creating a queue manager with the version of the WebSphere MQ software being used.

Here is an example for MQUSER mqm and queue manager TEST.QM.

node1:/var/mqm/qmgrs # su - mqm
mqm@node1:~> crtmqm TEST.QM
WebSphere MQ queue manager created.
Creating or replacing default objects for TEST.QM.
Default objects statistics : 31 created. 0 replaced. 0 failed.
Completing setup.
Setup completed.

Note: If you want to protect an already existing queue manager, use the following steps to move the queue manager data to the shared storage:

a. Stop the queue manager (endmqm -i QUEUE.MGR.NAME).

b. Copy the content of the queue manager directory and the queue manager log directory to the shared storage created in Step 10.

c. Change the global configuration file (mqs.ini) and queue manager configuration file (qm.ini) as required to reflect the new location of the QMDIR and the QMLOGDIR (see the sketch after this list).

d. Start the queue manager to verify its function (strmqm QUEUE.MGR.NAME).

e. Stop the queue manager (endmqm -i QUEUE.MGR.NAME).
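As an illustration of step c, here is a sketch of the relevant stanzas, assuming file system layout 2, queue manager TEST.QM and the default /var/mqm prefix (all assumed values; adjust every path to your layout):

# /var/mqm/mqs.ini -- queue manager stanza (example)
QueueManager:
   Name=TEST.QM
   Prefix=/var/mqm
   Directory=TEST!QM

# /var/mqm/qmgrs/TEST!QM/qm.ini -- log stanza (example)
Log:
   LogPath=/var/mqm/log/TEST!QM/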

  13. Optional: Configure a virtual IP resource in LifeKeeper on the primary server.

Follow the steps and guidelines described in the LifeKeeper for Linux IP Recovery Kit Administration Guide and the LifeKeeper Installation Guide.

Note: If your queue manager is only accessed by local server connections, you do not have to configure the LifeKeeper virtual IP.

  14. Modify the listener object to reflect your virtual IP address and TCP port:

su - MQUSER
runmqsc QUEUE.MANAGER.NAME

alter LISTENER(SYSTEM.DEFAULT.LISTENER.TCP) TRPTYPE(TCP) PORT(1414) IPADDR(192.168.1.100)

Note: Use the same IP address used in Step 13 to set the value for IPADDR. Do not leave IPADDR unset; otherwise WebSphere MQ will bind to all addresses.

  15. Start the queue manager on the primary server.

On the primary server, start the queue manager, the command server (if it is configured to be started manually) and the listener:

su - MQUSER
strmqm QUEUE.MANAGER.NAME
strmqcsv QUEUE.MANAGER.NAME
runmqlsr -m QUEUE.MANAGER.NAME -t TCP &

  16. Verify that the queue manager has been started successfully:

su - MQUSER
echo 'display qlocal(*)' | runmqsc QUEUE.MANAGER.NAME
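You can also confirm the queue manager's running state with the standard dspmq control command (the STATUS line shown is illustrative output):

su - MQUSER
dspmq -m QUEUE.MANAGER.NAME
QMNAME(QUEUE.MANAGER.NAME)   STATUS(Running)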

  17. Add the queue manager stanza to the global queue manager configuration file mqs.ini on the backup server.

Note: This step is required for file system layouts 2 and 3.
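For example, assuming queue manager TEST.QM with the default /var/mqm prefix, add the same QueueManager stanza that the primary server's mqs.ini contains:

QueueManager:
   Name=TEST.QM
   Prefix=/var/mqm
   Directory=TEST!QM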

  18. Optional: Create the LifeKeeper test queue on the primary server.

runmqsc TEST.QM
5724-B41 © Copyright IBM Corp. 1994, 2002. ALL RIGHTS RESERVED.
Starting MQSC for queue manager TEST.QM.

define qlocal(LIFEKEEPER.TESTQUEUE) defpsist(yes) descr('LifeKeeper test queue')
     1 : define qlocal(LIFEKEEPER.TESTQUEUE) defpsist(yes) descr('LifeKeeper test queue')
AMQ8006: WebSphere MQ queue created.

  19. If you want LifeKeeper to start the command server, disable the automatic command server startup using the following command on the primary server. Otherwise, the command server will be started automatically when the queue manager is started:

su - MQUSER
runmqsc TEST.QM
ALTER QMGR SCMDSERV(MANUAL)
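To confirm the change, the attribute can be displayed with standard MQSC (TEST.QM is the example queue manager name used above):

runmqsc TEST.QM
DISPLAY QMGR SCMDSERV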

  20. Create the queue manager resource hierarchy on the primary server.

See section LifeKeeper Configuration Tasks for details.

  21. Extend the queue manager resource hierarchy to the backup server.

See section LifeKeeper Configuration Tasks for details.

  22. Test your configuration.

To test your HA WebSphere MQ installation, follow the steps described in Testing a WebSphere MQ Resource Hierarchy.
