The Simple Network Management Protocol (SNMP) defines a device-independent framework for managing networks. Devices on the network are described by MIB (Management Information Base) variables supplied by the device vendor. An SNMP agent runs on each node of the network and interacts with a Network Manager node. The Network Manager can query the agent to get or set the values of its MIB variables, thereby monitoring or controlling the agent's node. The agent can also asynchronously generate messages called traps to notify the manager of exceptional events. A number of applications are available for monitoring and managing networks using SNMP.

LifeKeeper has an event notification mechanism for registering applications that wish to be notified of specific events or alarms (see the sendevent(5) man page). LifeKeeper can easily be enabled to send SNMP trap notifications of key LifeKeeper events to a third-party network management console that monitors LifeKeeper activity.

The remote management console receiving SNMP traps must first be configured through the administration software of that system; LifeKeeper provides no external SNMP configuration. The remote management server is typically located outside of the LifeKeeper cluster (i.e., it is not a LifeKeeper node).
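The console-side setup depends entirely on the management product in use. As one illustration only, a management station running net-snmp's snmptrapd could accept and log these traps with a configuration fragment like the following (the community string and handler path are assumptions, not LifeKeeper requirements):

```conf
# /etc/snmp/snmptrapd.conf (illustrative; adjust community and paths to your site)
# Accept and log traps sent with the "public" community string
authCommunity log public

# Optionally run a handler script for any trap under the
# SteelEye/LifeKeeper enterprise subtree (hypothetical script path)
traphandle .1.3.6.1.4.1.7359.1 /usr/local/bin/lk-trap-handler
```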

LifeKeeper Events Table

The following table contains the list of LifeKeeper events and associated trap numbers. The entire Object ID (OID) consists of a prefix followed by a specific trap number in the following format:

prefix.0.specific trap number

The prefix is .1.3.6.1.4.1.7359.1, where .1.3.6.1.4.1.7359 expands to iso.org.dod.internet.private.enterprises.7359 in the MIB tree (7359 is SteelEye's [SIOS Technology] enterprise number) and the trailing 1 identifies LifeKeeper. For example, the LifeKeeper Startup Complete event generates the OID: .1.3.6.1.4.1.7359.1.0.100.
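The OID layout above is mechanical, so a management-side script can build or recognize these OIDs from the specific trap number alone. A minimal sketch (function names are illustrative, not part of LifeKeeper):

```python
# Prefix from the text above: enterprises.7359 (SteelEye/SIOS) . 1 (LifeKeeper)
LIFEKEEPER_PREFIX = ".1.3.6.1.4.1.7359.1"

def trap_oid(specific_trap: int) -> str:
    """Return the full OID for a LifeKeeper specific trap number."""
    return f"{LIFEKEEPER_PREFIX}.0.{specific_trap}"

def is_lifekeeper_trap(oid: str) -> bool:
    """True if an incoming trap OID falls under the LifeKeeper trap subtree."""
    return oid.startswith(LIFEKEEPER_PREFIX + ".0.")

print(trap_oid(100))  # .1.3.6.1.4.1.7359.1.0.100
```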

| LifeKeeper Event / Description | Trap # | Object ID |
| --- | --- | --- |
| LifeKeeper Startup Complete. Sent from a node when LifeKeeper is started on that node. | 100 | .1.3.6.1.4.1.7359.1.0.100 |
| LifeKeeper Shutdown Initiated. Sent from a node beginning LifeKeeper shutdown. | 101 | .1.3.6.1.4.1.7359.1.0.101 |
| LifeKeeper Shutdown Complete. Sent from a node completing LifeKeeper shutdown. | 102 | .1.3.6.1.4.1.7359.1.0.102 |
| LifeKeeper Manual Switchover Initiated on Server. Sent from the node from which a manual switchover was requested. | 110 | .1.3.6.1.4.1.7359.1.0.110 |
| LifeKeeper Manual Switchover Complete (recovered list). Sent from the node where the manual switchover was completed. | 111 | .1.3.6.1.4.1.7359.1.0.111 |
| LifeKeeper Manual Switchover Complete (failed list). Sent from each node within the cluster where the manual switchover failed. | 112 | .1.3.6.1.4.1.7359.1.0.112 |
| LifeKeeper Node Failure Detected for Server. Sent from each node within the cluster when a node in that cluster fails. | 120 | .1.3.6.1.4.1.7359.1.0.120 |
| LifeKeeper Node Recovery Complete for Server (recovered list). Sent from each node within the cluster that has recovered resources from the failed node. | 121 | .1.3.6.1.4.1.7359.1.0.121 |
| LifeKeeper Node Recovery Complete for Server (failed list). Sent from each node within the cluster that has failed to recover resources from the failed node. | 122 | .1.3.6.1.4.1.7359.1.0.122 |
| LifeKeeper Resource Recovery Initiated. Sent from a node recovering a resource; a 131 or 132 trap always follows to indicate whether the recovery failed or completed. | 130 | .1.3.6.1.4.1.7359.1.0.130 |
| LifeKeeper Resource Recovery Failed. Sent from the node in trap 130 when the resource being recovered fails to come into service. | 131* | .1.3.6.1.4.1.7359.1.0.131 |
| LifeKeeper Resource Recovery Complete. Sent from the node in trap 130 when recovery of the resource is completed. | 132 | .1.3.6.1.4.1.7359.1.0.132 |
| LifeKeeper Communications Path Up. A communications path to a node has become operational. | 140 | .1.3.6.1.4.1.7359.1.0.140 |
| LifeKeeper Communications Path Down. A communications path to a node has gone down. | 141 | .1.3.6.1.4.1.7359.1.0.141 |
| LifeKeeper &lt;Node Monitoring&gt; Failure. Sent from a node where a failure was detected by the Node Monitoring portion of the Standby Node Health Check; the detected failure is described in &lt;Node Monitoring&gt;. | 190 | .1.3.6.1.4.1.7359.1.0.190 |
| LifeKeeper &lt;OSUquickCheck&gt; Failure. Sent from a node where a failure was detected by the OSU Resource Monitoring portion of the Standby Node Health Check; the tag name of the resource where the failure was detected is given in &lt;OSUquickCheck&gt;. | 200 | .1.3.6.1.4.1.7359.1.0.200 |
The following variables are used to "carry" additional information in the trap PDU:

| Variable | Trap # | Object ID |
| --- | --- | --- |
| Trap message | all | .1.3.6.1.4.1.7359.1.1 |
| Resource tag | 130, 131, 132 | .1.3.6.1.4.1.7359.1.2 |
| List of recovered resources | 111, 121 | .1.3.6.1.4.1.7359.1.3 |
| List of failed resources | 112, 122 | .1.3.6.1.4.1.7359.1.4 |
* This trap (131) may appear multiple times if recovery fails on multiple backup servers.
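On the receiving side, a monitoring script might translate incoming specific trap numbers back into the event names listed above. A minimal sketch (the dispatch structure and function name are illustrative, not part of LifeKeeper):

```python
# Specific trap numbers and event names, taken from the table above.
LK_EVENTS = {
    100: "LifeKeeper Startup Complete",
    101: "LifeKeeper Shutdown Initiated",
    102: "LifeKeeper Shutdown Complete",
    110: "LifeKeeper Manual Switchover Initiated on Server",
    111: "LifeKeeper Manual Switchover Complete (recovered list)",
    112: "LifeKeeper Manual Switchover Complete (failed list)",
    120: "LifeKeeper Node Failure Detected for Server",
    121: "LifeKeeper Node Recovery Complete for Server (recovered list)",
    122: "LifeKeeper Node Recovery Complete for Server (failed list)",
    130: "LifeKeeper Resource Recovery Initiated",
    131: "LifeKeeper Resource Recovery Failed",
    132: "LifeKeeper Resource Recovery Complete",
    140: "LifeKeeper Communications Path Up",
    141: "LifeKeeper Communications Path Down",
    190: "LifeKeeper Node Monitoring Failure",
    200: "LifeKeeper OSUquickCheck Failure",
}

PREFIX = ".1.3.6.1.4.1.7359.1.0."

def describe_trap(oid: str) -> str:
    """Translate a full LifeKeeper trap OID into a human-readable event name."""
    if not oid.startswith(PREFIX):
        return "not a LifeKeeper trap"
    number = int(oid[len(PREFIX):])
    return LK_EVENTS.get(number, f"unknown LifeKeeper trap {number}")
```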
