Bringing an NFS Resource In-Service (Restore)

Error Number | Error Message | Action
------------ | ------------- | ------
106007 | Cannot bring NFS or HANFS resource "TAG" in service on server "SERVER" | Review the other error messages to determine the corrective action. After correcting the problem, try bringing the resource in service manually.
106010 | NFS is not running on server "SERVER". LifeKeeper will attempt to restart NFS. | This message is informational only. LifeKeeper will try to restart the NFS daemons automatically. If LifeKeeper encounters a problem while restarting one of the daemons, you will receive a message that starting NFS failed.
106011 | Starting NFS on server "SERVER" failed | LifeKeeper encountered a problem while restarting the NFS daemons. Try restarting NFS manually (see the restart sketch after this table).
106012 | The export point "EXPORT POINT" is not exported on server "SERVER". LifeKeeper will attempt to export the entry. | LifeKeeper has detected that the export point is no longer exported and will try to export it.
106013 | Unable to export "EXPORT POINT" on server "SERVER" | Try exporting the file system manually (see the exportfs sketch after this table).
106014 | Usage: USAGE STRING | The command was run with incorrect arguments; the correct usage is displayed. This message appears on the command line only.
106019 | Executing command: "COMMAND" | This message is displayed when LifeKeeper restarts an NFS daemon or exports/unexports an export point. It provides additional information that can be useful if there is a problem.
106024 | Unable to stop and restart rpc.mountd on "SERVER" | During a hierarchy restore, the rpc.mountd daemon needed to be restarted and the restart failed. Manually stop and restart the process to determine the error and the action to take.
106027 | Open of "ABC" on server "SERVER" failed: "File not found" | The attempted open failed for the reason listed.
106028 | Mount of /proc/fs/nfsd failed on server "SERVER" | In 2.6 and later kernels, /proc/fs/nfsd is used for client authentication, and an attempt to mount it failed. Manually attempt to mount /proc/fs/nfsd to determine the cause of the failure (see the mount sketch after this table).
106029 | Unable to get exclusive lock on "/var/lib/nfs/rmtab" on server "SERVER" | LifeKeeper was unable to place an exclusive lock on /var/lib/nfs/rmtab for update within 20 seconds, indicating a problem with the file (see the lock sketch after this table).
106030 | Unable to restore client info for "123.45.678.90" on server "SERVER": "ABC" | The attempt to fail over the locks held by client 123.45.678.90 to server SERVER failed for the listed reason. Correct the failure condition and attempt to restore the hierarchy again.
106037 | Attempt to get exclusive lock on "/var/lib/nfs/rmtab" on server "SERVER" failed: "ABC" | Unable to place an exclusive lock on /var/lib/nfs/rmtab for updating. See the error message for the cause.
106039 | Open of "FILE" on server "SERVER" failed: "ABC" or Attempt to get exclusive lock on "FILE" on server "SERVER" failed: "ABC" | An attempt to open or obtain an exclusive lock on a file has failed. See the error message for the cause.
106040 | Multiple virtual IP addresses detected. In this release, NFS lock failover only supports one virtual IP address. | Recreate the NFS resource hierarchies to use only one virtual IP address, or set FAILOVERNFSLOCKS to false in the LifeKeeper defaults file (see the configuration note after this table).
106052 | Unable to mount rpc_pipefs on "SERVER". Reason: "REASON". | rpc_pipefs was not mounted on SERVER, and the mount attempt failed for REASON (see the mount sketch after this table).
106053 | rpc_pipefs successfully mounted on "SERVER" | rpc_pipefs was successfully mounted on SERVER.
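For messages 106010 and 106011, NFS can be restarted by hand to see why LifeKeeper's automatic restart failed. A minimal sketch, assuming a systemd-based distribution; the unit name varies (nfs-server on most modern systems, nfs or nfs-kernel-server on others), and on SysV-init systems the equivalent init script would be used instead:

```
# Restart the NFS server daemons and check that they came back up.
systemctl restart nfs-server
systemctl status nfs-server --no-pager

# Verify the kernel nfsd threads and rpc.mountd are running.
pgrep nfsd && pgrep rpc.mountd
```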
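For message 106013, the export can be retried manually with exportfs(8) to surface the underlying error. A sketch; /export/data and the client specification are hypothetical placeholders, not values taken from LifeKeeper:

```
# Show what is currently exported.
exportfs -v

# Export a directory to a client network with typical options.
exportfs -o rw,sync "192.168.1.0/24:/export/data"

# Or re-export everything defined in /etc/exports.
exportfs -a
```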
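For messages 106028 and 106052, mounting the pseudo file systems by hand reproduces the failure and shows the kernel's error message. A sketch; /var/lib/nfs/rpc_pipefs is the conventional rpc_pipefs mount point, assumed here rather than taken from the kit:

```
# Mount the nfsd pseudo file system used for client authentication
# on 2.6 and later kernels.
mount -t nfsd nfsd /proc/fs/nfsd

# Mount rpc_pipefs at its conventional location.
mount -t rpc_pipefs sunrpc /var/lib/nfs/rpc_pipefs

# Confirm both are mounted.
mount | grep -E 'nfsd|rpc_pipefs'
```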
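For messages 106029, 106037, and 106039, flock(1) can be used to test whether an exclusive lock on /var/lib/nfs/rmtab can be obtained within the same 20-second window. A sketch:

```
# Try to take an exclusive lock on rmtab, waiting up to 20 seconds.
# Failure here means another process is holding the lock.
flock -x -w 20 /var/lib/nfs/rmtab -c 'echo "lock acquired"' ||
    fuser -v /var/lib/nfs/rmtab   # show which processes have the file open
```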
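For message 106040, lock failover can be disabled instead of recreating the hierarchies. A sketch, assuming the LifeKeeper defaults file is /etc/default/LifeKeeper (its usual location):

```
# Disable NFS lock failover; append the setting if it is absent.
if grep -q '^FAILOVERNFSLOCKS=' /etc/default/LifeKeeper; then
    sed -i 's/^FAILOVERNFSLOCKS=.*/FAILOVERNFSLOCKS=false/' /etc/default/LifeKeeper
else
    echo 'FAILOVERNFSLOCKS=false' >> /etc/default/LifeKeeper
fi
```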
Taking an NFS Resource Out of Service (Remove)

Error Number | Error Message | Action
------------ | ------------- | ------
106008 | Unable to unexport the export point "EXPORT POINT" on server "SERVER" | Use the exportfs(8) command to unexport it (see the unexport sketch after this table).
106014 | Usage: USAGE STRING | The command was run with incorrect arguments; the correct usage is displayed. This message appears on the command line only.
106019 | Executing command: "COMMAND" | This message is displayed when LifeKeeper restarts an NFS daemon or exports/unexports an export point. It provides additional information that can be useful if there is a problem.
106046 | Unable to umount child bind directory "BIND MOUNT POINT" for export "EXPORT POINT" on "SERVER". | The attempt to umount the bind mount "BIND MOUNT POINT" for EXPORT POINT on SERVER failed (see the umount sketch after this table).
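For message 106008, the export point can be unexported manually with exportfs(8). A sketch with hypothetical values:

```
# Unexport a single export point from a specific client network.
exportfs -u "192.168.1.0/24:/export/data"

# Verify that the entry is gone.
exportfs -v
```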
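For message 106046, identify what is keeping the bind mount busy, then unmount it. A sketch; the path is a hypothetical placeholder:

```
# Show which processes are using the bind mount.
fuser -vm /exports/bind_mount_point

# After stopping those processes, unmount the bind mount.
umount /exports/bind_mount_point

# As a last resort, a lazy unmount detaches it immediately.
umount -l /exports/bind_mount_point
```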
Bringing an NFS Resource Back In Service (Recover)
The LifeKeeper core periodically checks the health of every in-service NFS instance on the local server by running the NFS "quickCheck" script. This script verifies the following:
- The file system is exported
- The NFS/HA-NFS daemons are running
If the instance is not fully functional, a "recover" script is invoked to attempt to restart it. The recover script simply logs an error message, invokes "restore", prints a final error or success message depending on the outcome of the "restore" script, and returns the same result as "restore". If the restore/recover attempt fails, the instance is failed over to another server. A simplified sketch of the quickCheck logic follows.
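The following is a minimal, hypothetical approximation of the two checks described above, not the actual LifeKeeper quickCheck script; EXPORT_POINT is a placeholder:

```
#!/bin/sh
# Sketch of an NFS quickCheck: exit 0 if healthy, nonzero if the
# recover script should be invoked.
EXPORT_POINT="/export/data"    # placeholder for the protected export

# 1. Verify the file system is still exported.
exportfs | awk '{print $1}' | grep -qx "$EXPORT_POINT" || exit 1

# 2. Verify the NFS daemons are running (kernel nfsd threads and rpc.mountd).
pgrep nfsd >/dev/null || exit 1
pgrep rpc.mountd >/dev/null || exit 1

exit 0
```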