Use Ctrl+F to search for a specific error code in each catalog. To search across all error codes, select the Search button at the top right of the screen.
Code | Severity | Message | Cause/Action |
---|---|---|---|
000200 | ERROR | pam_start() failed | Cause: PAM service failed to start. |
000201 | ERROR | pam_authenticate failed (user %s, retval %d | Cause: Password authentication for the user was not successful. Action: Check the credentials supplied to pam_authenticate. |
000202 | ERROR | pam_end() failed?!?! | Cause: pam_end() did not return PAM_SUCCESS. Action: Ensure the PAM session was started correctly via pam_start(). |
000203 | ERROR | Did not find expected group ‘lkguest’ | Cause: The lkguest user group was not found. Action: Check for the user group on the system, or reinstall LifeKeeper. Check /var/log/security and /var/log/messages for more information. |
000204 | ERROR | Did not find expected group ‘lkoper’ | Cause: The lkoper user group was not found. Action: Check for the user group on the system, or reinstall LifeKeeper. Check /var/log/security and /var/log/messages for more information. |
000205 | ERROR | Did not find expected group ‘lkadmin’ | Cause: The lkadmin user group was not found. Action: Check for the user group "lkadmin" on the system, or reinstall LifeKeeper. See /var/log/lifekeeper.log for troubleshooting. |
000208 | ERROR | pam_setcred establish credentials failed (user %s, retval %d | Cause: Unable to establish valid login credentials for user {user}. The pam_setcred call returned: {retval}. Action: Check /var/log/security and /var/log/messages for more information. |
000209 | ERROR | pam_setcred delete credentials failed (user %s, retval %d | Cause: Unable to clear login credentials for user {user}. The pam_setcred call returned: {retval}. Action: Check /var/log/security and /var/log/messages for more information. |
000902 | ERROR | Error removing system name from loopback address line in /etc/hosts file. You must do this manually before starting the GUI server. | Cause: The system name was not removed from the /etc/hosts file. Action: Remove the system name manually, then restart the GUI server. |
000918 | ERROR | LifeKeeper GUI Server error during Startup | Cause: The GUI server terminated due to an abnormal condition. Action: Check the logs for related errors and try to resolve the reported problem. |
001052 | FATAL | Template resource "%s" on server "%s" does not exist | Cause: LifeKeeper was unable to find the resource {tag} on {server}. |
001053 | ERROR | Cannot access canextend script "%s" on server "%s" | Cause: LifeKeeper was unable to run pre-extend checks because it was unable to find the script CANEXTEND on {server}. Action: Check your LifeKeeper configuration. |
001054 | ERROR | Cannot extend resource "%s" to server "%s" | Cause: LifeKeeper was unable to extend the resource {resource} on {server}. |
001055 | ERROR | Cannot access extend script "%s" on server "%s" | Cause: LifeKeeper was unable to extend the resource hierarchy because it was unable to find the script EXTEND on {server}. Action: Check your LifeKeeper configuration. |
001057 | ERROR | Cannot extend resource "%s" to server "%s" | Cause: LifeKeeper was unable to extend the resource {resource} on {server}. |
001059 | ERROR | Resource with tag "%s" already exists | Cause: The name provided for a resource is already in use. Action: Either choose a different name for the resource, or use the existing resource. |
001060 | ERROR | Resource with either matching tag "%s" or id "%s" already exists on server "%s" for App "%s" and Type "%s" | Cause: The name or id provided for a resource is already in use. Action: Either choose a different name or id for the resource or use the existing resource. |
001061 | ERROR | Error creating resource "%s" on server "%s" | Cause: An unexpected failure occurred while creating a resource. Action: Check the logs for related errors and try to resolve the reported problem. |
001081 | WARN | IP address \"$ip\" is neither v4 nor v6 | Cause: The IP address provided is neither an IPv4 nor an IPv6 address. Action: Please check the name or address provided and try again. |
004024 | ERROR | | Cause: LCD failed to fetch resource information for resource id {id} during resource recovery. Action: Verify the input resource id and retry the recovery operation. |
004028 | ERROR | %s occurred to resource \"%s\" | Cause: Local recovery failed for resource {resource}. Action: Check the logs for related errors and try to resolve the reported problem. |
004055 | ERROR | attempt to remote-remove resource \"%s\" that can’t be found | Cause: Remotely removing a resource from service failed while attempting to find the resource by tag name {tag}. Action: Check input tag name and retry the recovery operation. |
004056 | ERROR | attempt to remote-remove resource \"%s\" that is not a shared resource | Cause: Remotely removing a resource from service failed because the resource with tag name {tag} is not a shared resource. Action: Check the input tag name and retry the recovery operation. |
004060 | ERROR | attempt to transfer-restore resource \"%s\" that can’t be found | Cause: Remote transfer of an in-service resource with tag name {tag} failed because the resource could not be found. Action: Check the input tag name and retry the recovery operation. |
004061 | ERROR | attempt to transfer-restore resource \"%s\" that is not a shared resource with machine \"%s\" | Cause: LifeKeeper failed to find a shared resource with tag name {tag} during remote transfer of an in-service resource from remote machine {machine}. Action: Check the input tag name and retry the recovery operation. |
004089 | ERROR | ERROR: Parallel recovery initialization failed.\n | Cause: Parallel recovery failed to initialize the list of resources in the hierarchy. Action: Check the logs for related errors and try to resolve the reported problem. |
004091 | ERROR | ERROR: fork failed. continuing to next resource\n | Cause: Parallel recovery failed to fork a new process attempting to restore a single resource. |
004093 | ERROR | ERROR: reserve failed. continuing to next resource\n | Cause: Parallel recovery failed to reserve a single resource from the collective hierarchy. Action: Check the logs for related errors and try to resolve the reported problem. |
004096 | ERROR | ERROR: clone %d is hung, attempting to kill it\n | Cause: A subprocess of a resource recovery hung during parallel recovery of the resource hierarchy. Action: The hung subprocess will be killed automatically. |
004097 | ERROR | ERROR: Could not kill clone %d\n | Cause: Failed to kill the hung subprocess. |
004116 | ERROR | %s | Cause: Writing an on-disk version of an in-memory data object failed while attempting to create the intermediate folder. This is a system error. Action: Check the log for detailed error information and determine why the intermediate folder could not be created. |
004117 | ERROR | open(%s | Cause: Writing an on-disk version of an in-memory data object failed while attempting to open a temporary file. This is a system error. Action: Check the log for detailed error information and determine why the file could not be opened. |
004118 | ERROR | write(%s | Cause: Writing an on-disk version of an in-memory data object failed while attempting to write to a temporary file. This is a system error. Action: Check the log for detailed error information and determine why the write failed. |
004119 | ERROR | fsync(%s | Cause: Writing an on-disk version of an in-memory data object failed while attempting to fsync a temporary file. This is a system error. Action: Check the log for detailed error information and determine why the fsync failed. |
004120 | ERROR | close(%s | Cause: Writing an on-disk version of an in-memory data object failed while attempting to close a temporary file. This is a system error. Action: Check the log for detailed error information and determine why the close failed. |
004121 | ERROR | rename(%s, %s | Cause: Writing an on-disk version of an in-memory data object failed while attempting to rename a temporary file {file} to the original file {file}. This is a system error. Action: Check the log for detailed error information and determine why the rename failed. |
004122 | ERROR | open(%s | Cause: Writing an on-disk version of an in-memory data object failed while attempting to open an intermediate directory {directory}. This is a system error. Action: Check the log for detailed error information and determine why the directory could not be opened. |
004123 | ERROR | fsync(%s | Cause: Writing an on-disk version of an in-memory data object failed while attempting to fsync an intermediate directory {directory}. This is a system error. Action: Check the log for detailed error information and determine why the directory fsync failed. |
004124 | ERROR | close(%s | Cause: Writing an on-disk version of an in-memory data object failed while attempting to close an intermediate directory {directory}. This is a system error. Action: Check the log for detailed error information and determine why the directory close failed. |
004125 | ERROR | wrote only %d bytes of requested %d\n | Cause: Writing an on-disk version of an in-memory data object failed because the final size of {size} bytes of written data is less than the requested {number} bytes. Action: Check the log for related error information and determine why the write failed. |
004126 | ERROR | open(%s | Cause: Opening a data file failed while reading an on-disk version of the data object into the buffer. This is a system error. Action: Check the log for detailed error information and determine why the file could not be opened. |
004127 | ERROR | open(%s | Cause: Opening a temporary data file failed while reading an on-disk version of the data object into the buffer. This is a system error. Action: Check the log for detailed error information and determine why the file could not be opened. |
004128 | ERROR | read(%s | Cause: Reading a data file failed while loading an on-disk version of the data object into the buffer. This is a system error. Action: Check the log for detailed error information and determine why the read failed. |
004129 | ERROR | read buffer overflow (MAX=%d)\n | Cause: The read buffer limit {max} was reached while attempting to read an on-disk version of the data object into the buffer. Action: Check the LifeKeeper configuration and restart LifeKeeper. |
004130 | ERROR | close(%s | Cause: Failed to close a data file while reading an on-disk version of the data object into the buffer. This is a system error. Action: Check the log for detailed error information and determine why the close failed. |
004131 | ERROR | rename(%s, %s | Cause: Failed to rename a temporary data file while reading an on-disk version of the data object into the buffer. This is a system error. Action: Check the log for detailed error information and determine why the rename failed. |
004132 | ERROR | Can’t open %s : %s | Cause: Failed to open directory {directory} with error {error} while reading an on-disk version of the application and resource type information into the buffer. This is a system error. Action: Check the log for detailed error information and determine why the directory could not be opened. |
004133 | ERROR | path argument may not be NULL | Cause: The command "lcdrcp" failed during a file copy because the input source path is missing. Action: Check the input source path and retry "lcdrcp". |
004134 | ERROR | destination path argument may not be NULL | Cause: The "lcdrcp" command failed during a file copy because the input destination path is missing. Action: Check the input destination path and retry "lcdrcp". |
004135 | ERROR | destination path can’t be zero length string | Cause: Input destination path was empty during file copy when using "lcdrcp". Action: Check the input destination path and retry "lcdrcp". |
004136 | ERROR | open(%s | Cause: Failed to open the source file path during a file copy using "lcdrcp". This is a system error. Action: Check the existence and availability of the input source path and retry "lcdrcp". Also check the log for detailed error information. |
004137 | ERROR | fstat(%s | Cause: Failed to fetch file attributes using "fstat" during a file copy using "lcdrcp". This is a system error. Action: Check the log for detailed error information. |
004138 | ERROR | file \"%s\" is not an ordinary file (mode=0%o | Cause: The source file was detected to be a non-ordinary file during a file copy using "lcdrcp". Action: Check the input source file path and retry "lcdrcp". |
004151 | FATAL | lcdMalloc failure | Cause: Failed to allocate memory of the requested size in shared memory. Action: This is a fatal error; check the logs for related errors. |
004152 | ERROR | having \"%s\" depend on \"%s\" would produce a loop | Cause: Adding the requested dependency would produce a dependency loop. Action: Correct the requested dependency and retry the dependency creation. |
004164 | ERROR | Priority mismatch between resources %s and %s. Dependency creation failed. | Cause: The priorities for {resource1} and {resource2} do not match. Action: Resource priorities must match. Change one or both priorities to the same value and retry creating the dependency. |
004176 | ERROR | %s | Cause: The command "doabort" failed to create the {directory} for writing the core file. This is a system error. Action: Check the logs for related errors and try to resolve the reported problem. |
004182 | ERROR | received signal %d\n | Cause: Received signal {signum}. |
004186 | ERROR | %s: ::receive(%d) protocol error on incoming_mailbox %s | Cause: In function {function}, an attempt to receive a message within the {timeout}-second timeout failed because incoming mailbox {mailbox} was not idle. Action: Check the status of the connections in the cluster and retry the operation. |
004190 | ERROR | %s: ::receive(%d) did not receive message within %d seconds on incoming_mailbox %s | Cause: In function {function}, an attempt to receive a message within the {timeout}-second timeout failed on incoming mailbox {mailbox}. Action: Check the status of the connections in the cluster and retry the operation. |
004204 | ERROR | attempt to send illegal message | Cause: Sending a message failed due to an illegal message. |
004205 | ERROR | destination system \"%s\" is unknown | Cause: Sending a message failed due to an unknown destination system name {system}. Action: Check the configuration and status of the system and check the logs for related errors. Retry the operation after the system is fully initialized. |
004206 | ERROR | destination mailbox \"%s\" at system \"%s\" is unknown | Cause: Sending a message failed due to an unknown mailbox {mailbox} on destination system {system}. This error may be caused by sending a message before the LCD is fully initialized. Action: Check the configuration and status of the system and check the logs for related errors. Retry the operation after the system is fully initialized. |
004208 | ERROR | destination system \"%s\" is alive but the \"%s\" mailbox process is not listening. | Cause: Sending a message failed. The network connection to destination system {system} is alive, but contact with the destination mailbox has been lost. Action: Check the configuration and status of the system and check the logs for related errors. Retry the operation after the system is fully initialized. |
004209 | ERROR | destination system \"%s\" is dead. | Cause: Sending a message failed because the connection with destination system {system} was lost. Action: Check the configuration and status of the system and check the logs for related errors. Retry the operation after the system is fully initialized. |
004211 | ERROR | can’t send to destination \"%s\" error=%d | Cause: Sending a message to destination system {system} failed due to internal error {error}. Action: Check the logs for related errors and try to resolve the reported problem. |
004217 | ERROR | destination system \"%s\" is out of service. | Cause: Sending a message failed because the connection with destination system {system} was lost. Action: Check the configuration and status of the system and check the logs for related errors. Retry the operation after the system is fully initialized. |
004221 | ERROR | destination system \"%s\" went out of service. | Cause: Sending a message failed because the connection with destination system {system} was lost. Action: Check the configuration and status of the system and check the logs for related errors. Retry the operation after the system is fully initialized. |
004228 | ERROR | Can’t get host name from getaddrinfo( | Cause: Creating a network object failed because getting the host name using "getaddrinfo()" failed. Action: Check the configuration and status of the system, then retry the operation. |
004234 | ERROR | IP address pair %s already in use | Cause: Creating a network object failed because the IP address pair {pair} is already in use for a TCP communication path. Action: Check the input IP address pair and retry the network creation. |
004258 | WARN | Communication to %s by %s FAILED | Cause: Communication to system {system} by communication path {path} failed. Action: Check system configuration and network connection. |
004261 | WARN | COMMUNICATIONS failover from system \"%s\" will be started. | Cause: A failover from system {system} will be started because all communication paths are down. Action: Check the system configuration and network connection status. Confirm the system status when the failover is done. |
004292 | ERROR | resource \"%s\" %s | Cause: A resource could not be brought in service because its current state is unknown. Action: Check the logs for related errors and try to resolve the reported problem. |
004293 | ERROR | resource \"%s\" %s | Cause: A resource could not be brought into service because its current state disallows it. Action: Check the logs for related errors and try to resolve the reported problem. |
004294 | ERROR | resource \"%s\" requires a license (for Kit %s/%s) but none is installed | Cause: The resource’s related recovery kit requires a license. Action: Install a license for the recovery kit on the server where the resource was to be brought into service. |
004297 | ERROR | secondary remote resource \"%s\" on machine \"%s\" is already in-service, so resource \"%s\" on machine \"%s\" can’t be brought in-service. | Cause: Resource {resource} could not be brought into service on machine {machine} because its secondary remote resource {resource} is already in service on machine {machine}. Action: Manually take the remote resource out of service, then bring the local resource into service again. |
004298 | ERROR | remote resource \"%s\" on machine \"%s\" is still in-service, restore of resource \"%s\" will not be attempted!\n | Cause: The restore of the resource is blocked due to the resource being in-service on another machine. Action: Take the resource out of service on the specified machine. |
004300 | ERROR | restore of resource \"%s\" has failed | Cause: A resource could not be brought into service. Action: Check the logs for related errors and try to resolve the reported problem. |
004311 | ERROR | can’t perform \"remove\" action on resources in state \"%s\" | Cause: A resource could not be put out of service due to the current state being {state}. Action: Check the logs for related errors and try to resolve the reported problem. |
004313 | ERROR | remove of resource \"%s\" has failed | Cause: A resource {resource} could not be put out of service. Action: Check the logs for related errors and try to resolve the reported problem. |
004318 | ERROR | %s,priv_globact(%d,%s): script %s FAILED returning %d | Cause: A global action script failed with the specified error code. Action: Check the logs for related errors and try to resolve the reported problem. |
004332 | ERROR | action \"%s\" has failed on resource \"%s\" | Cause: A resource action failed. Action: Check the logs for related errors and try to resolve the reported problem. |
004351 | ERROR | a \"%s\" equivalency must have one remote resource | Cause: Creation of a {eqvtype} equivalency failed because both input tag names exist on the same system. Action: Correct the input resource tag names and retry the operation. |
004356 | WARN | Use unsupported option %s for remove. This option may be removed in future upgrades. | Cause: An unsupported option was passed to the remove operation. Action: Check the provided options for validity. |
004376 | FATAL | wait period of %u seconds for LCM to become available has been exceeded (lock file \"%s\" not removed | Cause: The LCM daemon did not become available within a reasonable time and the LCD cannot operate without the LCM. Action: Check the logs for related errors and try to resolve the reported problem. |
004386 | ERROR | initlcdMalloc;shmget | Cause: A shared memory segment could not be initialized. Action: Check the logs for related errors and try to resolve the reported problem. Review the product documentation and ensure that the server meets the minimum requirements and that the operating system is configured properly. |
004439 | WARN | intermachine recovery skipped for %s. Failed to obtain resource_state_change lock.\n | Cause: The local recovery action failed to obtain the resource state_change lock; /opt/LifeKeeper/lkadm/bin/getlocks failed to retrieve anything meaningful. Action: Check /var/log/lifekeeper.log for troubleshooting. |
004444 | WARN | License key (for Kit %s/%s) has EXPIRED | Cause: Your license has expired. Action: Contact Support to obtain a new license. |
004445 | WARN | License key (for Kit %s/%s) will expire at midnight in %ld days | Cause: Your license is about to expire. Action: Contact Support to obtain a new license. |
004466 | ERROR | system \"%s\" not defined on machine \"%s\". | Cause: The specified system name is not known. Action: Verify the system name, and try the operation again. |
004467 | ERROR | system \"%s\" unknown on machine \"%s\" | Cause: The specified system name is not recognized. Action: Verify the system name, and try the operation again. |
004494 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004495 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004496 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004497 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004498 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004499 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004500 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004501 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004502 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004503 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004504 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004505 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004506 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004507 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004508 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004509 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004510 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004511 | ERROR | COMMAND OUTPUT: %s | Cause: An action or event script produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
004512 | ERROR | | Cause: An error occurred on the remote machine. Action: Check the logs on the remote machine for additional details. |
004565 | ERROR | can’t set resource state type to ILLSTATE | Cause: An attempt was made to put a resource into an illegal state. Action: Specify a valid resource state. |
004567 | ERROR | split brain detected while setting resource \"%s\" to \"%s\" state (SHARED equivalency to resource \"%s\" on machine \"%s\" which is in state \"%s\"). Setting local resource ISP but aborting the operation. | Cause: Changing resource {resource} to state {state} failed since its SHARED equivalent resource {resource} on machine {machine} is in state {state}. Action: A split brain situation has occurred. The failover operation has been aborted. You should manually put the split brain resources (ISP on multiple systems) in the proper state. |
004575 | ERROR | COMMAND OUTPUT: %s | Cause: The add_driver call produced unexpected output. Action: Check /var/log/lifekeeper.log for troubleshooting. |
004607 | ERROR | no resource instance has tag \"%s\" | Cause: No resource with the provided tag exists. Action: Provide a valid tag, or check the logs for related errors and try to resolve the reported problem. |
004608 | ERROR | no resource instance has identifier \"%s\" | Cause: No resource exists with the provided identifier. Action: Provide a valid identifier, or check the logs for related errors and try to resolve the reported problem. |
004619 | ERROR | resource with tag \"%s\" already exists with identifier \"%s\" | Cause: The provided tag name already exists. Action: Choose a different tag name. |
004620 | ERROR | resource with identifier \"%s\" already exists with tag \"%s\" | Cause: The provided identifier already exists. Action: Choose a different identifier to use for this resource. |
004643 | ERROR | Instance tag name is too long. It must be shorter than %d characters. | Cause: Tag name is too long. Action: Provide a tag name that is less than 256 characters. |
004646 | ERROR | Tag name contains illegal characters | Cause: Tag name contains an illegal character. Action: Specify a tag name that does not include one of these characters: _-./ |
004691 | ERROR | can’t set both tag and identifier at same time | Cause: Both a tag and an identifier were specified. Action: Provide only one of tag or identifier. |
004745 | ERROR | failed to access lkexterrlog path=%s | Cause: The utility "lkexterrlog" cannot be accessed for collecting system information. Action: Check the installation of the "steeleye-lk" package and make sure the utility "lkexterrlog" is accessible. |
004746 | ERROR | lkexterrlog failed runret=%d cmdline=%s | Cause: The execution of utility "lkexterrlog" failed when collecting system information. Action: Check the logs for related errors and try to resolve the reported problem. |
004782 | ERROR | Resource \"%s\" was in state \"%s\" before event occurred – recovery will not be attempted | Cause: The resource is already in service. Recovery will not be attempted. |
004783 | ERROR | Resource \"%s\" was already in state \"%s\" before event occurred | Cause: A resource was not in an appropriate state to allow recovery. Action: Put the resource in the ISP state if recovery is still needed. |
004786 | ERROR | %s on failing resource \"%s\" | Cause: An error occurred while attempting to recover a resource. Action: Check the logs for related errors and try to resolve the reported problem. |
004788 | EMERG | failed to remove resource ‘%s’. SYSTEM HALTED. | Cause: An error occurred that prevented a resource from being taken out of service during a recovery. The system has been restarted to ensure the resource is not active on two systems. Action: Check the logs for related errors and try to resolve the reported problem. |
004793 | ERROR | lcdsendremote transfer resource \"%s\" to \"%s\" on machine \"%s\" failed (rt=%d | Cause: A failure occurred while transferring a resource and its dependencies to another system. Action: Check the logs for related errors and try to resolve the reported problem. Also check the log on the other system for related errors. |
004797 | ERROR | Restore of SHARED resource \"%s\" has failed | Cause: There was an error while restoring a resource. Action: Check the logs for related errors and try to resolve the reported problem. |
004806 | ERROR | Restore in parallel of resource \"%s\" has failed; will re-try serially | Cause: Parallel recovery failed. Action: No action is required; the system will continue to recover serially. If serial recovery also fails, check for error messages related to the resources that failed to recover to determine what further actions to take. |
004819 | ERROR | read_temporal_recovery_log(): failed to fopen file: %s. fopen() %s. | Cause: Opening the temporal recovery log file {file} in preparation for loading it into memory failed with error {error}. Action: Check the system log files and correct any reported errors before retrying the operation. |
004820 | ERROR | read_temporal_recovery_log(): failed to malloc initial buf for temporal_recovery_stamp. | Cause: Loading the temporal recovery log information into memory failed when attempting to acquire memory to store the log information. Action: Check the system log files and correct any reported errors before retrying the operation. |
004821 | ERROR | read_temporal_recovery_log(): failed to reallocate buffer for temporal_recovery_stamp. | Cause: Loading the temporal recovery log information into memory failed when attempting to increase the amount of memory required to store the log information. Action: Check system log files and correct any reported errors before retrying the operation. |
004822 | ERROR | write_temporal_recovery_log(): failed to open file: %s. | Cause: The update of the temporal recovery log file was terminated when the open of the temporary file {temporary name} failed. Action: Check system log files and correct any reported errors before retrying the operation. |
004823 | ERROR | rename(%s, %s) failed. | Cause: The update of the temporal recovery log file was terminated when the rename of the temporary file {temporary name} to the real log file {real name} failed. Action: Check system log files and correct any reported errors before retrying the operation. |
004827 | ERROR | b | Cause: This fatal error message includes the list of locks that could not be obtained, together with the current and attempted locks. Action: Review the list of locks to determine the cause of the failure. |
004829 | FATAL | err=%s line=%d Semid=%d numops=%zd perror=%s | Cause: The modification of semaphore ID {semaphore} failed with error {err} and error message description {perror}. Action: Check adjacent log messages for more details. Also, check the system log files and correct any reported errors before retrying the operation. |
004860 | ERROR | restore ftok failed for resource %s with path %s | Cause: The attempt to generate an IPC key for use in semaphore operations for resource {tag} using path {path} failed. This is a system error. Action: Check adjacent log messages for more details. Also check system log files and correct any reported errors before retrying the operation. |
004861 | ERROR | semget failed with error %d | Cause: The attempt to retrieve the semaphore identification associated with the instances files has failed. This is a system error. Action: Check adjacent log messages for more details. Also check system log files and correct any reported errors before retrying the operation. |
004862 | ERROR | semctl SEMSET failed with error %d | Cause: The attempt to create and initialize a semaphore used during the recovery process has failed with the error {error number}. This is a system error. Action: Check adjacent log messages for more details. Also, check system log files and correct any reported errors before retrying the operation. |
004863 | ERROR | semop failed with error %d | Cause: The attempt to set a semaphore used during the recovery process has failed with the error {error number}. This is a system error. Action: Check adjacent log messages for more details. Also, check system log files and correct any reported errors before retrying the operation. |
004864 | ERROR | semctl SEMSET failed with error %d | Cause: The attempt to release a semaphore used during the recovery process has failed with the error {error number}. This is a system error. Action: Check adjacent log messages for more details. Also, check system log files and correct any reported errors before retrying the operation. |
004865 | ERROR | restore action failed for resource %s (exit: %d) | Cause: The attempt to bring resource {tag} In Service has failed. Action: Check adjacent log messages for more details. Correct any reported errors and retry the operation. |
004872 | ERROR | Remote remove of resource \"%s\" on machine \"%s\" failed (rt=%d) | Cause: The request to take resource {tag} Out of Service on {server} for transfer to the local system has failed. Action: Check adjacent log messages for more details on the local system. Also, check the log messages on {server} for further details on the failure to remove the resource. |
004875 | ERROR | remote remove of resource \"%s\" on machine \"%s\" failed | Cause: The request to take resource {tag} Out of Service on {server} for transfer to the local system has failed. Action: Check adjacent log messages for more details on the local system. Also, check the log messages on {server} for further details on the failure to remove the resource. |
004876 | ERROR | remote remove of resource \"%s\" on machine \"%s\" failed | Cause: The request to take resource {tag} Out of Service on {server} for transfer to the local system has failed. Action: Check adjacent log messages for more details on the local system. Also, check the log messages on {server} for further details on the failure to remove the resource. |
005045 | ERROR | tli_fdget_i::execute unable to establish a listener port | Cause: A network connection could not be properly configured. Action: Verify that all network hardware and drivers are properly configured. If this message continues and resources cannot be put into service, contact Support. |
005055 | ERROR | tli_fdget_o::execute – async connect failure | Cause: A network connection could not be properly configured. Action: Verify that all network hardware and drivers are properly configured. If this message continues and resources cannot be put into service, contact Support. |
005061 | ERROR | tli_fdget_o::execute – bind socket | Cause: A network connection could not be properly configured. Action: Verify that all network hardware and drivers are properly configured. If this message continues and resources cannot be put into service, contact Support. |
005090 | WARN | system_driver::add_driver: cmd=%s\n | Cause: Trace output logged when a driver is added (LCM_TRACE is defined). Action: Check /var/log/lifekeeper.log for troubleshooting. |
005108 | WARN | system_driver::rm_driver: cmd=%s\n | Cause: Trace output logged when a driver is removed (LCM_TRACE is defined). Action: Check /var/log/lifekeeper.log for troubleshooting. |
005145 | ERROR | opening the file | Cause: A pipe could not be opened or created. Action: Check adjacent log messages for more details. |
005164 | ERROR | tli_handler::handle-error:sending/receiving data message | Cause: A message failed to be sent or received. Action: Check adjacent log messages for more details. This may be a temporary error but if this error continues and servers can’t communicate then verify the network configuration on the servers. |
005165 | WARN | errno %d\n | Cause: A message failed to be sent or received. Action: Check adjacent log messages for more details. This may be a temporary error but if this error continues and servers can’t communicate then verify the network configuration on the servers. |
005166 | WARN | poll 0x%hx\n | Cause: A message failed to be sent or received. Action: Check adjacent log messages for more details. This may be a temporary error but if this error continues and servers can’t communicate then verify the network configuration on the servers. |
005167 | WARN | handler for sys %s\n | Cause: A message failed to be sent or received. Action: Check adjacent log messages for more details. This may be a temporary error but if this error continues and servers can’t communicate then verify the network configuration on the servers. |
005225 | WARN | so_driver::handle_error: sending/receiving data message errno %d: %s | Cause: A message failed to be sent or received. Action: Check adjacent log messages for more details. This may be a temporary error, but if this error continues and servers cannot communicate, verify the network configuration on the servers. |
005235 | WARN | found tcp connection to iwstp\n | Cause: Trace output (LCM_TRACE is defined); a TCP connection to iwstp was found. Action: N/A |
005236 | WARN | didn’t find tcp connection to iwstp\n | Cause: Trace output (LCM_TRACE is defined); a TCP connection to iwstp could not be found. Action: Check /var/log/lifekeeper.log for troubleshooting. |
005237 | WARN | lcm_handler retry send from %s:%s to %s:%s (%d)\n | Cause: lcm_handler is retrying a send. Action: Check /var/log/lifekeeper.log for lcm_handler related errors. |
005238 | WARN | detected duplicate request from %s:%s to %s:%s\n | Cause: A duplicate retry request was detected from the remote system. Action: Check /var/log/lifekeeper.log for troubleshooting. |
005239 | WARN | lcm_handler retry timer set to %d based on lcd remote timeout of %d s (%d s) | Cause: The lcm_handler retry timer was adjusted to match the LCD remote timeout for the remote system. Action: Check /var/log/lifekeeper.log for troubleshooting. |
005240 | WARN | lcm_handler retry count/time (%d/%zu) has exceeded the maximum. Giving up…\n | Cause: The retry limit (900) was reached and the send was abandoned. Action: Check /var/log/lifekeeper.log for troubleshooting. |
005241 | WARN | dup_list: %s %s %s %s %d %d (%d) | |
005242 | WARN | clean up stale dup_list entry (%d s): %s:%s to %s:%s last: %s | Cause: A stale dup_list entry is being cleaned up. Action: N/A |
005243 | WARN | add new dup_list entry %s:%s to %s:%s %d %d (%d) | Cause: A new dup_list entry is being added. Action: N/A |
005244 | WARN | closing fd %d\n | Cause: Shutdown fd Action: N/A |
005245 | WARN | openpoll fd %d\n | Cause: Initialize poll Action: N/A |
005246 | WARN | tli_fdget_o[%u] call getaddrinfo (local)\n | |
005247 | WARN | tli_fdget_o[%u] call getaddrinfo (remote)\n | |
005248 | WARN | LCMPORT is out of range (%d). | |
006012 | ERROR | quickCheck script ‘%s’ (%d) failed to exit after %lu seconds. Forcibly terminated. Please examine the script or adjust the LKCHECKINTERVAL parameter in %s. | Cause: A quickCheck script is probably taking too long or hanging. Action: Perform the steps listed in the message text. |
006014 | ERROR | LKCHECKINTERVAL parameter is too short. It is currently set to %ld seconds. It should be at least %ld seconds. Please adjust this parameter in %s and execute ‘kill %d’ to restart the lkcheck daemon. | Cause: lkcheck was not able to check all resources within the LKCHECKINTERVAL. Action: Check if any quickCheck is running longer than expected. This may indicate a problem with a resource. Increase the LKCHECKINTERVAL parameter if needed. |
006102 | ERROR | COMMAND OUTPUT: $LKROOT/bin/sendevent | Cause: This is output from a "sendevent" (event generator) command. Action: Check adjacent log messages for more details. |
006103 | ERROR | COMMAND OUTPUT: $LKROOT/bin/sendevent | Cause: This is output from a "sendevent" (event generator) command. Action: Check adjacent log messages for more details. |
006104 | ERROR | COMMAND OUTPUT: $LKROOT/bin/sendevent | Cause: This is output from a "sendevent" (event generator) command. Action: Check adjacent log messages for more details. |
006502 | ERROR | CPU usage has exceeded the threshold ($threshold%) for $count check cycles. | Cause: CPU usage exceeded the configured threshold. Action: Check /var/log/lifekeeper.log for troubleshooting and monitor system performance. |
006504 | ERROR | Could not open /proc/meminfo | Cause: An error occurred when opening /proc/meminfo. Action: Check file permissions and see /var/log/lifekeeper.log for more troubleshooting. |
006505 | ERROR | Memory usage has exceeded the threshold ($threshold%) for $count check cycles. | Cause: Memory usage exceeded the configured threshold. Action: Check system memory requirements and usage. |
006508 | ERROR | [$SUBJECT event] mail returned $err | Cause: sendmail returned an error Action: Check mail settings. Check /var/log/lifekeeper.log for troubleshooting. |
006509 | ERROR | COMMAND OUTPUT: cat /tmp/err$$ | Cause: This logs the error message generated when the email send failed. Action: Investigate /var/log/lifekeeper.log for more information. |
006511 | ERROR | snmptrap returned $err for Trap 190 | Cause: The SNMP network management node returned an error. Action: Check log for more specific information about the SNMP trap error code in /var/log/lifekeeper.log |
006512 | ERROR | COMMAND OUTPUT: cat /tmp/err$$ | Cause: The snmptrap command returned an error code. Action: See /var/log/lifekeeper.log for more details. |
006514 | ERROR | [$SUBJECT event] mail returned $err | Cause: The mail command returned an error for the OSU quickCheck notification. Action: Check /var/log/lifekeeper.log for more information. |
006515 | ERROR | COMMAND OUTPUT: cat /tmp/err$$ | Cause: This logs the error returned when the OSU quickCheck notification mail failed. Action: Check /var/log/lifekeeper.log for troubleshooting. |
006517 | ERROR | snmptrap returned $err for Trap 200 | Cause: The snmptrap command returned an error for Trap 200. Action: Check /var/log/lifekeeper.log for troubleshooting. |
006518 | ERROR | COMMAND OUTPUT: cat /tmp/err$$ | Cause: snmp call returned an error for trap 200 Action: Check /var/log/lifekeeper.log for troubleshooting. |
006520 | ERROR | Failed to update error count in $cpu_file: $! | Cause: The error count in $cpu_file could not be updated; the file may not exist or another system-related issue occurred. Action: Check /var/log/lifekeeper.log for troubleshooting. |
006521 | ERROR | Failed to update error count in $mem_file: $! | Cause: The error count in $mem_file could not be updated; the file may not exist or a permission or other system-related issue occurred. Action: Check /var/log/lifekeeper.log for troubleshooting. |
006523 | ERROR | The SNHC_CPUCHECK_THRESHOLD setting is not valid. Please set SNHC_CPUCHECK_THRESHOLD to a value between 10 and 99 in /etc/default/LifeKeeper. If not set, the value will default to 99. | Cause: The SNHC_CPUCHECK_THRESHOLD setting is not valid Action: Check SNHC_CPUCHECK_THRESHOLD in /etc/default/LifeKeeper |
006524 | ERROR | The SNHC_CPUCHECK_TIME setting is not valid. Please set SNHC_CPUCHECK_TIME to a value between 1 and 100 in /etc/default/LifeKeeper. If not set, the value will default to 1. | Cause: SNHC_CPUCHECK_TIME setting is not valid Action: Please set SNHC_CPUCHECK_TIME to a value between 1 and 100 in /etc/default/LifeKeeper. |
006525 | ERROR | The SNHC_MEMCHECK_THRESHOLD setting is not valid. Please set SNHC_MEMCHECK_THRESHOLD to a value between 10 and 99 in /etc/default/LifeKeeper. If not set, the value will default to 99. | Cause: SNHC_MEMCHECK_THRESHOLD setting is not valid Action: set SNHC_MEMCHECK_THRESHOLD to a value between 10 and 99 in /etc/default/LifeKeeper. |
006526 | ERROR | The SNHC_MEMCHECK_TIME setting is not valid. Please set SNHC_MEMCHECK_TIME to a value between 1 and 100 in /etc/default/LifeKeeper. If not set, the value will default to 1. | Cause: The SNHC_MEMCHECK_TIME setting is not valid. Action: Set SNHC_MEMCHECK_TIME to a value between 1 and 100 in /etc/default/LifeKeeper. |
006528 | ERROR | Could not open /proc/stat | Cause: /proc/stat could not be opened; the file may not exist or a permission or other system-related issue occurred. Action: Check /var/log/lifekeeper.log for troubleshooting. |
006529 | ERROR | Could not open /proc/stat | Cause: /proc/stat could not be opened; the file may not exist or a permission or other system-related issue occurred. Action: Check /var/log/lifekeeper.log for troubleshooting. |
006530 | ERROR | Could not use $tmp_path | Cause: The directory $tmp_path did not exist and the attempt to create it failed. Action: Check /var/log/lifekeeper.log for troubleshooting. |
007053 | ERROR | malloc failed. Assume that it is a monitoring target device. | Cause: malloc call failed. Action: N/A |
007058 | ERROR | %s: %s failed on ‘%s’, result:%d, Sense Key = %d. | Cause: A SCSI device couldn’t be reserved or have its status checked. This may be because the storage is malfunctioning or because the disk has been reserved by another server. Action: Check adjacent log messages for more details and verify that resources are being handled properly. |
007059 | ERROR | %s: %s failed on ‘%s’, result:%d. | Cause: A SCSI device couldn’t be reserved or have its status checked. This may be because the storage is malfunctioning or because the disk has been reserved by another server. Action: Check adjacent log messages for more details and verify that resources are being handled properly. |
007060 | EMERG | %s: failure on device ‘%s’. SYSTEM HALTED. | Cause: A SCSI device couldn’t be reserved or have its status checked. This may be because the storage is malfunctioning or because the disk has been reserved by another server. THE SERVER WILL BE REBOOTED/HALTED. Action: Verify that the storage is functioning properly and, if so, that resources were handled properly and have been put in service on another server. |
007072 | ERROR | %s: failed to open SCSI device ‘%s’, initiate recovery. errno=0x%x, retry count=%d. | Cause: The protected SCSI device could not be opened. The device may be failing or may have been removed from the system. Action: The system will be halted or a failover to the backup node will be initiated. The default action in this case is a failover, but this can be modified with the SCSIERROR tunable. |
007073 | ERROR | %s: failed to open SCSI device ‘%s’, RETRY. errno=%d, retry count=%d. | Cause: The protected SCSI device could not be opened. The device may be failing or may have been removed from the system. Action: This error is not critical. The operation will be retried in 5 seconds. If the problem persists, the system will perform a halt or resource failover. |
007075 | ERROR | %s: RESERVATION CONFLICT on SCSI device ‘%s’. ret=%d, errno=0x%x, retry count=%d. | Cause: A SCSI device couldn’t be reserved due to a conflict with another server. This may be because the storage is malfunctioning or because the disk has been reserved by another server. Action: Check adjacent log messages for more details and verify that resources are handled properly. |
007077 | ERROR | %s: DEVICE FAILURE on SCSI device ‘%s’, initiate recovery. ret=%d, errno=0x%x, retry count=%d. | Cause: A SCSI device couldn’t be reserved or have its status checked. This may be because the storage is malfunctioning or because the disk has been reserved by another server. Action: Check adjacent log messages for more details and verify that resources are handled properly. |
007078 | ERROR | %s: DEVICE FAILURE on SCSI device ‘%s’, RETRY. ret=%d, errno=0x%x, retry count=%d. | Cause: A SCSI device couldn’t be reserved or have its status checked. This may be because the storage is malfunctioning or because the disk has been reserved by another server. Action: Check adjacent log messages for more details and verify that resources are handled properly. |
010002 | WARN | flag $flag not present, send message again. | Cause: This message indicates an incomplete process that will be retried. Action: Check adjacent log messages for repeated errors. |
010003 | ERROR | COMMAND OUTPUT: $LKBIN/ins_remove | Cause: This message is part of the output from an "ins_remove" command. Action: Check adjacent log messages for more details. This may not be a true error. |
010006 | WARN | flg_list -d $i took more than $pswait seconds to complete… | Cause: A flag list operation on a server took much longer than expected. There may be a problem communicating with the other server. Action: Check adjacent log messages for more details. |
010007 | ERROR | flag $flag not present, switchovers may occur. | Cause: One of the servers in the cluster could not be told to disallow failover operations from the current server. Action: Check adjacent log messages for more details and monitor the cluster for unexpected behavior. |
010008 | WARN | flag $flag not present, send message again. | Cause: A process is incomplete but will be retried. Action: Check adjacent log messages for more details and for repeated warnings/errors. |
010023 | FATAL | LifeKeeper failed to initialize properly. | Cause: There was a fatal error while attempting to start LifeKeeper. Action: Check adjacent log messages for more details. |
010025 | ERROR | `printf ‘Unable to get a unique tag name on server "%s" for template resource "%s"’ $MACH $DISK` | Cause: A suitable tag during the create process for a storage resource could not be automatically generated. Action: Check adjacent log messages for more details. Retry the operation if there are no other errors. |
010034 | FATAL | Unable to start lcm. | Cause: A core component of the software could not be started. Action: Check adjacent log messages for more details and try to resolve the reported problem. |
010038 | WARN | Waiting for LifeKeeper core components to initialize has exceeded 10 seconds. Continuing anyway, check logs for further details. | Cause: Some parts of the software are taking longer than expected to start up. Action: Perform the steps listed in the message text. |
010039 | WARN | Waiting for LifeKeeper core components to initialize has exceeded 10 seconds. Continuing anyway, check logs for further details. | Cause: Some parts of the software are taking longer than expected to start up. Action: Perform the steps listed in the message text. |
010046 | ERROR | The dependency creation failed on server $SERVER:" `cat $TEMP_FILE` | Cause: A dependency relationship could not be created on the given server. Action: Check the logs for related errors and try to resolve the reported problem |
010063 | ERROR | $REMSH error | Cause: A command to request data to backup from another server failed. Action: Check the logs for related errors and try to resolve the reported problem. |
010085 | ERROR | lkswitchback($MACH): Automatic switchback of \"$tag\" failed | Cause: The resource was not switched back over as expected. Action: Check the logs for related errors and try to resolve the reported problem. |
010102 | ERROR | admin machine not specified | Cause: Invalid parameters were specified for the "getlocks" operation. Action: Verify the parameters and retry the operation. If this error happens during normal operation, contact Support. |
010107 | WARN | Lock for $m is ignored because system is OOS | Cause: A lock was ignored because the system for which the lock was created is not alive. Action: Check the logs for related errors. This may be a harmless error. |
010108 | ERROR | lock acquisition timeout | Cause: Acquiring a lock took longer than expected/allowed. Action: Check the logs for related errors and try to resolve the reported problem. |
010109 | ERROR | could not get admin locks." `cat /tmp/ER$$` | Cause: The software failed to acquire a lock that is required to manage resources. Action: Check the logs for related errors and try to resolve the reported problem. |
010112 | ERROR | lcdrcp failed with error no: $LCDRCPRES | Cause: A file could not be copied to another server. Action: Check the logs for related errors and try to resolve the reported problem. |
010116 | ERROR | unable to set !lkstop flag | Cause: A flag could not be set to indicate that the server is being stopped by user request. Action: Check the logs for related errors and try to resolve the reported problem. |
010121 | ERROR | Extended logs aborted due to a failure in opening $destination. ($syserrmsg) | Cause: The execution of utility "lkexterrlog" failed when opening the extended log file. Action: Check the logs for related errors and try to resolve the reported problem. |
010124 | ERROR | lkswitchback($MACH): Automatic switchback of \"$loctag\" to $UNAME is not allowed${delay_msg}. | Cause: One or more resources in the hierarchy with $loctag are not able to switchback. Action: Manually switch the hierarchy back when all resources are able. |
010125 | WARN | COMMAND OUTPUT: $LKROOT/bin/lkswitchback | Cause: Switchback cannot be completed at this time and will be retried after the time specified in the message. Action: None if the retry should proceed. To stop the delayed retry, cancel the "at" job. |
010132 | ERROR | Unable to retrieve reservation id from "%s". Error: "%s". Attempting to regenerate. | Cause: Unable to open the file /opt/LifeKeeper/config/.reservation_id to retrieve the unique id used for SCSI 3 persistent reservations. Action: None. An attempt will be made to regenerate the ID and update the file. |
010135 | ERROR | The current reservation ID of "%s" is not unique within the cluster. A new ID must be generated by running "%s/bin/genresid -g" on "%s". | Cause: The reservation id defined for the system is not unique within the cluster and cannot be used. Action: Take all resources out of service on this node and then run "/opt/LifeKeeper/bin/genresid -g" to generate a unique reservation id. |
010136 | ERROR | Unable to store reservation id in "%s". Error: "%s" | Cause: Unable to open the file /opt/LifeKeeper/config/.reservation_id to store the unique id used for SCSI 3 persistent reservations. Action: Correct the error listed as the reason the open failed, take all resources out of service on this node, and then run "/opt/LifeKeeper/bin/genresid -g" to generate a new unique reservation id. |
010137 | ERROR | Failed to generate a reservation ID that is unique within the cluster. | Cause: The generated reservation id is already defined on another node in the cluster and must be unique within the cluster. Action: Take all resources out of service on this node and then run "/opt/LifeKeeper/bin/genresid -g" to generate a new unique reservation id. |
010138 | ERROR | $message | Cause: A generic message logged at the ERROR level; the content is supplied by the caller in $message. Action: N/A |
010139 | WARN | $message | Cause: A generic message logged at the WARN level; the content is supplied by the caller in $message. |
010140 | ERROR | $COMMAND_SNMPTRAP returned $exitcode for Trap $trap:$result | Cause: Sending the SNMP trap returned a status code other than 0 (success). Action: Check /var/log/lifekeeper.log for troubleshooting. |
010141 | ERROR | LK_TRAP_MGR is specified in /etc/default/LifeKeeper, but $COMMAND_SNMPTRAP command is not in PATH. | Cause: The snmptrap command is not in PATH. Action: Check /var/log/lifekeeper.log for troubleshooting. |
010142 | ERROR | $COMMAND_EMAIL returned $exitcode:$result | Cause: Could not send the email notification to the email alias. Action: Check /var/log/lifekeeper.log for troubleshooting. |
010143 | ERROR | LK_NOTIFY_ALIAS is specified in /etc/default/LifeKeeper, but $COMMAND_EMAIL command is not in PATH. | Cause: The mail command is not in $PATH. Action: Manually update $PATH to include the mail command, or reinstall/update LifeKeeper. |
010144 | ERROR | can’t opendir $LICENSE_DIR: $! | Cause: An error occurred while opening the LifeKeeper license directory. Action: Check that the /var/LifeKeeper/license directory exists. If not, re-create the directory manually or reinstall LifeKeeper. |
010145 | ERROR | lktest failed | Cause: lktest failed while LifeKeeper appears to still be running. Action: Check /var/log/lifekeeper.log for errors. |
010146 | ERROR | lkcheck failed | Cause: lkcheck is hung or has not run for some time. Action: LifeKeeper may have crashed. Check /var/log/lifekeeper.log for troubleshooting. |
010147 | ERROR | ins_list failed: exit code = $exit_code | Cause: LifeKeeper encountered an error while running ins_list. Action: Check the health of LifeKeeper via lcdstatus. |
010159 | ERROR | Maintenance mode disable currently in progress, can’t enable maintenance mode. If this problem persists, consider using the --force option. | Cause: A maintenance mode disable is currently in progress. Action: N/A |
010160 | ERROR | Maintenance mode enable currently in progress, can’t disable maintenance mode. If this problem persists, consider using the --force option. | Cause: A maintenance mode enable is currently in progress. Action: N/A |
010161 | ERROR | $tag is not a valid resource tag on the local machine, aborting. Please check the spelling and try again. | Cause: The tag passed with --tag is not a valid resource tag on the local machine. Action: Check the tag passed to ensure it is valid. |
010163 | ERROR | $cmd script not found or not executable on system $sys. | Cause: The $cmd script was not found or is not executable on system $sys. Action: Check /var/log/lifekeeper.log for troubleshooting. |
010165 | ERROR | An error occurred while running \’$LKROOT/lkadm/subsys/appsuite/sap/bin/$cmd -m=${opt_mode}${tag_cmd}${force_cmd}\’ on system $sys (exit code: $remexec_ret). Please inspect the logs on that system for more information. | |
010168 | WARN | Maintenance mode was not fully ${opt_mode}d for at least one resource on system $sys. | Cause: Maintenance mode was not set properly for the system Action: Check /var/log/lifekeeper.log for troubleshooting. |
010172 | ERROR | Maintenance mode was not fully ${opt_mode}d for the requested resources on at least one system in the cluster. | Cause: Maintenance mode was not set properly on at least one or more systems Action: Check /var/log/lifekeeper.log for troubleshooting. |
010173 | ERROR | LifeKeeper is not running on system $me. Unable to ${opt_mode} maintenance mode. Aborting. | Cause: LK is not running on the system while trying to enable or disable maintenance mode. Action: run lcdstatus for more information about the status of Lifekeeper. |
010179 | ERROR | Maintenance mode action \’${opt_mode}\’ did not complete successfully for resource $tag on system $me. | Cause: The maintenance mode action did not complete successfully for the given resource. Action: Check /var/log/lifekeeper.log for troubleshooting. |
010181 | ERROR | An error occurred while attempting maintenance mode action \’${opt_mode}\’ on system $me for resources: @{local_hier_tags}. | Cause: An error occurred while attempting maintenance mode action. Action: Check /var/log/lifekeeper.log for troubleshooting. |
010187 | ERROR | Resource $tag has not been extended to system $sys. | Cause: The system was not in the resource's equivalency list. Action: This can have many causes; check /var/log/lifekeeper.log for troubleshooting. |
010196 | ERROR | lock acquisition deadlock | Cause: lock acquisition deadlock occurred Action: Check /var/log/lifekeeper.log for troubleshooting. |
010222 | ERROR | scsifree(%s): LKSCSI_Release(%s) unsuccessful | Cause: A SCSI device that appeared to be reserved was not released as expected. Action: Check the logs for related errors and try to resolve any reported problems. This error may be benign if the system is functioning properly. |
010231 | ERROR | scsiplock(%s): reserve failed. | Cause: A reservation on a SCSI device could not be acquired. Action: Check the logs for related errors and try to resolve the reported problem. |
010250 | ERROR | Failed to exec command ‘%s’ | Cause: The "lklogmsg" tool failed to execute a sub-command {command}. Action: Check the logs for related errors and try to resolve any reported problems. Verify that the sub-command exists and is a valid command or program. If this message happens during normal operation, contact Support. |
010256 | ERROR | scsi_tur(%s): open failed. | |
010260 | ERROR | scsi_tur(%s): test unit ready failed. | |
010402 | EMERG | local recovery failure on resource $opts{‘N’}, trigger VMware HA… | Cause: When in LifeKeeper Single Server Protection operation, a resource could not be recovered and VMware-HA is about to be triggered to handle the failure (if VMware-HA is enabled). Action: No action is required. VMware should handle the failure. |
010413 | ERROR | COMMAND OUTPUT: cat /tmp/err$$ | Cause: This is the output from an "snmptrap" command that may have failed. Action: Check the logs for related errors and try to resolve the reported problem. |
010420 | EMERG | local recovery failure on resource $opts{‘N’}, trigger reboot… | Cause: When in LifeKeeper Single Server Protection operation, a resource could not be recovered and a reboot is about to be triggered to handle the failure. Action: No action is required. |
010440 | ERROR | [$SUBJECT event] mail returned $err | Cause: This indicates a notification email could not be sent via the "mail" command. Action: Check the logs for related errors and try to resolve the reported problem. |
010443 | ERROR | COMMAND OUTPUT: cat /tmp/err$$ | Cause: This is the output from a "mail" command that may have failed. Action: Check the logs for related errors and try to resolve the reported problem. |
010445 | ERROR | COMMAND OUTPUT: cat /tmp/err$$ | Cause: This is the output from a "mail" command that may have failed. Action: Check the logs for related errors and try to resolve the reported problem. |
010463 | ERROR | LifeKeeper: name of machine is not specified, ARGS=$ARGS | Cause: Invalid arguments were specified for the "comm_down" event. Action: Check your LifeKeeper configuration and retry the operation. |
010471 | ERROR | COMM_DOWN: Attempt to obtain local comm_down lock flag failed | Cause: During the handling of a communication failure with another node, a local lock could not be acquired. This will likely stop a failover from proceeding. Action: Check the logs for related errors and try to resolve the reported problem. If failovers are not taking place properly, contact Support. |
010482 | ERROR | LifeKeeper: name of machine is not specified, ARGS=$ARGS | Cause: Invalid arguments were specified for the "comm_up" event. Action: Check your LifeKeeper configuration and retry the operation. |
010484 | WARN | flg_list -d $MACH check timed-out ($delay seconds). | Cause: The "flg_list" command reached its timeout value of {delay} seconds. |
010487 | WARN | flg_list -d $MACH check timed-out, unintended switchovers may occur. | Cause: "flg_list" command reached its timeout value. Action: Switch back the resource tree if unintended switchover occurs. |
010492 | WARN | $m | Cause: One of the other servers appeared dead to this server {server}, but the witness servers did not agree. Action: Verify that the other server is actually down before switching over the resource manually. |
010494 | ERROR | LifeKeeper: COMM_UP to machine $MACH completed with errors. | Cause: An unexpected failure occurred during "COMM_UP" event. Action: Check adjacent log messages for more details. |
010503 | ERROR | lcdrecover hung or returned error, attempting kill of process $FPID | Cause: The "lcdrecover" command hung or returned an error. |
010506 | ERROR | Intelligent Switchback Check Failed | Cause: The "lcdrecover" operation failed 5 times. Action: Switch over the resource tree manually. |
010535 | ERROR | LifeKeeper: name of machine is not specified, ARGS=$ARGS | |
010536 | ERROR | Resource initialization is in-progress, comm_up processing for $MACH will exit without bringing resources in-service. | |
010600 | ERROR | removing hierarchy remnants | |
010627 | WARN | Equivalency Trim: does not have a full complement of equivalencies. Hierarchy will be unextended from | Cause: An error occurred during extend. Action: Check the LifeKeeper log. |
010629 | WARN | Your hierarchy exists on only one server. Your application has no protection until you extend it to at least one other server. | Cause: Hierarchy is on one server. Action: Extend the hierarchy to a second server. |
010712 | ERROR | Unextend hierarchy failed | Cause: A resource hierarchy failed to be unextended from a server. Action: Check the logs for related errors and try to resolve the reported problem. |
010746 | ERROR | $ERRMSG Target machine \"$TARGET_MACH\" does not have an active LifeKeeper communication path to machine \"$aMach\" in the hierarchy. | Cause: A hierarchy cannot be unextended because the target server does not have adequate communication with the other servers in the cluster. Action: Check the logs for related errors and try to resolve the reported problem. Ensure that all servers have communication paths to each other. |
010763 | ERROR | lock failed | |
010800 | ERROR | sub_url is not specified. | |
010804 | ERROR | AWS_IMDS_VERSION in /etc/default/LifeKeeper should be 1 or 2. \"$version\" is an illegal value. | |
010807 | WARN | AWS_IMDS_TTL_SEC in /etc/default/LifeKeeper is an illegal value (\"$ttl_sec\"). It is dealt with as $default_ttl_sec(sec). | |
010808 | WARN | AWS_IMDS_TTL_SEC in /etc/default/LifeKeeper is out of range (1 to 21600) value (\"$ttl_sec\"). It is dealt with as $default_ttl_sec(sec). | |
010812 | ERROR | IMDS v2 failed to retrieve a secret token. | |
010815 | ERROR | curl call failed. (err=$err)(output=@response | |
010816 | ERROR | curl response error. (HTTP status=[$http_status], response=[@response] | |
011000 | ERROR | appremote: unknown command type %d(‘%c’)\n | Cause: Internal error. Action: Try restarting the product. |
011001 | ERROR | depremote: unknown command type %d(‘%c’)\n | Cause: Internal error. Action: Try restarting the product. |
011002 | ERROR | eqvremote: unknown command type %d(‘%c’)\n | Cause: Internal error. Action: Try restarting the product. |
011003 | ERROR | flgremote: unknown command type %d(‘%c’)\n | Cause: Internal error. Action: Try restarting the product. |
011004 | WARN | Illegal creation of resource | Cause: This will not occur under normal circumstances. |
011009 | ERROR | insremote: unknown change field command type %d(‘%c’)\n | Cause: Internal error. Action: Try restarting the product. |
011010 | ERROR | insremote: unknown command type %d(‘%c’)\n | Cause: Internal error. Action: Try restarting the product. |
011011 | FATAL | %s | Cause: LifeKeeper could not get IPC key. Action: Check adjacent log messages for more details. |
011012 | FATAL | semget(%s,%c | Cause: LifeKeeper could not get semaphore set id. Action: Check adjacent log messages for more details. |
011013 | FATAL | shmget(%s,%c | Cause: System could not allocate a shared memory segment. Action: Check adjacent log messages for more details. |
011014 | FATAL | prefix_lkroot("out" | Cause: A system error has occurred while accessing /opt/LifeKeeper/out. Action: Determine why /opt/LifeKeeper/out is not accessible. |
011015 | ERROR | DEMO_UPGRADE_MSG | Cause: You are running a demo license. Action: Contact Support to obtain a new license. |
011016 | ERROR | lic_single_node_msg | Cause: You have a license for LifeKeeper Single Server Protection but you do not have LifeKeeper Single Server Protection installed. Action: Either install LifeKeeper Single Server Protection or obtain a license that matches the product you are running. |
011018 | ERROR | lic_init_fail_msg, lc_errstring(lm_job | Cause: License manager initialization failed. Action: Check adjacent log messages for more details. |
011020 | EMERG | lic_init_fail_msg, lc_errstring(lm_job | Cause: License manager initialization failed. Action: Check adjacent log messages for more details. |
011021 | EMERG | lic_error_msg, lc_errstring(lm_job | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011022 | EMERG | lic_error_msg, lc_errstring(lm_job | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011023 | EMERG | lic_no_rest_suite, "" | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011024 | EMERG | lic_error_msg, lc_errstring(lm_job | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011025 | EMERG | lic_no_license, "" | Cause: LifeKeeper could not find valid license keys. Action: Ensure license keys are valid for the server and retry the operation. |
011026 | EMERG | lic_error_msg, lc_errstring(lm_job | Cause: There is an unknown problem with your license. Action: Contact Support to obtain a new license. |
011027 | EMERG | lic_no_license, "" | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011028 | ERROR | lang_error_msg | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011029 | FATAL | can’t set reply system | Cause: A message failed to be sent. Action: Check adjacent log messages for more details. This may be a temporary error. |
011030 | FATAL | can’t set reply mailbox | Cause: A message failed to be sent. Action: Check adjacent log messages for more details. This may be a temporary error. |
011031 | ERROR | Failure reading output of ‘%s’ on behalf of %s | Cause: A system error has occurred while accessing temporary file /tmp/OUT.{pid}. Action: Determine why /tmp/OUT.{pid} is not accessible. |
011032 | ERROR | Failure reading output of ‘%s’ | Cause: A system error has occurred while accessing temporary file /tmp/ERR.{pid}. Action: Determine why /tmp/ERR.{pid} is not accessible. |
011033 | ERROR | event \"%s,%s\" already posted for resource with id \"%s\" | Cause: This message is for information only. |
011034 | ERROR | no resource has id of \"%s\" | Cause: LifeKeeper could not find the {id} resource. Action: Verify the parameters and retry the "sendevent" operation. |
011044 | ERROR | flagcleanup:fopen(%s | Cause: A system error has occurred while reading /opt/LifeKeeper/config/flg. Action: Determine why /opt/LifeKeeper/config/flg is not readable. |
011045 | ERROR | flagcleanup:fopen(%s | Cause: A system error has occurred while writing /opt/LifeKeeper/config/flg. Action: Determine why /opt/LifeKeeper/config/flg is not writable. |
011046 | ERROR | flagcleanup:fputs(%s | Cause: A system error has occurred while writing /opt/LifeKeeper/config/flg. Action: Determine why /opt/LifeKeeper/config/flg is not writable. |
011047 | ERROR | flagcleanup:rename(%s,%s | Cause: A system error has occurred while renaming /opt/LifeKeeper/config/.flg to /opt/LifeKeeper/config/flg. Action: Determine why /opt/LifeKeeper/config/flg was not able to be renamed. |
011048 | ERROR | flagcleanup:chmod(%s | Cause: A system error has occurred while changing permissions in /opt/LifeKeeper/config/flg. Action: Determine why LifeKeeper could not change permissions in /opt/LifeKeeper/config/flg. |
011049 | ERROR | License check failed with error code %d | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011051 | ERROR | lcdinit: clearing Disk Reserve file failed | Cause: A system error has occurred while writing /opt/LifeKeeper/subsys/scsi/resources/disk/disk.reserve. Action: Determine why /opt/LifeKeeper/subsys/scsi/resources/disk/disk.reserve is not writable. |
011052 | FATAL | malloc() failed | Cause: The system could not allocate memory for LifeKeeper. Action: Increase the process limit for the data segment. |
011053 | FATAL | lcm_is_unavail | Cause: A system error has occurred while writing /tmp/LK_IS_UNAVAIL. Action: Determine why /tmp/LK_IS_UNAVAIL is not writable. |
011054 | FATAL | lk_is_unavail | Cause: A system error has occurred while writing /opt/LifeKeeper/config/LK_IS_ON. Action: Determine why /opt/LifeKeeper/config/LK_IS_ON is not writable. |
011055 | FATAL | usr_alarm_config_LK_IS_ON | Cause: A system error has occurred while writing /tmp/LCM_IS_UNAVAIL. Action: Determine why /tmp/LCM_IS_UNAVAIL is not writable. |
011056 | ERROR | License check failed with error code %d | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011057 | ERROR | lcdremote: unknown command type %d(‘%c’)\n | Cause: A message failed to be received. Action: Check adjacent log messages for more details. This may be a temporary error, but if this error continues and servers cannot communicate, verify the network configuration on the servers. |
011059 | FATAL | Could not write to: %s | Cause: A system error has occurred while accessing /opt/LifeKeeper/config/LK_START_TIME. Action: Determine why /opt/LifeKeeper/config/LK_START_TIME is not accessible. |
011060 | FATAL | received NULL message | Cause: A message failed to be received. Action: Check adjacent log messages for more details. This may be a temporary error, but if this error continues and servers cannot communicate, verify the network configuration on the servers. |
011061 | ERROR | unknown data type %d(‘%c’) on machine \"%s\"\n | Cause: A message failed to be received. Action: Check adjacent log messages for more details. This may be a temporary error, but if this error continues and servers cannot communicate, verify the network configuration on the servers. |
011062 | WARN | LifeKeeper shutdown in progress. Unable to perform failover recovery processing for %s\n | Cause: LifeKeeper was unable to fail over the given resource during shutdown. Action: Switch over the resource tree to other server manually. |
011063 | WARN | LifeKeeper resource initialization in progress. Unable to perform failover recovery processing for %s\n | Cause: LifeKeeper was unable to fail over the given resource during start up. Action: Switch over the resource tree manually after LifeKeeper starts up. |
011068 | ERROR | ERROR on command %s | Cause: An error occurred while running the "rlslocks" command. Action: Check adjacent messages for more details. |
011070 | ERROR | ERROR on command %s | Cause: An error occurred while running the "getlocks" command. Action: Check adjacent log messages for more details. |
011080 | FATAL | out of memory | Cause: Internal error. Action: Increase the process limit for the data segment. |
011081 | FATAL | Failed to ask ksh to run: %s | Cause: A system error has occurred while invoking ksh. Action: Make sure the pdksh (v8.0 and earlier) or the steeleye-pdksh (v8.1 and later) package is installed. |
011082 | ERROR | Failed to remove: %s | Cause: A system error has occurred while removing /tmp/LCM_IS_UNAVAIL. Action: Determine why /tmp/LCM_IS_UNAVAIL is not removable. |
011083 | ERROR | Failed to remove: %s | Cause: A system error has occurred while trying to unlink /tmp/LK_IS_UNAVAIL. Action: Determine why /tmp/LK_IS_UNAVAIL is not removable. |
011084 | FATAL | Failed to generate an IPC key based on: %s | Cause: A system error has occurred while accessing /opt/LifeKeeper. Action: Determine why /opt/LifeKeeper is not accessible. |
011085 | ERROR | semget(%s,%c) failed | Cause: A system error has occurred while removing a semaphore. Action: Try to remove the semaphore manually. |
011086 | ERROR | shmget(%s,%c) failed | Cause: A system error has occurred while removing a shared memory segment. Action: Try to remove the shared memory segment manually. |
011087 | ERROR | semctl(IPC_RMID) failed | Cause: A system error has occurred while removing a semaphore. Action: Try to remove the semaphore manually. |
011088 | ERROR | shmctl(IPC_RMID) failed | Cause: A system error has occurred while removing a shared memory segment. Action: Try to remove the shared memory segment manually. |
011089 | FATAL | Execution of lcdstatus on remote system <%s> failed\n | Cause: The remote {node} is down, inaccessible via the network or some other system problem occurred on the remote node. Action: Bring the remote node back online, or check adjacent messages for additional information, or check the logs on the remote node for additional information. |
011090 | FATAL | | Cause: Internal error. Action: Perform the steps listed in the message text. |
011091 | WARN | | Cause: This will not occur under normal circumstances. |
011092 | FATAL | | Cause: There is a problem with your license. Action: Perform the steps listed in the message text. |
011093 | FATAL | | Cause: There is a problem with your license. Action: Perform the steps listed in the message text. |
011094 | FATAL | | Cause: There is a problem with your license. Action: Perform the steps listed in the message text. |
011095 | FATAL | | Cause: There is a problem with your license. Action: Perform the steps listed in the message text. |
011096 | FATAL | | Cause: There is a problem with your license. Action: Perform the steps listed in the message text. |
011097 | FATAL | | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011098 | FATAL | | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011099 | FATAL | | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011100 | FATAL | | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011101 | FATAL | | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011102 | FATAL | | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011103 | FATAL | | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011104 | FATAL | | Cause: There is a problem with your license. Action: Contact Support to obtain a new license. |
011105 | FATAL | | Cause: There is a problem with your license. Action: Perform the steps listed in the message text. |
011111 | ERROR | action \"%s\" on resource with tag \"%s\" has failed | Cause: The {action} for resource {tag} has failed. Action: See adjacent error messages for further details. |
011112 | ERROR | | Cause: LifeKeeper could not find the network device. Action: Check your LifeKeeper configuration. |
011113 | ERROR | netremote: unknown subcommand type %d(‘%c’)\n | Cause: Internal error. Action: Try restarting the product. |
011114 | ERROR | netremote: unknown command type %d(‘%c’)\n | Cause: Internal error. Action: Try restarting the product. |
011117 | ERROR | sysremote: system \"%s\" not found on \"%s\" | Cause: An invalid system name was provided. Action: Recheck the system name and rerun the command. |
011118 | ERROR | sysremote: unknown subcommand type %d(‘%c’)\n | Cause: Internal error. Action: Try restarting the product. |
011119 | ERROR | sysremote: unknown command type %d(‘%c’)\n | Cause: Internal error. Action: Try restarting the product. |
011120 | ERROR | typremote: unknown command type %d(‘%c’)\n | Cause: Internal error. Action: Try restarting the product. |
011129 | ERROR | Failure during run of ‘%s’ on behalf of %s | Cause: Command execution failed. Action: See message details to determine the problem. |
011130 | ERROR | %s | Cause: The command {command} produced unexpected output. Action: Action should be determined by the content of adjacent error messages. |
011131 | EMERG | demo_update_msg | Cause: There is a problem with your demo license. Action: Contact Support to obtain a new license. |
011132 | EMERG | demo_tamper_msg | Cause: You have a demo license and clock tampering has been detected. Action: Contact Support to obtain a new license. |
011133 | EMERG | demo_tamper_msg | Cause: You have a demo license and clock tampering has been detected. Action: Contact Support to obtain a new license. |
011134 | EMERG | demo_expire_msg | Cause: The demo license for this product has expired. Action: Contact Support to obtain a new license. |
011135 | EMERG | demo_tamper_msg | Cause: You have a demo license and clock tampering has been detected. Action: Contact Support to obtain a new license. |
011136 | EMERG | buf | Cause: You are running a demo license. Action: Contact Support to obtain a new license. |
011138 | EMERG | buf | Cause: You are running a demo license. Action: Contact Support to obtain a new license. |
011142 | WARN | LifeKeeper Recovery Kit %s license key NOT FOUND | Cause: An Application Recovery Kit license for {kit} was not found. Action: Contact Support to obtain a new license. |
011150 | ERROR | COMMAND OUTPUT: %s | Cause: The command "eventslcm" produced unexpected output. Action: Check the logs for related errors and try to resolve the reported problem. |
011151 | EMERG | &localebuf3 | Cause: This version of the LifeKeeper core package is restricted to being used within the territories of the People’s Republic of China or Japan. |
011152 | EMERG | Localized license failure | Cause: There was a mismatch between your locale and the locale for which the product license was created. Action: Contact Support to obtain a new license which matches your locale. |
011154 | EMERG | Single Node flag check failed. | Cause: You have a license for LifeKeeper Single Server Protection but you do not have LifeKeeper Single Server Protection installed. Action: Either install LifeKeeper Single Server Protection or obtain a license that matches the product you are running. |
011155 | EMERG | lic_master_exp_msg, "" | Cause: Your license key for this product has expired. Action: Contact Support to obtain a new license. |
011162 | EMERG | lic_restricted_exp_msg, "" | Cause: Your license key for this product has expired. Action: Contact Support to obtain a new license. |
011163 | EMERG | Single Node license check failed | Cause: You have a license for LifeKeeper Single Server Protection but you do not have LifeKeeper Single Server Protection installed. Action: Either install LifeKeeper Single Server Protection or obtain a license that matches the product you are running. |
011164 | EMERG | demo_expire_msg, DEMO_UPGRADE_MSG | Cause: Your license key for this product has expired. Action: Please contact Support to obtain a permanent license key for your product. |
011165 | ERROR | LifeKeeper initialize timed out in tag \"%s\" | |
015000 | ERROR | COMMAND OUTPUT: /opt/LifeKeeper/sbin/chpst | Cause: An error has occurred with the "steeleye-lighttpd" process. Specific details of the error are included in the actual log message. Action: Correct the configuration and "steeleye-lighttpd" will automatically be restarted. |
103001 | ERROR | LifeKeeper has detected an error while trying to determine the node number(s) of the DB partition server(s) for the instance | Cause: The db2nodes.cfg does not contain any server names. Action: Ensure the db2nodes.cfg is valid. |
103002 | ERROR | LifeKeeper was unable to get the version for the requested instance "%s" | Cause: "db2level" command did not return DB2 version. Action: Check your DB2 configuration. |
103003 | ERROR | LifeKeeper has detected an error while trying to determine the node number(s) of the DB partition server(s) for the instance | Cause: The DB2 Application Recovery Kit was unable to find any nodes for the DB2 instance. Action: Check your DB2 configuration. |
103004 | ERROR | Unable to get the information for resource "%s" | Cause: Failed to get resource information. Action: Check your LifeKeeper configuration. |
103005 | ERROR | Unable to get the information for resource "%s" | Cause: Failed to get resource information. Action: Check your LifeKeeper configuration. |
103006 | ERROR | Unable to get the instance information for resource "%s" | Cause: Failed to get the instance information. Action: Check your LifeKeeper configuration. |
103007 | ERROR | Unable to get the instance home directory information for resource "%s" | Cause: Failed to get the instance home directory path. Action: Check your LifeKeeper configuration. |
103008 | ERROR | Unable to get the instance type information for resource "%s" | Cause: The DB2 Application Recovery Kit found an invalid instance type. Action: Check your LifeKeeper configuration. |
103009 | ERROR | LifeKeeper has encountered an error while trying to get the database configuration parameters for database \"$DB\" | Cause: There was an unexpected error running "db2 get db cfg for $DB" command. Action: Check the logs for related errors and try to resolve the reported problem. |
103012 | ERROR | LifeKeeper was unable to start the database server for instance "%s" | Cause: The requested startup of the DB2 instance failed. Action: Check the logs for related errors and try to resolve the reported problem. Correct any reported errors before retrying the "restore" operation. |
103013 | ERROR | LifeKeeper was unable to start the database server for instance "%s" | Cause: The requested startup of the DB2 instance failed. Action: Check the logs for related errors and try to resolve the reported problem. Correct any reported errors before retrying the "restore" operation. |
103015 | ERROR | An entry for the home directory "%s" of instance "%s" does not exist in "/etc/fstab" | Cause: The home directory of a Multiple Partition database instance must exist in "/etc/fstab". Action: Ensure the home directory is listed in "/etc/fstab". |
103016 | ERROR | LifeKeeper was unable to mount the home directory for the DB2 instance "%s" | Cause: Failed to mount the home directory of the Multiple Partition database instance. Action: Ensure the home directory is mounted and retry the operation. |
103017 | ERROR | Unable to get the instance nodes information for resource "%s" | Cause: Failed to get the instance nodes. Action: Check your LifeKeeper configuration. |
103018 | ERROR | LifeKeeper was unable to start database partition server "%s" for instance "%s" | Cause: The requested startup of the DB2 instance failed. Action: Check the logs for related errors and try to resolve the reported problem. Correct any reported errors before retrying the "restore" operation. |
103020 | ERROR | LifeKeeper was unable to stop the database server for instance "%s" | Cause: The requested shutdown of the DB2 instance failed. Action: Check the logs for related errors and try to resolve the reported problem. Correct any reported errors before retrying the "remove" operation. |
103021 | ERROR | LifeKeeper was unable to stop the database server for instance "%s" | Cause: The requested shutdown of the DB2 instance failed. Action: Check the logs for related errors and try to resolve the reported problem. Correct any reported errors before retrying the "remove" operation. |
103023 | ERROR | Unable to get the instance nodes information for resource "%s" | Cause: Failed to get the instance nodes. Action: Check your LifeKeeper configuration. |
103024 | ERROR | LifeKeeper was unable to stop database partition server "%s" for instance "%s" | Cause: The requested shutdown of the DB2 instance failed. Action: Check the logs for related errors and try to resolve the reported problem. Correct any reported errors before retrying the "remove" operation. |
103026 | ERROR | Unable to get the instance nodes information for resource "%s" | Cause: Failed to get the instance nodes. Action: Check your LifeKeeper configuration. |
103027 | FATAL | The argument for the DB2 instance is empty | Cause: Invalid parameters were specified for the create operation. Action: Verify the parameters and retry the operation. |
103028 | FATAL | Unable to determine the DB2 instance home directory | Cause: The DB2 Application Recovery Kit was unable to determine the DB2 instance home directory. Action: Ensure the instance owner name is the same as the instance name and retry the operation. |
103029 | FATAL | Unable to determine the DB2 instance type | Cause: The DB2 Application Recovery Kit was unable to determine the DB2 instance type. Action: Check your DB2 configuration. |
103030 | FATAL | LifeKeeper has detected an error while trying to determine the node number(s) of the DB partition server(s) for the instance | Cause: The DB2 Application Recovery Kit was unable to find any nodes for the DB2 instance. Action: Check your DB2 configuration. |
103031 | ERROR | The path "%s" is not on a shared filesystem | Cause: The instance home directory must be on a shared filesystem. Action: Ensure the path is on a shared filesystem and retry the create operation. |
103032 | ERROR | LifeKeeper was unable to get the DB tablespace containers for instance "%s" or the log path for one of its databases | Cause: LifeKeeper could not determine the location of the database table space containers or verify that they are located in a path which is on a mounted filesystem. Action: Check the logs for related errors and try to resolve the reported problem. Correct any reported errors before retrying the "create" operation. |
103033 | ERROR | The path "%s" is not on a shared filesystem | Cause: The database table space container path must be on a shared filesystem. Action: Ensure the database table space container is on a shared filesystem and retry the operation. |
103034 | ERROR | A DB2 Hierarchy already exists for instance "%s" | Cause: An attempt was made to protect the DB2 instance that is already under LifeKeeper protection. Action: You must select a different DB2 instance for LifeKeeper protection. |
103035 | ERROR | The file system resource "%s" is not in-service | Cause: The file system which the DB2 resource depends on should be in service. Action: Ensure the file system resource is in service and retry the "create" operation. |
103036 | ERROR | Unable to create the hierarchy for raw device "%s" | Cause: LifeKeeper was unable to create the resource {raw device}. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the "create" operation. |
103037 | ERROR | A RAW hierarchy does not exist for the tag "%s" | Cause: LifeKeeper was unable to find the raw resource {tag}. Action: Check your LifeKeeper configuration. |
103038 | ERROR | LifeKeeper was unable to create a dependency between the DB2 hierarchy "%s" and the Raw hierarchy "%s" | Cause: The requested dependency creation between the parent DB2 resource and the child Raw resource failed. Action: Check adjacent log messages for further details and related messages. Correct any reported errors before retrying the "create" operation. |
103039 | ERROR | LifeKeeper could not disable the automatic startup feature of DB2 instance "%s" | Cause: An unexpected error occurred while attempting to update the DB2 setting. Action: The DB2AUTOSTART will need to be updated manually to turn off the automatic startup of the instance at system boot. |
103040 | ERROR | DB2 version "%s" is not installed on server "%s" | Cause: LifeKeeper could not find DB2 installed location. Action: Check your DB2 configuration. |
103041 | ERROR | The instance owner "%s" does not exist on target server "%s" | Cause: An attempt to retrieve the DB2 instance owner from template server during a "canextend" or "extend" operation failed. Action: Verify the DB2 instance owner exists on the specified server. If the user does not exist, it should be created with the same uid and gid on all servers in the cluster. |
103042 | ERROR | The instance owner "%s" uids are different on target server "%s" and template server "%s" | Cause: The user id on the target server {target server} for the DB2 instance owner {user} does not match the value of the user {user} on the template server {template server}. Action: The user ids for the DB2 instance owner {user} must match on all servers in the cluster. The user id mismatch should be corrected manually on all servers before retrying the "canextend" operation. |
103043 | ERROR | The instance owner "%s" gids are different on target server "%s" and template server "%s" | Cause: The group id on the target server {target server} for the DB2 instance owner {user} does not match the value of the user {user} on the template server {template server}. Action: The group ids for the DB2 instance owner {user} must match on all servers in the cluster. The group id mismatch should be corrected manually on all servers before retrying the "canextend" operation. |
103044 | ERROR | The instance owner "%s" home directories are different on target server "%s" and template server "%s" | Cause: The home directory location of the user {user} on the target server {target server} does not match the DB2 instance owner’s home directory on the template server {template server}. Action: The home directory location of the DB2 instance owner {user} must match on all servers in the cluster. The location mismatch should be corrected manually on all servers before retrying the "canextend" operation. |
103045 | ERROR | LifeKeeper was unable to get the DB2 "SVCENAME" parameter for the DB2 instance | Cause: There was an unexpected error running "db2 get dbm cfg" command. Action: Check your DB2 configuration. |
103046 | ERROR | Unable to get the value of the DB2 "SVCENAME" parameter for the DB2 instance %s. | Cause: The DB2 "SVCENAME" parameter is set to null. Action: Check your DB2 configuration. |
103047 | ERROR | LifeKeeper was unable to get the contents of the "/etc/services" file on the server "%s" | Cause: "/etc/services" on the template server does not contain the service names for the DB2 instance. Action: The service names in "/etc/services" for the DB2 instance must match on all servers in the cluster. The service names mismatch should be corrected manually on all servers before retrying the "canextend" operation. |
103048 | ERROR | LifeKeeper was unable to get the contents of the "/etc/services" file on the server "%s" | Cause: "/etc/services" on the target server does not contain the service names for the DB2 instance. Action: The service names in "/etc/services" for the DB2 instance must match on all servers in the cluster. The service names mismatch should be corrected manually on all servers before retrying the "canextend" operation. |
103049 | ERROR | The "/etc/services" entries for the instance "%s" are different on target server "%s" and template server "%s" | Cause: The "/etc/services" entries for the instance are mismatched. Action: The service names in "/etc/services" for the DB2 instance must match on all servers in the cluster. The service names mismatch should be corrected manually on all servers before retrying the "canextend" operation. |
103050 | ERROR | The home directory "%s" for instance "%s" is not mounted on server "%s" | Cause: LifeKeeper could not find db2nodes.cfg for the Multiple Partition instance. Action: Ensure the home directory is mounted and retry the operation. |
103051 | ERROR | Error getting resource information for resource "%s" on server "%s" | Cause: Failed to get resource information from the template server. Action: Check your LifeKeeper configuration. |
103052 | ERROR | LifeKeeper was unable to add instance "%s" and/or its variables to the DB2 registry | Cause: There was an unexpected error running "db2iset" command. Action: Check the logs for related errors and try to resolve the reported problem. |
103053 | ERROR | Usage: %s instance | |
103054 | ERROR | Unable to determine the DB2 instance type | Cause: The DB2 Application Recovery Kit was unable to determine the DB2 instance type. Action: Check your DB2 configuration. |
103055 | ERROR | LifeKeeper has detected an error while trying to determine the node number(s) of the DB partition server(s) for the instance | Cause: The DB2 Application Recovery Kit was unable to find any nodes for the DB2 instance. Action: Check your DB2 configuration. |
103056 | ERROR | Usage: %s instance | |
103058 | ERROR | Usage: %s instance | |
103059 | ERROR | Usage: %s instance | |
103060 | ERROR | Unable to determine the DB2 instance home directory | Cause: The DB2 Application Recovery Kit was unable to determine the DB2 instance home directory. Action: Ensure the instance owner name is the same as the instance name and retry the operation. |
103061 | ERROR | Unable to determine the DB2 instance type | Cause: The DB2 Application Recovery Kit was unable to determine the DB2 instance type. Action: Check your DB2 configuration. |
103062 | ERROR | LifeKeeper has detected an error while trying to determine the node number(s) of the DB partition server(s) for the instance | Cause: The DB2 Application Recovery Kit was unable to find the node for the DB2 instance. Action: Check your DB2 configuration. |
103063 | ERROR | Unable to determine the DB2 install path | Cause: The DB2 Application Recovery Kit was unable to find DB2 for the instance. Action: Check your DB2 configuration. |
103064 | ERROR | Usage: nodes -t tag -a add_nodenum | nodes -t tag -d delete_nodenum | nodes -t tag -p | |
103065 | ERROR | Invalid input provided for "%s" utility operation, characters are not allowed. | Cause: Invalid parameters were specified for the "nodes" command. Action: Verify the parameters and retry the operation. |
103066 | ERROR | Unable to get the information for resource "%s" | Cause: LifeKeeper was unable to find the resource {tag}. Action: Verify the parameters and retry the operation. |
103067 | ERROR | The DB2 instance "%s" is not a EEE or Multiple Partition instance | Cause: The resource {tag} is a single-partition instance. Action: Verify the parameters and retry the operation. |
103069 | ERROR | Node "%s" is already protected by this hierarchy | Cause: Invalid parameters were specified for the "nodes" command. Action: Verify the parameters and retry the operation. |
103070 | ERROR | Node number "%s" is the last remaining node protected by resource "%s". Deleting all nodes is not allowed. | Cause: Invalid parameters were specified for the "nodes" command. Action: Verify the parameters and retry the operation. |
103071 | ERROR | LifeKeeper is unable to get the equivalent instance for resource "%s" | Cause: There was an unexpected error running "nodes" command. Action: Check the logs for related errors and try to resolve the reported problem. |
103072 | ERROR | Unable to set NodesInfo for resource "%s" on "%s" | Cause: There was an unexpected error running "nodes" command. Action: Check the logs for related errors and try to resolve the reported problem. |
103073 | ERROR | Unable to set NodesInfo for resource "%s" on "%s" | Cause: There was an unexpected error running "nodes" command. Action: Check the logs for related errors and try to resolve the reported problem. |
103074 | ERROR | Usage: %s instance | |
103075 | ERROR | Usage: %s instance | |
103076 | ERROR | Unable to determine the DB2 instance type | Cause: The DB2 Application Recovery Kit was unable to determine the DB2 instance type. Action: Check your DB2 configuration. |
103077 | ERROR | Unable to determine the DB2 instance home directory | Cause: The DB2 Application Recovery Kit was unable to determine the DB2 instance home directory. Action: Ensure the instance owner name is the same as the instance name and retry the operation. |
103078 | ERROR | The database server is not running for instance "%s" | Cause: A process check for the DB2 instance did not find any processes running. Action: The DB2 instance must be started. |
103079 | ERROR | LifeKeeper has detected an error while trying to determine the node number(s) of the DB partition server(s) for the instance | Cause: The DB2 Application Recovery Kit was unable to find any nodes for the DB2 instance. Action: Check your DB2 configuration. |
103080 | ERROR | One or more of the database partition servers for instance "%s" is down | Cause: All database partition servers should be running. Action: Ensure all database partition servers are running and retry the operation. |
103081 | ERROR | DB2 local recovery detected another recovery process in progress for "%s" and will exit. | Cause: DB2 local recovery detected that another recovery process is already in progress. |
103082 | ERROR | Failed to create flag "%s" | Cause: An unexpected error occurred attempting to create a flag for controlling DB2 local recovery processing. Action: Check the adjacent log messages for further details and related messages. Correct any reported errors. |
103083 | ERROR | Failed to remove flag "%s" | Cause: An unexpected error occurred attempting to remove a flag for controlling DB2 local recovery processing. Action: Check the adjacent log messages for further details and related messages. Correct any reported errors. |
103084 | ERROR | Unable to determine the DB2 instance \"$Instance\" home directory | Cause: The DB2 Application Recovery Kit was unable to determine the DB2 instance home directory. Action: Ensure the instance owner name is the same as the instance name and retry the operation. |
104002 | FATAL | $msg | Cause: This message indicates an internal software error. Action: The stack trace indicates the source of the error. |
104003 | FATAL | $self->Val('Tag') . " is not an SDR resource" | Cause: A data replication action was attempted on a non-data-replication resource. Action: Check the logs for related errors and try to resolve the reported problem. |
104010 | ERROR | $self->{'md'}: bitmap merge failed, $action | Cause: The bitmap merge operation has failed. Action: The target server may have the mirror and/or protected filesystem mounted, or the bitmap file may be missing on the target. Check the target server. |
104022 | ERROR | $argv1: mdadm failed ($ret | Cause: The "mdadm" command has failed to add a device into the mirror. Action: This is usually a temporary condition. |
104023 | ERROR | $_ | Cause: The message contains the output of the "mdadm" command. |
104025 | ERROR | failed to spawn monitor | Cause: The system failed to start the 'mdadm -F' monitor process. This should not happen under normal circumstances. Action: Reboot the system to ensure that any potential conflicts are resolved. |
104026 | ERROR | cannot create $md | Cause: The mirror device could not be created. Action: Ensure the device is not already in use and that all other parameters for the mirror creation are correct. |
104027 | ERROR | $_ | Cause: This message contains the "mdadm" command output. |
104035 | ERROR | Too many failures. Aborting resync of $md | Cause: The device was busy for an abnormally long period of time. Action: Reboot the system to be sure that the device is no longer busy. |
104036 | ERROR | Failed to start nbd-server on $target (error $port | Cause: The nbd-server process could not be started on the target server. Action: Ensure that the target disk device is available and that its Device ID has not changed. |
104037 | ERROR | Failed to start compression (error $port | Cause: The system was unable to start the ‘balance’ tunnel process or there was a network problem. Action: Ensure that the network is operating properly and that TCP ports in the range 10000-10512 are opened and unused. Ensure that the software is installed properly on all systems. |
104038 | ERROR | Failed to start nbd-client on $source (error $ret | Cause: The nbd-client process has failed to start on the source server. Action: Look up the reported errno value and try to resolve the problem reported. For example, an errno value of 110 means "Connection timed out", which may indicate a network or firewall problem. |
104039 | ERROR | Failed to add $nbd to $md on $source | Cause: This is usually a temporary condition. Action: If this error persists, reboot the system to resolve any potential conflicts. |
104045 | ERROR | failed to stop $self->{'md'} | Cause: The mirror device could not be stopped. Action: Ensure that the device is not busy or mounted. Try running "mdadm --stop" manually to stop the device. |
104048 | WARN | failed to kill $proc, pid $pid | Cause: The process could not be signalled. This may indicate that the process has already died. Action: Ensure that the process in question is no longer running. If it is, then reboot the system to clear up the unkillable process. |
104050 | ERROR | Setting $name on $dest failed: $ret. Please try again. | Cause: The system failed to set a ‘mirrorinfo’ file setting. Action: Check the network and systems and retry the mirror setting operation. |
104052 | FATAL | Specified existing mount point "%s" is not mounted | Cause: The specified mount point is no longer mounted. Action: Ensure that the mount point is mounted and retry the operation. |
104055 | ERROR | Failed to set up temporary $type access to data for $self->{'tag'}. Error: $ret | Cause: The filesystem or device was not available on the target server. The mirrored data will not be available on the target server until the mirror is paused and resumed again. Action: Reboot the target server to resolve any potential conflicts. |
104057 | ERROR | Failed to undo temporary access for $self->{'tag'} on $self->{'sys'}. Error: $ret. Please verify that $fsid is not mounted on server $self->{'sys'}. | Cause: The filesystem could not be unmounted on the target server. Action: Ensure that the filesystem and device are not busy on the target server. Reboot the target server to resolve any potential conflicts. |
104066 | FATAL | Cannot get the hardware ID of device "%s" | Cause: A unique ID could not be found for the target disk device. Action: Ensure that the appropriate storage recovery kits are installed on the target server. Ensure that the Device ID of the target disk has not changed. |
104067 | FATAL | Asynchronous writes cannot be enabled without a bitmap file | Cause: An attempt was made to create a mirror with invalid parameters. Action: A bitmap file parameter must be specified or synchronous writes must be specified. |
104070 | FATAL | Unable to extend the mirror "%s" to system "%s" | Cause: The hierarchy extend operation failed. Action: Check the logs for related errors and try to resolve the reported problem. |
104081 | FATAL | Cannot make the %s filesystem on "%s" (%d) | Cause: The "mkfs" command failed. Action: Ensure that the disk device is writable and free of errors and that the filesystem tools for the selected filesystem are installed. |
104082 | FATAL | %s | Cause: This message contains the output of the "mkfs" command. |
104083 | FATAL | Cannot create filesys hierarchy "%s" | Cause: The resource creation failed. Action: Check the logs for related errors and try to resolve the reported problem. |
104086 | ERROR | The "%s_data_corrupt" flag is set in "%s/subsys/scsi/resources/netraid/" on system "%s". To avoid data corruption, LifeKeeper will not restore the resource. | Cause: The data corrupt flag file has been set as a precaution to prevent accidental data corruption. The mirror cannot be restored on this server until the file is removed. Action: If you are sure that the data is valid on the server in question, you can either: 1) remove the file and restore the mirror, or 2) force the mirror online using the LifeKeeper GUI or 'mirror_action force' command. |
104099 | ERROR | Unable to unextend the mirror for resource "%s" from system "%s" | Cause: The hierarchy unextend operation failed. Action: Reboot the target server to resolve any potential conflicts and retry the operation. |
104106 | ERROR | remote 'bitmap -m' command failed on $target->{'sys'}: $ranges | Cause: The bitmap merge command failed on the target server. This may be caused by one of two things: 1) The bitmap file may be missing or corrupted, or 2) the mirror (md) device may be active on the target. Action: Make sure that the mirror and protected filesystem are not active on the target. If the target's bitmap file is missing, pause and resume the mirror to recreate the bitmap file. |
104107 | ERROR | Asynchronous writes cannot be enabled without a bitmap file | Cause: Invalid parameters were specified for the mirror create operation. |
104108 | ERROR | Local Partition not available | Cause: Invalid parameters were specified for the mirror create operation. |
104109 | ERROR | Cannot get the hardware ID of device "%s" | Cause: A unique ID could not be determined for the disk device. Action: Ensure that the appropriate storage recovery kits are installed on the server. Ensure that the Device ID of the disk has not changed. |
104111 | FATAL | Insufficient input parameters for "%s" creation | Cause: Invalid parameters were specified for the mirror create operation. |
104112 | FATAL | Insufficient input parameters for "%s" creation | Cause: Invalid parameters were specified for the mirror create operation. |
104113 | FATAL | Insufficient input parameters for "%s" creation | Cause: Invalid parameters were specified for the mirror create operation. |
104115 | FATAL | Insufficient input parameters for "%s" creation | Cause: Invalid parameters were specified for the mirror create operation. |
104117 | FATAL | Insufficient input parameters for "%s" creation | Cause: Invalid parameters were specified for the mirror create operation. |
104118 | FATAL | Cannot unmount existing Mount Point "%s" | Cause: The mount point is busy. Action: Make sure the filesystem is not busy. Stop any processes or applications that may be accessing the filesystem. |
104119 | FATAL | Invalid data replication resource type requested ("%s" | Cause: An invalid parameter was specified for the mirror create operation. |
104124 | EMERG | WARNING: A temporary communication failure has occurred between systems %s and %s. In order to avoid data corruption, data resynchronization will not occur. MANUAL INTERVENTION IS REQUIRED. In order to initiate data resynchronization, you should take one of the following resources out of service: %s on %s or %s on %s. The resource that is taken out of service will become the mirror target. | Cause: A temporary communication failure (split-brain scenario) has occurred between the source and target servers. Action: Perform the steps listed in the message text. |
104125 | ERROR | failed to start '$cmd $_2 $user_args' on '$_3' | Cause: The specified command failed. Action: Check the logs for related errors and try to resolve the reported problem. |
104126 | ERROR | $_ | Cause: This message contains the output of the command that was reported as failing in message 104125. |
104128 | FATAL | comm path/server not specified | Cause: The netraid.down script was called without specifying the communication path or the server name. This script is called internally and should always receive the proper parameters. Action: Report this error to SIOS support. |
104129 | WARN | | Cause: The replication connection for the mirror is down. Action: Check the network. |
104130 | ERROR | Mirror resize failed on %s (%s). You must successfully complete this operation before using the mirror. Please try again. | Cause: The mirror resize operation has failed to update the mirror metadata on the listed system. Action: You must successfully complete the resize before using the mirror. Re-run mirror_resize (possibly using -f to force the operation if necessary). |
104132 | ERROR | The partition "%s" has an odd number of sectors and system "%s" is running kernel >= 4.12. Mirrors with this configuration will not work correctly with DataKeeper. Please see the SIOS product documentation for information on how to resize the mirror. | Cause: The partition or disk chosen for mirror creation has an odd number of disk sectors and will have to be resized to be used with DataKeeper. Action: Resize the partition using the 'parted' command or resize the disk (if possible) using platform (VMware, AWS) tools. Caution: data may be lost if this is not done carefully. |
104143 | ERROR | Mirror resume was unsuccessful ($ret | Cause: The mirror could not be established. Action: Check the logs for related errors and try to resolve the reported problems. |
104144 | ERROR | Unable to stop the mirror access for $self->{'md'} on system $self->{'sys'}. Error: $ret. Use the \"mdadm --stop $self->{'md'}\" command to manually stop the mirror. | Cause: The mirror device created on the target node when the mirror was paused could not be stopped. Action: Ensure that the device is not busy or mounted. Try running "mdadm --stop" manually to stop the device. |
104145 | WARN | Unable to dirty full bitmap. Setting fullsync flag. | Cause: A full resync could not be done by dirtying the full bitmap. The fullsync flag will be used instead. This is a non-fatal error as a full synchronization will still be done. Action: None |
104146 | EMERG | WARNING: The target system %s for mirror %s has the target mirror %s currently active. In order to avoid data corruption, data resynchronization will not occur. MANUAL INTERVENTION IS REQUIRED. In order to initiate data resynchronization, you should reboot system %s. | Cause: The mirror is configured on the target system. Action: The target system should be rebooted. DataKeeper should then be able to resync the mirror. |
104147 | EMERG | WARNING: The target system %s for mirror %s has the target disk %s currently mounted. In order to avoid data corruption, data resynchronization will not occur. MANUAL INTERVENTION IS REQUIRED. In order to initiate data resynchronization, you should unmount %s on %s. A full resync will occur. | Cause: The mirror disk is mounted on the target system. Action: The mirror disk should be unmounted on the target system, in order to initiate a full mirror resync. A full resync is required because untracked changes have occurred on the disk. |
104148 | EMERG | The storage configuration for mirror "%s (%s)" does not have a unique identifier and may have potential for data corruption in some environments in certain circumstances. Please refer to the SIOS Product Documentation for details on DataKeeper storage configuration options. | Cause: The disk chosen for mirroring does not provide a UUID to the operating system. DataKeeper cannot mirror a disk without a UUID. Action: You may be able to create a GPT partition table on the disk to provide a UUID for the disk partitions. |
104149 | ERROR | snmptrap returned $err for Trap 142 | Cause: The SNMP trap 142 to notify when DataKeeper has paused the mirror and mounted the target disk failed. The details of the error will be displayed following this message. |
104150 | ERROR | COMMAND OUTPUT: cat /tmp/err$$ | Cause: The SNMP trap 142 to notify when DataKeeper has paused the mirror and mounted the target disk failed. The details of the error are displayed in this message. |
104152 | ERROR | [$SUBJECT event] mail returned $err | Cause: The email to notify when DataKeeper has paused the mirror and mounted the target disk failed. The details of the error will be displayed following this message. |
104153 | ERROR | COMMAND OUTPUT: cat /tmp/err$$ | Cause: The email to notify when DataKeeper has paused the mirror and mounted the target disk failed. The details of the error will be displayed in this message. |
104155 | EMERG | The mirror %s cannot be forced online at this time. The underlying disk %s is mounted, indicating possible data corruption. MANUAL INTERVENTION IS REQUIRED. You must unmount %s on %s and restore the mirror to the last known mirror source system. A full resync will need to be performed from the source system to %s. | Cause: You have mounted the underlying mirrored disk on the target system. Action: You must unmount the disk immediately in order to avoid data corruption. |
104156 | WARN | Resynchronization of "%s" is in PENDING state. Current sync_action is: "%s" | Cause: The resynchronization of the md device was detected in the PENDING state. Action: LifeKeeper will try to fix the issue by forcing a resynchronization. Check the logs for related errors. When successful, verify that the PENDING state has been cleared in /proc/mdstat and that the resynchronization is in progress or has completed for the datarep resource. |
104158 | EMERG | WARNING: The local disk partition $self->{'part'} for data replication device\n$self->{'md'} has failed. MANUAL INTERVENTION IS REQUIRED. | Cause: The local device for a mirror failed. The recovery action has been set to "nothing" in LKDR_FAILURE requiring manual intervention to recover. Action: Check system logs and LifeKeeper logs for errors related to the local disk. |
104163 | WARN | The "%s_data_corrupt" flag is set in "%s/subsys/scsi/resources/netraid/" on system "%s". The mirror is being forced online. | Cause: The mirror is being forced online, overriding the data_corrupt flag. The data on the specified system will be treated as the latest data. If this is not correct then this can lead to data corruption or data loss. Action: None |
104164 | ERROR | The "%s_data_corrupt" flag for related mirror resource "%s" is set in "%s/subsys/scsi/resources/netraid/" on system "%s". To avoid data corruption, LifeKeeper will not restore this mirror or any related mirrors in the hierarchy. | Cause: The data_corrupt flag exists for one or more mirrors in the hierarchy. To avoid corrupting additional data, none of the mirrors are brought in-service until all of the data_corrupt flags are resolved. Action: Check the LifeKeeper logs to determine where each mirror was last in-service, i.e. where the latest data for each mirror resides. The mirrors should be brought in-service on the "previous source" where the full hierarchy was in-service and allowed to synchronize with all targets. |
104165 | ERROR | The "%s_data_corrupt" flag for related mirror resource "%s" is set in "%s/subsys/scsi/resources/netraid/" on system "%s". The mirror resource "%s" is being forced online. | Cause: The mirror is being forced online, overriding the data_corrupt flag. The data on the specified system will be treated as the correct data to be synchronized with all targets. This can lead to data corruption or data loss if this is not the latest data. Action: None |
104171 | ERROR | Failed to create \"source\" flag file on %s to track mirror source. Target %s will not be added to mirror. | Cause: The ‘source’ flag file was not created on the specified system to track the mirror source. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104172 | ERROR | The \"source\" flag file on %s does not contain a valid target (%s). Full resync to remaining targets is required. | Cause: The ‘source’ flag file should contain the system name of a previous source but the name listed was not found in the list of systems configured. Action: Report this problem to SIOS support. |
104173 | ERROR | Failed to create \"source\" flag file on %s to track mirror source. | Cause: The ‘source’ flag file was not created on the specified system to track the mirror source. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104174 | ERROR | Failed to create \"previous_source\" flag file to track time waiting on source. Will not be able to timeout. | Cause: The ‘previous_source’ flag file was not created on the mirror source to track the mirror’s previous source. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104175 | ERROR | Failed to create "data_corrupt" flag file on target "%s". | Cause: The ‘data_corrupt’ flag file was not created on the target listed. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104176 | ERROR | The \"source\" flag file on %s to track mirror source does not exist. Full resync is required. | Cause: The ‘source’ flag file should exist on the system and without it the consistency of the mirror can not be verified. A full resync is required to assure data reliability. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104177 | ERROR | Failed to determine amount of time waiting on %s. | Cause: The amount of time waiting for the previous source could not be determined. Targets will be added with a full resync if the previous source is not found. Action: None |
104178 | ERROR | Failed to update "source" flag file on target "%s", previous source must be merged first. | Cause: The source flag file on the target is updated when it is in-sync and stopped so that the next in-service does not require the previous source. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104180 | ERROR | Internal Error: \"previous_source\" has the local system name (%s). | Cause: The local system name should not be in the previous_source flag file. Action: Report this error to SIOS support. |
104181 | ERROR | Internal Error: There are no targets waiting on %s to be merged. | Cause: There are no targets waiting for a previous source to merge. Action: Report this error to SIOS support. |
104182 | ERROR | Failed to create \"source\" flag file on %s to track mirror source. This may result in a full resync. | Cause: The ‘source’ flag file was not created on the target listed. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104186 | ERROR | Failed to create \"last_owner\" flag file on %s to track mirror source. This may allow in-service of mirror on old data. | Cause: The ‘last_owner’ flag file was not created on the source. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104187 | WARN | $REM_MACH has ${REM_TAG}_last_owner file, create flag ${FLAGTAG}_data_corrupt. | Cause: The system listed had the mirror in-service last. Action: The system listed has the last_owner file, which indicates it has the most recent data and is most likely the best system on which to bring the mirror in-service to avoid losing data. |
104188 | WARN | $REM_MACH is not alive, create flag ${FLAGTAG}_data_corrupt. | Cause: The system listed is not alive. Action: Since the system listed is not alive, it cannot be determined whether that system was a more recent mirror source than the local system. Therefore the local system should not automatically be allowed to bring the mirror in-service. |
104190 | ERROR | [$SUBJECT event] mail returned $err | Cause: The email to notify when DataKeeper resynchronization is complete failed. The details of the error will be displayed following this message. |
104191 | ERROR | COMMAND OUTPUT: cat /tmp/err$$ | Cause: The email to notify when DataKeeper resynchronization is complete failed. The details of the error are displayed in this message. |
104193 | ERROR | snmptrap returned $err for Trap 143 | Cause: SNMP trap 143 to notify when DataKeeper resynchronization is complete failed. The details of the error will be displayed following this message. |
104194 | ERROR | COMMAND OUTPUT: cat /tmp/err$$ | Cause: SNMP trap 143 to notify when DataKeeper resynchronization is complete failed. The details of the error are given in this message. |
104196 | ERROR | $msg | Cause: The kernel does not support async replication. The message will provide more details on the failure. Action: Do not configure asynchronous replication while using this kernel per the details in the message. |
104197 | WARN | $msg | Cause: The is_async_supported check returned a warning message specifying conditions to be aware of when replicating asynchronously. Action: Verify that the condition of the warning does not apply to this configuration. Synchronous replication may be necessary. |
104200 | EMERG | Continue to wait for %s to merge bitmap and do partial resyncs to all targets, no timeout set. | Cause: In a multi-target configuration targets will not be configured until the previous source is available to merge its bitmap so that all targets will be able to partially resynchronize. The LKDR_WAIT_ON_PREVIOUS_SOURCE_TIMEOUT entry in /etc/defaults/LifeKeeper is set to “-1” to wait indefinitely. Action: Check on the status of the previous source listed in the message and resolve any issues that are preventing it from rejoining the cluster. |
104201 | EMERG | To stop waiting for the previous source (forcing a full resync to remaining waiting targets) run: \"%s/bin/mirror_action %s fullresync %s %s\" on %s. | Cause: In a multi-target configuration targets are not being configured, waiting on the previous source to rejoin the cluster. Action: Run the command listed in the message to force an immediate full resynchronization to this target and any remaining targets waiting to be resynchronized. |
104202 | EMERG | Continue to wait for %s to merge bitmap and do partial resyncs to all targets. Continue to wait %d more seconds. | Cause: In a multi-target configuration targets will not be configured until the previous source is available to merge its bitmap so that all targets will be able to partially resynchronize. The LKDR_WAIT_ON_PREVIOUS_SOURCE_TIMEOUT entry in /etc/defaults/LifeKeeper is set to the number of seconds to wait. If the previous source does not join the cluster in that time then targets will be added with a full resynchronization. Action: Check on the status of the previous source listed in the message and resolve any issues that are preventing it from rejoining the cluster. |
104203 | EMERG | To stop waiting for the previous source (forcing a full resync to remaining waiting targets) run: \"%s/bin/mirror_action %s fullresync %s %s\" on %s. | Cause: In a multi-target configuration targets are not being configured, waiting on the previous source to rejoin the cluster. Action: Run the command listed in the message to force an immediate full resynchronization to this target and any remaining targets waiting to be resynchronized. |
104207 | ERROR | Failed to create "data_corrupt" flag file on "%s". | Cause: The ‘data_corrupt’ flag file was not created on the source listed. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104208 | ERROR | Failed to create "data_corrupt" flag file on target "%s". | Cause: The ‘data_corrupt’ flag file was not created on the target listed. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104209 | ERROR | Failed to create "data_corrupt" flag file on "%s". | Cause: The ‘data_corrupt’ flag file was not created on the source listed. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104210 | ERROR | Failed to create "data_corrupt" flag file on target "%s". | Cause: The ‘data_corrupt’ flag file was not created on the target listed. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104211 | ERROR | Failed to create "data_corrupt" flag file on target "%s". | Cause: The ‘data_corrupt’ flag file was not created on the target listed. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104212 | ERROR | The \"source\" flag file on %s to track mirror source does not exist. Full resync to remaining targets is required. | Cause: The ‘source’ flag file should exist on the system and without it the consistency of the mirror can not be verified. A full resync is required to assure data reliability. All targets not already being mirrored will require a full resync. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104214 | ERROR | Failed to create \"source\" flag file on %s to track mirror source. | Cause: The ‘source’ flag file was not created on the specified system to track the mirror source. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104216 | ERROR | Failed to create "data_corrupt" flag file on target "%s". | Cause: The ‘data_corrupt’ flag file was not created on the target listed. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104218 | ERROR | Failed to create "data_corrupt" flag file on target "%s". | Cause: The ‘data_corrupt’ flag file was not created on the target listed. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104221 | ERROR | Failed to create "last_owner" flag file on %s to track mirror source. This may allow in-service of mirror on old data. | Cause: The ‘last_owner’ flag file was not created on the specified system to track the mirror source. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104222 | ERROR | Failed to create "last_owner" flag file to track mirror source. This may allow in-service of mirror on old data. | Cause: The ‘last_owner’ flag file records where the mirror was last in-service, and it was not created. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104223 | ERROR | Failed to create "last_owner" flag file to track mirror source. This may allow in-service of mirror on old data. | Cause: The ‘last_owner’ flag file records where the mirror was last in-service, and it was not created. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104224 | ERROR | Failed to create "previous_source" flag file. | Cause: The ‘previous_source’ flag file was not created. This file is needed to merge the previous source bitmap and avoid a full resync. Action: Check the LKROOT (/opt/LifeKeeper) file system for errors or that it is full. |
104227 | ERROR | Failed to set %s to %s. | Cause: This message indicates a failure to set a sysfs parameter for the nbd driver (/sys/block/nbdX). Action: It may be necessary to adjust NBD_NR_REQUESTS in /etc/default/LifeKeeper to avoid this error. |
104232 | ERROR | Mirror resize failed on %s (%s). Could not set size to %d. | Cause: The mirror resize operation failed. |
104233 | ERROR | Mirror resize failed on %s (%s). Could not set bitmap to %s and bitmap-chunk to %d. | Cause: The mirror resize operation failed. |
104234 | ERROR | The mirror %s failed to resize. You must successfully complete this operation before using the mirror. Please try again. | Cause: The mirror resize operation failed. |
104235 | ERROR | mirror_resize of mirror %s failed due to signal "%s". | Cause: The mirror resize operation failed. |
104236 | EMERG | Resource "%s" is "OSF". The mirror "%s" will wait to replicate data until all resources in the hierarchy%s are in-service. This may indicate inconsistent data. Verify the data is correct before replicating data; replication will continue when all resources in the hierarchy%s are in-service. A full resync may be necessary (see "LKDR_WAIT_TO_RESYNC" in /etc/default/LifeKeeper). | Cause: The specified resource is OSF. This prevents replication until this resource is repaired. The LKDR_WAIT_TO_RESYNC setting in /etc/default/LifeKeeper determines what resources must be in-service before replication is allowed. Action: Determine the cause of the corruption and repair. This may involve bringing the resource in-service on another node. A full resync is most likely required once the data is repaired. |
104237 | WARN | Resource "%s" is "OSU". The mirror "%s" will wait to replicate data until all resources in the hierarchy%s are in-service. To replicate data immediately run: "%s/bin/mirror_action %s resume" on "%s" (see "LKDR_WAIT_TO_RESYNC" in /etc/default/LifeKeeper). | Cause: The specified resource is not in-service and is required before replication is resumed during a restore operation. Action: Bring the required resources in-service to resume replication. Replication can also resume using the GUI command to resume or using the mirror_action command. |
104238 | ERROR | Unable to read $nbd_taint_file. Assuming that the SIOS ‘nbd’ kernel module is not loaded. | Cause: The /sys/module/nbd/taint file could not be opened for reading. Action: Verify that the /sys/module/nbd/taint file exists and is read-enabled. |
104239 | EMERG | $failure_msg | Cause: At least one kernel module required by SIOS DataKeeper failed to load. Action: Verify that the current running kernel is supported by this version of SIOS Protection Suite for Linux. If the kernel was recently updated, re-run the SIOS Protection Suite for Linux setup script located on the SIOS installation media to install kernel modules that are compatible with the current running kernel. |
104242 | ERROR | Unable to read /proc/modules. Assuming that the SIOS ‘$module’ kernel module is not loaded. | Cause: Unable to read /proc/modules. Action: Verify that the /proc/modules file exists and is read-enabled. |
104243 | ERROR | Internal script or routine $caller was called for unsupported kernel module ‘$module’. Supported kernel modules for this script or routine are: $module_list. | Cause: The given script or routine was called for an unsupported kernel module. Action: This is an internal error. Please contact SIOS Customer Support. |
104244 | ERROR | Output from ‘modprobe $module’ command: $pretty_modprobe_out | Cause: Failed to load the given kernel module using the modprobe command. Action: Inspect the output provided in the log message from the failed modprobe attempt and resolve any issues found there. |
104251 | ERROR | There is no LifeKeeper protected resource with tag $tag on system $me. | Cause: The given tag does not correspond to a LifeKeeper protected resource on the given system. Action: Verify that the resource tag and system name are correct. |
104252 | ERROR | Resource $tag is not a $app/$typ resource. Please use the $ins_app/$ins_typ resource-specific canfailover script instead. | Cause: The scsi/netraid-specific canfailover script was called for a non-scsi/netraid resource. Action: Use the canfailover script, if it exists, corresponding to the appropriate app and type of the given resource. |
107015 | ERROR | Exported file system $opt_t cannot be accessed on $me. | |
112000 | FATAL | Usage: "%s %s". Specify the correct usage for the requested command. | Cause: Invalid parameters passed to the SAP create script. Action: Please provide appropriate parameters for the SAP create script. |
112004 | ERROR | Neither the SID/instance pair nor the tag parameter were specified for the internal "%s" routine on %s. If this was a command line operation, specify the correct parameters. Otherwise, consult the troubleshooting documentation. | Cause: Invalid parameters were specified when trying to create an internal SAP object. Action: Provide either the SID and instance or the LifeKeeper resource tag name. |
112013 | ERROR | Unable to open file "%s" on %s. Verify that the specified file exists and/or is readable. | Cause: The sapservices file either does not exist or is not readable. Action: Verify that the sapservices file exists and is readable. |
112017 | ERROR | Error getting resource information for resource "%s" on server "%s" | Cause: Unable to find information about the given resource on the given server. Action: Verify that the given resource exists and that all necessary file systems are mounted on the given server. |
112019 | ERROR | The attempt to update the resource information field for resource %s has failed on %s. View the resource properties manually using "ins_list -t <tag>" to verify that the resource is functional. | Cause: Unable to update the LifeKeeper resource information field for the given resource on the given server. Action: Verify that the resource exists and that LifeKeeper is running on the given server. |
112022 | ERROR | An error occurred while trying to find the IP address corresponding to "%s" on %s. Verify the IP address or host name exists in DNS or the hosts file. | Cause: Unable to find the given IP address or host name on the given server. Action: Verify that the IP address or DNS name exists in DNS or in the local hosts file. |
112024 | FATAL | There was an error verifying the NFS connections for SAP related mount points on $me. One or more NFS servers is not operational and needs to be restarted. | Cause: At least one NFS shared file system listed in the SAP_NFS_CHECK_DIRS parameter is currently unavailable. Action: Verify that the NFS server is alive, all necessary NFS-related services are running, and that all necessary file systems are being exported. |
112028 | ERROR | Unable to determine the user name for the SAP administrative user for resource %s on %s. | Cause: Unable to find information about the given resource on the given server. Action: Verify that the given resource exists and that all necessary file systems are mounted on the given server. |
112029 | ERROR | The canextend script "%s" either does not exist or is not executable on %s. | Cause: The SAP canextend script either does not exist or is not executable. Action: Verify that the SAP canextend script exists and is executable. |
112037 | ERROR | Unable to create an internal object for the SAP instance using SID %s, instance %s, and tag %s on server %s. Verify that all necessary SAP file systems are mounted and accessible before reattempting the operation. | Cause: Unable to create an internal SAP object to represent the given SAP instance. Action: Verify that the SAP instance is properly installed and configured and that all necessary file systems are mounted. |
112040 | ERROR | The SAP Directory "%s" ("%s") does not exist on %s. Verify that the directory exists and that the SAP software is properly installed. | Cause: The given SAP installation directory does not exist on the given server. Action: Verify that the SAP software is properly installed and that all necessary file systems are mounted. |
112041 | ERROR | The required utility "%s" was not found or was not executable on %s. Verify the SAP installation and location of the required utility. | Cause: The saphostexec or saposcol utility either could not be found or is not executable. Action: Verify that the SAP Host Agent package is installed correctly and that all necessary file systems are mounted. |
112042 | ERROR | One or more SAP or LifeKeeper validation checks has failed on %s. Please update the SAP software on this host to include the SAPHOST and SAPCONTROL packages. | Cause: The saphostexec or saposcol utility either could not be found or is not executable. Action: Verify that the SAP Host Agent package is installed correctly and that all necessary file systems are mounted. |
112048 | ERROR | The SAP instance %s is already under LifeKeeper protection on server %s. Choose another SAP instance to protect or specify the correct instance. | Cause: The given SAP instance is already protected by LifeKeeper on the given server. Action: Choose an SAP instance which is not already under LifeKeeper protection. |
112049 | ERROR | Unable to locate the SAP Mount directory on %s. Verify that all SAP file systems are mounted and accessible before reattempting the operation. | Cause: Unable to determine the location of the SAP Mount (sapmnt) directory. Action: Verify that all necessary file systems are mounted and that all necessary SAP instance profiles are accessible. |
112050 | ERROR | Detected multiple virtual IP addresses/host names for instance %s on %s. Verify that the instance is configured correctly. | Cause: Multiple virtual IPs or hostnames were detected for the given SAP instance on the given server. Action: Verify that the virtual IP or hostname associated to the instance is configured correctly. |
112051 | ERROR | The host name associated to instance %s under SID %s (%s) is a full or partial match to the local physical host name (%s). For a highly-available cluster deployment, this must be set to a virtual host name (see SAP Note 962955). The host name associated to the instance may be found either as the "%s" value in the instance profile (%s) or as the value of "%s" or "%s" in the default profile (%s). The parameter ‘SAP_IGNORE_HOSTNAME_CHECK=1’ may be set in /etc/default/LifeKeeper to override this check. | Cause: The given host name parameter is set to a physical server host name on the given server. Action: Set the given host name parameter to a virtual host name. |
112053 | ERROR | Detected multiple instances under SID %s with the same instance number (%s) on %s. Each instance within a particular SID must have a unique instance number. | Cause: Multiple SAP instances with the same instance number were detected under the same SAP SID. Action: Reconfigure the SAP environment so that each instance under a given SAP SID has a unique instance number. |
112056 | ERROR | The NFS export for the path "%s" required by the instance %s for the "%s" directory does not have an NFS hierarchy protecting it on %s. You must create an NFS hierarchy to protect this NFS export before creating the SAP resource hierarchy. | Cause: The NFS export for the given file system is not currently protected by LifeKeeper. Action: Create a LifeKeeper NFS hierarchy for the given exported file system and reattempt SAP resource creation. |
112057 | ERROR | Unable to create a file system resource hierarchy for the file system "%s" on %s. | Cause: Unable to create a LifeKeeper file system resource hierarchy to protect the given file system on the given server. Action: Check the LifeKeeper and system logs for more information. |
112058 | ERROR | Unable to create a dependency between parent tag "%s" and child tag "%s" on "%s". | Cause: Unable to create a LifeKeeper dependency between the given resources on the given server. Action: Check the LifeKeeper logs for more information. |
112060 | ERROR | All attempts at local recovery for the SAP resource %s have failed on %s. A failover to the backup server will be attempted. | Cause: Unable to recover the given SAP resource on the given server. Action: A failover of the SAP resource hierarchy will be attempted automatically. No user intervention is required. |
112061 | ERROR | The values specified for the target and the template servers are the same. Please specify the correct values for the target and template servers. | Cause: The template and target servers provided during SAP resource extension are the same. Action: Provide the correct names for the template and target servers and reattempt the extend operation. |
112062 | ERROR | Unable to find the home directory "%s" for the SAP administrative user "%s" on %s. Verify that the SAP software is installed correctly. | Cause: Unable to find the home directory for the given SAP user on the given server. Action: Verify that the SAP software is installed correctly and that the appropriate SAP administrative user for the given SID exists on the server. |
112063 | ERROR | The SAP administrative user "%s" does not exist on %s. Verify that the SAP software is installed correctly or create the required SAP user on %s. | Cause: The given SAP administrative user does not exist on the given server. Action: Verify that the SAP software is properly installed and create the required SAP administrative user if necessary. |
112064 | ERROR | The group ID for user "%s" is not the same on template server "%s" and target server "%s". Please correct the group ID for the user so that it is the same on the template and target servers. | Cause: The group IDs for the given SAP administrative user on the template and target servers do not match. Action: Modify the group ID of the given SAP administrative user so that it is the same on the template and target servers. |
112065 | ERROR | The user ID for user "%s" is not the same on template server "%s" and target server "%s". Please correct the user ID so that it is the same on the template and target servers. | Cause: The user IDs for the given SAP administrative user on the template and target servers do not match. Action: Modify the user ID of the given SAP administrative user so that it is the same on the template and target servers. |
112066 | ERROR | Required SAP utilities could not be found in "%s" on %s. Verify that the SAP software is installed correctly. | Cause: Unable to locate necessary SAP executables or the SAP instance profile. Action: Verify that the SAP software is properly installed and configured and that all necessary file systems are mounted. |
112069 | ERROR | The command "%s" is not found in the "%s" perl module ("%s") on %s. Please check the command specified and retry the operation. | Cause: The given command was not found in the sap perl module on the given server. Action: If this error resulted from a user-initiated command line action, verify that the correct routine name was provided to the remoteControl script. If this error occurred during normal LifeKeeper operation, please submit an issue report to SIOS customer support. |
112071 | ERROR | The file "%s" exists, but was not read and write enabled on server %s. Enable read and write permissions on the specified file. | Cause: The given file does not have read/write permissions enabled on the given server. Action: Enable read/write permissions on the given file. |
112073 | ERROR | Unable to create an internal object for the SAP instance using either SID "%s" and instance "%s" or tag "%s" on server "%s". Either the values specified for the object initialization (SID/instance pair or tag, system) were not valid, or an error occurred while attempting to gather information about the SAP instance. If all specified parameters are correct, verify that all necessary SAP file systems are mounted and accessible before reattempting the operation. | Cause: Unable to create an internal SAP object to represent the given SAP instance. Action: Verify that the SAP instance is properly installed and configured and that all necessary file systems are mounted. |
112074 | WARN | WARNING: The profile "%s" for SID %s and instance %s has Autostart enabled on %s. Disable Autostart for the specified instance by setting Autostart=0 in the profile. | Cause: The Autostart parameter is enabled in the given instance profile on the given server. Action: Disable Autostart for the given SAP instance by setting ‘Autostart = 0’ in the instance profile. |
112076 | FATAL | Unable to start the sapstartsrv service for SID $sid and instance $Instance on $me. Verify that the sapservices file is correct and that the process can be started manually. | Cause: Unable to start the SAP Start Service (sapstartsrv) process for the given SAP instance. Action: Verify that the sapservices file contains the appropriate command to start the sapstartsrv process and that the process can be started manually. |
112077 | ERROR | Unable to stop the sapstartsrv service for SAP SID %s and SAP instance %s on %s. Verify that the sapservices file is correct and the process can be stopped manually. | Cause: Unable to stop the SAP Start Service (sapstartsrv) process for the given SAP instance. Action: Verify that the sapservices file entry for the instance is correct and that the sapstartsrv process can be stopped manually. |
112078 | ERROR | ERSv1 is only supported in two-node clusters. Resource %s is unable to be extended to system %s. Upgrade to ERSv2 in order to extend the hierarchy to three or more nodes. | Cause: Unable to extend an SAP resource representing an ERSv1 instance to three or more nodes. Action: In order to extend an SAP resource representing an ERS instance to three or more nodes, upgrade to ERSv2. Upgrade instructions are provided in the online product documentation. |
112082 | WARN | Instance %s is running a different version of the enqueue server than its corresponding enqueue replication server. This configuration is not supported by SAP and will lead to unexpected resource behavior. See SAP Note 2711036 – "Usage of the Standalone Enqueue Server 2 in an HA Environment" for more details. Please review the online product documentation for instructions on how to modify the instance profiles for the enqueue server and enqueue replication server so that they use the same version. | Cause: The versions of the enqueue server and enqueue replication server do not match. Action: Consult the online product documentation for instructions on how to modify the instance profiles so that the enqueue server and enqueue replication server are using the same version. |
112086 | ERROR | The ERS resource corresponding to resource %s is in-service and maintaining backup locks on a remote system. Bringing resource %s in-service on %s would result in a loss of the backup lock table. Please bring resource %s in-service on the system where the corresponding ERS resource is currently in-service in order to maintain consistency of the lock table. In order to force resource %s in-service on %s, either (i) run the command ‘/opt/LifeKeeper/bin/flg_create -f sap_cs_force_restore_%s’ as root on %s and reattempt the in-service operation or (ii) take the corresponding ERS resource out of service on the remote system. Both of these actions will result in a loss of the backup lock table. | Cause: An in-service operation was attempted for an ASCS/SCS resource while its corresponding ERS instance was running and storing a backup enqueue table on a different server in the cluster. Action: Bring the ASCS/SCS resource in-service on the server where its corresponding ERS instance is running in order for it to retrieve the backup enqueue table. If the ASCS/SCS resource must be forced in-service on the given node, either (i) run the command ‘/opt/LifeKeeper/bin/flg_create -f sap_cs_force_restore_<ASCS/SCS Tag>’ and reattempt the in-service operation, or (ii) take the corresponding ERS resource out of service on the remote server. Both of these actions will result in a loss of the backup enqueue table. |
112089 | ERROR | The internal "%s" routine was called for a resource with unsupported instance type %s. This method supports only SAP instance types TYPE_CS, TYPE_ERS, or TYPE_NEW_ERS (1, 2, or 5). | Cause: The GetEnqVersion routine was called for an unsupported SAP instance type. Only instance types 1 (TYPE_CS), 2 (TYPE_ERS), and 5 (TYPE_NEW_ERS) are supported. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112092 | ERROR | The profile "%s" either does not exist or cannot be read on %s. Unable to determine whether enqueue replication is enabled for resource %s. Please verify that the file exists and can be read. | Cause: The given instance profile either does not exist or cannot be read on the given server. Action: Verify that the file exists and is read-enabled. |
112095 | ERROR | Error creating resource "%s" on server "%s" | Cause: The given resource could not be created on the given server. Action: Verify that the SAP software is properly installed and that all necessary file systems are mounted and accessible on the given server. |
112096 | ERROR | Resource %s is not currently in-service on server %s. Manually bring the resource in-service and retry the operation. | Cause: While attempting to create a dependency, the given resource was not in-service on the given server. Action: Bring the resource in-service on the given server and retry the operation. |
112101 | ERROR | Error getting resource information for resource "%s" on server "%s" | Cause: Unable to find information about the given resource on the given server. Action: Verify that the given resource exists and that all necessary file systems are mounted on the given server. |
112102 | ERROR | Cannot extend resource "%s" to server "%s" | Cause: The given resource cannot be extended to the given server. Action: Verify that the SAP software is properly installed on the target server and that any necessary shared file systems are accessible from the target system. |
112103 | ERROR | Error creating resource "%s" on server "%s" | Cause: The given resource could not be created on the given server. Action: Verify that the SAP software is properly installed and that all necessary file systems are mounted and accessible on the given server. |
112104 | ERROR | The extend script "%s" either does not exist or is not executable on %s. | Cause: The given extend script does not exist or is not executable on the given server. Action: Verify that all necessary recovery kits are installed and that the given extend script is executable. |
112106 | ERROR | Unable to create an internal SAP object for resource "%s" on %s. If the tag is correct, verify that all necessary SAP file systems are mounted and accessible. | Cause: Unable to find information about the given resource on the given server. Action: Verify that the given resource exists and that all necessary file systems are mounted on the given server. |
112112 | ERROR | Error getting resource information for resource "%s" on server "%s". | Cause: Unable to find information about the given resource on the given server. Action: Verify that the given resource exists and that all necessary file systems are mounted on the given server. |
112115 | FATAL | Error getting resource information for resource $tag on server $sap::me. | Cause: Unable to find information about the given resource on the given server. Action: Verify that the given resource exists and that all necessary file systems are mounted on the given server. |
112119 | ERROR | Error getting resource information for resource "%s" on server "%s". | Cause: Unable to find information about the given resource on the given server. Action: Verify that the given resource exists and that all necessary file systems are mounted on the given server. |
112123 | ERROR | Error getting resource information for resource "%s" on server "%s". | Cause: Unable to find information about the given resource on the given server. Action: Verify that the given resource exists and that all necessary file systems are mounted on the given server. |
112124 | ERROR | Resource with either matching tag "%s" or id "%s" already exists on server %s for app "%s" and type "%s". | Cause: A resource with the given resource tag or ID and the same app and type already exists on the given server. Action: Verify that the SAP instance is not already under LifeKeeper protection on the given server. If it is not already protected, choose a different resource tag name. |
112125 | ERROR | Unable to create an SAP object for resource %s on system %s. | Cause: Unable to create an internal SAP object to represent the given resource on the given server. Action: Verify that the SAP software is properly installed and that all necessary file systems are mounted and accessible on the given server. |
112126 | FATAL | Usage: "%s %s". Specify the correct usage for the requested command. | Cause: Incorrect usage of the SAP canextend script. Action: Specify the correct parameters for the canextend script: canextend <template server> <template tag> |
112127 | FATAL | Usage: "$cmd $usage". Specify the correct usage for the requested command. | Cause: Incorrect usage of the SAP delete script. Action: Specify the correct parameters for the delete script: delete [-U] -t <tag> -i <id> |
112128 | FATAL | Usage: "%s %s". Specify the correct usage for the requested command. | Cause: Incorrect usage of the SAP depstoextend script. Action: Specify the correct parameters for the depstoextend script: depstoextend <template server> <template tag> |
112129 | FATAL | Usage: "%s %s". Specify the correct usage for the requested command. | Cause: Incorrect usage of the SAP extend script. Action: Specify the correct parameters for the extend script: extend <template server> <template tag> <switchback> <target tag> |
112130 | FATAL | Usage: "$cmd $usage". Specify the correct usage for the requested command. | Cause: Incorrect usage of the SAP quickCheck script. Action: Specify the correct parameters for the quickCheck script: quickCheck -t <tag> -i <id> |
112131 | FATAL | Usage: "%s %s". Specify the correct usage for the requested command. | Cause: Incorrect usage of the SAP recover script. Action: Specify the correct parameters for the recover script: recover -d <tag> |
112132 | FATAL | Usage: "$cmd $usage". Specify the correct usage for the requested command. | Cause: Incorrect usage of the SAP remoteControl script. Action: Specify the correct parameters for the remoteControl script: remoteControl <tag> <remote instance> <remote cmd> <remote cmd option> <primary system> <primary tag> |
112133 | FATAL | Usage: "%s %s". Specify the correct usage for the requested command. | Cause: Incorrect usage of the SAP remove script. Action: Specify the correct parameters for the remove script: remove -t <tag> -i <id> |
112134 | FATAL | Usage: "$cmd $usage". Specify the correct usage for the requested command. | Cause: Incorrect usage of the SAP restore script. Action: Specify the correct parameters for the restore script: restore -t <tag> -i <id> |
112137 | ERROR | The required parameter "parent" was either not provided or was invalid in the $func routine on $me. | Cause: Incorrect usage of the CleanUp routine. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112138 | ERROR | At least one required process for instance %s was not started successfully during "%s" on server %s. Please check the LifeKeeper and system logs for additional information. | Cause: At least one required process for the given SAP instance did not start successfully on the given server. Action: Correct any issues found in the LifeKeeper or system logs or SAP trace files and retry the operation. |
112140 | FATAL | The tag parameter was not specified for the internal "$func" routine on $me. If this was a command line operation, specify the correct parameters. Otherwise, consult the troubleshooting documentation. | Cause: The tag parameter was not specified in the GetLK routine on the given server. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112141 | ERROR | Either the SID ("%s") or instance ("%s") parameter was not specified for the "%s" routine on %s. | Cause: Either the SID or instance parameter was not specified in the StatusSapServer routine on the given server. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112142 | ERROR | Either the SID ("%s"), instance ("%s"), or instance number ("%s") parameter was not specified for the "%s" routine on %s. | Cause: Either the SID, instance, or instance number parameter was not provided to the StartSapServer routine on the given server. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112143 | ERROR | The SID, instance, or instance number parameter was not specified for the "%s" routine on %s. | Cause: Either the SID, instance, or instance number parameter was not specified in the StopSapServer routine on the given server. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112173 | ERROR | The file "%s" does not exist or was not readable on %s. Verify that the specified file exists and is readable. | Cause: The given file does not exist or is not readable on the given server. Action: Verify that the file exists and/or modify its permissions so that it is readable. |
112174 | ERROR | The file "%s" does not exist or was not readable on %s. Verify that the specified file exists and is readable. | Cause: The given file does not exist or is not readable on the given server. Action: Verify that the file exists and/or modify its permissions so that it is readable. |
112175 | ERROR | The file "%s" does not exist or was not readable on %s. Verify that the specified file exists and is readable. | Cause: The given file does not exist or is not readable on the given server. Action: Verify that the file exists and/or modify its permissions so that it is readable. |
112194 | ERROR | There was an error verifying the NFS connections for SAP related mount points on %s. One or more NFS servers is not operational and needs to be restarted. | Cause: At least one critical NFS shared file system whose mount point is listed in the SAP_NFS_CHECK_DIRS entry in /etc/default/LifeKeeper is currently unavailable. Action: Verify that all necessary NFS shared file systems are accessible and restart any NFS server which is not currently operational. |
112195 | FATAL | There was an error verifying the NFS connections for SAP related mount points on $me. One or more NFS servers is not operational and needs to be restarted. | Cause: At least one critical NFS shared file system whose mount point is listed in the SAP_NFS_CHECK_DIRS entry in /etc/default/LifeKeeper is currently unavailable. Action: Verify that all necessary NFS shared file systems are accessible and restart any NFS server which is not currently operational. |
112196 | WARN | There was an error verifying the NFS connections for SAP related mount points on %s. One or more NFS servers is not operational and needs to be restarted. | Cause: At least one critical NFS shared file system whose mount point is listed in the SAP_NFS_CHECK_DIRS entry in /etc/default/LifeKeeper is currently unavailable. Action: Verify that all necessary NFS shared file systems are accessible and restart any NFS server which is not currently operational. |
112201 | ERROR | The internal object value "%s" was empty. Unable to complete "%s" on %s. | Cause: No resource tag argument was provided to the GetLKEquiv routine. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112203 | ERROR | The internal object value "%s" was empty. Unable to complete "%s" on %s. Additional information available in the LifeKeeper and system logs. | Cause: The SAP instance number was not provided to the IsInstanceRunning routine. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112204 | ERROR | The internal object value "%s" was empty. Unable to complete "%s" on %s. | Cause: Unable to determine either the appropriate saphostexec or saposcol command to use. Action: If using the SAP_SRVHOST_CMD, SAP_HOSTCTL_CMD, or SAP_OSCOL_CMD LifeKeeper tunable values to provide the appropriate commands for a version of SAP NetWeaver prior to SAP kernel 7.3, ensure that these tunable values are set appropriately. |
112205 | ERROR | The internal object value "%s" was empty. Unable to complete "%s" on %s. | Cause: The SAP instance was not provided to the SAPRemExec routine on the given system. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112206 | ERROR | The required action parameter was not provided. Unable to complete "%s" on %s. | Cause: No action was provided to the SAPRemExec routine on the given system. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112208 | ERROR | Error getting the value of "%s" or "%s" from the default profile "%s" on %s. Verify that the specified value exists. | Cause: Unable to obtain the virtual IP or host name from the given profile on the given server. Action: Verify that the appropriate entry exists in the profile. |
112209 | FATAL | Unable to gather required information from the SAP default profile for SID $sid ($DPFL) on $me. Verify that the default profile exists and is accessible. | Cause: Unable to obtain information about the SAP instance from the given profile on the given server. Action: Verify that the SAP software is properly installed and that the given profile exists and is read-enabled. |
112214 | ERROR | Unable to determine the status of the path "%s" ("%s") on %s. The path on %s may require the execution of the command: "mount <ip>:<export> %s". Verify that the SAP software is correctly installed and that all SAP file systems are mounted and accessible. | Cause: The status of the file system on the given path could not be determined. Action: Verify that the SAP software is properly installed and that all necessary file systems are mounted. |
112219 | ERROR | [HACONNECTOR:%s] Unable to write to file "%s" on %s. If the file already exists, manually enable write permissions on it. | Cause: The given file does not have read/write permissions enabled on the given server. Action: Enable read/write permissions on the given file. |
112220 | ERROR | Unable to start the sapstartsrv service for SID %s and SAP instance %s on %s. Verify that the sapservices file is correct and the process can be started manually. | Cause: Unable to start the SAP Start Service (sapstartsrv) process for the given SAP instance. Action: Verify that the sapservices file contains the appropriate command to start the sapstartsrv process and that the process can be started manually. |
112221 | ERROR | The internal "%s" routine was called for a resource with unsupported instance type %s. This method supports only SAP instance types TYPE_CS, TYPE_ERS, or TYPE_NEW_ERS (1, 2, or 5). | Cause: The given routine was called for an internal SAP object with an unsupported instance type. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112222 | ERROR | The internal "%s" routine was called for a resource with unsupported instance type %s. This method supports only SAP instance types TYPE_CS, TYPE_ERS, or TYPE_NEW_ERS (1, 2, or 5). | Cause: The given routine was called for an internal SAP object with an unsupported instance type. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112223 | ERROR | The internal "%s" routine was called for a resource with unsupported instance type %s. This method supports only SAP instance type TYPE_CS (1). | Cause: The given routine was called for an internal SAP object with an unsupported instance type. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112224 | ERROR | The internal "%s" routine was called for a resource with unsupported instance type %s. This method supports only SAP instance type TYPE_CS (1). | Cause: The given routine was called for an internal SAP object with an unsupported instance type. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112225 | ERROR | The internal "%s" routine was called for a resource with unsupported instance type %s. This method supports only SAP instance type TYPE_NEW_ERS (5). | Cause: The given routine was called for an internal SAP object with an unsupported instance type. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112226 | ERROR | The internal "%s" routine was called for a resource with unsupported instance type %s. This method supports only SAP instance types TYPE_CS, TYPE_ERS, and TYPE_NEW_ERS (1, 2, and 5). | Cause: The given routine was called for an internal SAP object with an unsupported instance type. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112227 | ERROR | The internal "%s" routine was called for a resource with unsupported instance type %s. This routine only supports SAP instance type TYPE_CS (1). | Cause: The given routine was called for an internal SAP object with an unsupported instance type. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112228 | ERROR | The internal "%s" routine was called for a resource with unsupported instance type %s. This method supports only SAP instance type TYPE_NEW_ERS (5). | Cause: The given routine was called for an internal SAP object with an unsupported instance type. Action: This is an internal error. Please submit an issue report to SIOS customer support. |
112229 | ERROR | The profile "%s" either does not exist or cannot be read on %s. Unable to determine whether enqueue replication is enabled for resource %s. Please verify that the file exists and can be read. | Cause: The given instance profile either does not exist or cannot be read on the given server. Action: Verify that the file exists and is read-enabled. |
112325 | ERROR | The $SAP_CONTROL utility cannot be located. Verify that all necessary file systems are mounted and that the $SAP_CONTROL utility can be located in the $sapadmin user’s PATH. | Cause: Unable to locate the sapcontrol utility required for SAP instance administration. Action: Verify that all necessary file systems are mounted and that the sapcontrol utility can be located in the SAP administrative user’s PATH. |
112326 | ERROR | The "which $SAP_CONTROL" command for user $sapadmin returned $sapcmd as the location of the $SAP_CONTROL utility, but the utility could not be found or was not executable in this location. Verify that all necessary file systems are mounted and that the $SAP_CONTROL utility can be located in the $sapadmin user’s PATH. | Cause: Unable to locate the sapcontrol utility required for SAP instance administration. Action: Verify that all necessary file systems are mounted and that the sapcontrol utility can be located in the SAP administrative user’s PATH. |
112412 | ERROR | Invalid mount entry detected (‘$entry’). Verify that all mount entries in the following files on server $me have the correct format: $filesToCheckList. | Cause: The given mount entry has an invalid format. Action: Inspect the files shown in the error message and verify that all mount entries are formatted correctly (using fstab-style entries). |
112416 | ERROR | No parent directory provided. Unable to obtain subdirectories. | Cause: The expected SAP directory layout for a SID does not exist. Action: Verify that the SAP directory structure is properly set up and that all links are correct. |
112417 | ERROR | Unable to open directory ‘$ParentDir’: $! | Cause: LifeKeeper is unable to open the specified directory. The error is provided in the message. Action: Check the listed directory and verify that its permissions and owner are correct. |
112433 | ERROR | Unsupported SAPENQ_VERSION ($enqversion) for resource $tag on $me. Unable to obtain enqueue replication status. | Cause: An unsupported value was detected for the SAPENQ_VERSION parameter for the given resource on the given server. Action: Verify that SAPENQ_VERSION is set to a valid value (1 or 2, representing the version of the enqueue server currently in use) in the info file for the given resource. |
112437 | ERROR | Profile "$srvpf" not found on $me. Unable to obtain enqueue replication status for instance $inst. | Cause: The given instance profile either does not exist or cannot be read on the given server. Action: Verify that the file exists and is read-enabled. |
112438 | ERROR | Unsupported SAPENQ_VERSION ($enqversion) for resource $tag on $me. Unable to obtain enqueue replication status for instance $inst. | Cause: An unsupported value was detected for the SAPENQ_VERSION parameter for the given resource on the given server. Action: Verify that SAPENQ_VERSION is set to a valid value (1 or 2, representing the version of the enqueue server currently in use) in the info file for the given resource. |
112470 | ERROR | The instance profile %s could not be found on %s. Verify that the SAP software is installed correctly and that all necessary file systems are mounted and accessible. | Cause: The given instance profile either does not exist or cannot be read on the given server. Action: Verify that the file exists and is read-enabled. |
112490 | ERROR | [HACONNECTOR] Unable to determine the corresponding tag for resource with ID "$res" on $me. | Cause: Unable to find a LifeKeeper SAP resource with the given resource ID on the given server. Action: Verify that the resource ID provided as the --res argument of the fra command corresponds to a valid LifeKeeper SAP resource. |
112507 | ERROR | [HACONNECTOR] At least one required process for the instance was not killed successfully during the fra migrate action on $me. Aborting resource migration. | Cause: The SAP instance was not successfully stopped on the given server while attempting a fra migrate action. Action: Manually kill any processes still running for the SAP instance and reattempt the migrate action. |
112517 | ERROR | Unable to open file ‘$critMountFile’ for writing on $me. If the file already exists, verify that it is read and write-enabled. | Cause: The given critical_nfs_mounts file cannot be opened for writing. Action: Verify that the file is read and write-enabled. |
112539 | ERROR | [HACONNECTOR] Unable to find rpm information for one or more packages on $me. | Cause: The HA Connector gvi ("Get Version Information") routine was unable to determine the current version number of LifeKeeper and/or the SAP Recovery Kit. Action: Verify that the LifeKeeper Core and SAP Recovery Kit rpm information can be obtained with the rpm -q command. |
112573 | ERROR | Failed to create or write to "$file" on server $me. Verify that the filesystem is not full and that the file is write-enabled. | Cause: Failed to create or write to the given file on the given server. Action: Verify that the filesystem is not full and that, if the file exists, it is write-enabled. |
112575 | ERROR | Failed to delete file "$file" on server $me. Please delete the file manually. | Cause: Failed to delete the given file on the given server. Action: Delete the file manually. |
112577 | ERROR | Failed to remove empty directory "$dir" on server $me. Please remove the directory manually. | Cause: Failed to remove the given directory on the given server. Action: Remove the directory manually. |
112581 | ERROR | Failed to execute routine $routine with arguments "$arg_list" on server $sys. | Cause: Failed to execute the given SAP perl module routine with the given arguments on the given server. Action: Verify that LifeKeeper is running and fully initialized on the given server. Inspect the LifeKeeper log for additional information. |
112582 | WARN | Detected unsupported NFS version "$foundNFSVers" for mount point $mountLoc from shared file system $mountDev. The default NFS version specified by the pingnfs utility will be used instead. If necessary, this value can be overridden by setting the NFS_VERSION parameter in /etc/default/LifeKeeper. | Cause: An unsupported NFS version was detected. Action: Update to a supported NFS version, or if the unsupported version must be used, override it with the defaults file entry named in the message. |
112583 | WARN | Detected unsupported NFS transport protocol "$foundProto" for mount point $mountLoc from shared file system $mountDev. The default NFS protocol specified by the pingnfs utility will be used instead. If necessary, this value can be overridden by setting the NFS_RPC_PROTOCOL parameter in /etc/default/LifeKeeper. | Cause: An unsupported NFS transport protocol was detected. Action: Update to a supported protocol, or if the unsupported protocol must be used, override it with the defaults file entry named in the message. |
112590 | ERROR | At least one required parameter was not provided while attempting to update an SAP resource property (tag=’$tag’, property=’$property’, value=’$value’). | Cause: One or more required parameters were not provided. Action: Retry the operation with the correct parameters. |
112591 | ERROR | The specified SAP resource property ‘$property’ is either invalid or cannot be modified. | Cause: The specified property cannot be modified. Action: None. |
112592 | ERROR | Invalid $property_name ‘$value’. The $property_name must be one of the following: $valid_value_str | Cause: An incorrect SAP property value was specified to be updated. Action: Specify the correct property value from the provided list. |
112593 | ERROR | Resource ‘$tag’ not found on server ‘$me’. | Cause: The specified resource tag was not found on the server. Action: Specify the correct SAP tag. |
112594 | ERROR | Resource ‘$tag’ has resource type $app/$res, expected $appType/$resType. | Cause: Unexpected application or resource type when updating the SAP property value. This may be due to specifying the wrong tag name. Action: Check that the specified resource tag is an SAP resource. |
112595 | ERROR | Server $sys is not alive. All servers hosting an equivalent resource to ‘$tag’ must be alive in order to update the $property_name for the resource. | Cause: LifeKeeper is not fully available on all servers in the cluster where the resource is extended. Action: Start LifeKeeper on all servers in the cluster. |
112597 | ERROR | Failed to set the $property_name for resource ‘$eqv_tag’ to ‘$value’ on server ‘$eqv_sys’. | Cause: The property was not updated on the server specified. Action: Check the logs on the server to determine the failure. |
112599 | ERROR | Failed to set the $property_name for resource ‘$tag’ to ‘$value’ on at least one server in the cluster. | Cause: The property was not updated. Additional messages will indicate which server was not updated. Action: Check the log on the server that failed. |
112976 | ERROR | There is no LifeKeeper protected resource with tag $tag on system $me. | Cause: The resource tag provided to the SAP canfailover script does not correspond to any existing LifeKeeper resource. Action: Verify that the resource tag name is correct and execute the command again. |
112977 | ERROR | Resource $tag is not a $app/$typ resource. Please use the $ins_app/$ins_typ resource-specific canfailover script instead. | Cause: The resource provided to the SAP canfailover script is not an appsuite/sap resource. Action: Use the appropriate type-specific canfailover script for the given resource. |
122005 | ERROR | Unable to "%s" on "%s" | Cause: There was an unexpected error running "getlocks". Action: Check the logs for related errors and try to resolve the reported problem. |
122007 | ERROR | Unable to "%s" on "%s" | Cause: There was an unexpected error running "rlslocks". Action: Check the logs for related errors and try to resolve the reported problem. |
122009 | ERROR | The path %s is not a valid file. | Cause: There is no listener.ora file. Action: Ensure the file exists and retry the operation. |
122010 | ERROR | The listener user does not exist on the server %s. | Cause: The "stat" command could not obtain the user ID. Action: Retry the operation. |
122011 | ERROR | The listener user does not exist on the server %s. | Cause: UID is not in passwd file. Action: Ensure the UID exists in passwd file and retry the operation. |
122012 | ERROR | The listener user does not exist on the server %s. | Cause: User name is not in passwd file. Action: Ensure the user name exists in passwd file; retry the operation. |
122023 | ERROR | The %s command failed (%d) | Cause: This message contains the return code of the "lsnrctl" command. Action: Check the logs for related errors and try to resolve the reported problem. |
122024 | ERROR | $line | Cause: The message contains the output of the "lsnrctl" command. Action: Check the logs for related errors and try to resolve the reported problem. |
122039 | ERROR | Usage error | Cause: Invalid parameters were specified for the restore operation. Action: Verify the parameters and retry the operation. |
122040 | ERROR | Script $cmd has hung on the restore of "$opt_t". Forcibly terminating. | Cause: The listener restore script reached its timeout value. Action: Ensure listener.ora is valid and that LSNR_START_TIME (default 35 seconds) in /etc/default/LifeKeeper is set to a value greater than or equal to the time needed to start the listener. |
122041 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | Cause: LifeKeeper was unable to restore the resource {resource} on {server}. Action: Check the logs for related errors and try to resolve the reported problem. |
122045 | ERROR | Error getting resource information for resource "%s" on server "%s" | Cause: Failed to get resource information. Action: Check your LifeKeeper configuration. |
122046 | ERROR | Usage error | Cause: Invalid parameters were specified for the restore operation. Action: Verify the parameters and retry the operation. |
122049 | ERROR | The script $cmd has hung on remove of "$opt_t". Forcibly terminating. | Cause: The listener remove script reached its timeout value. Action: Ensure listener.ora is valid and that LSNR_STOP_TIME (default 35 seconds) in /etc/default/LifeKeeper is set to a value greater than or equal to the time needed to stop the listener. |
122051 | ERROR | Error getting resource information for resource "%s" on server "%s" | Cause: LifeKeeper was unable to find the resource {tag} on {server}. Action: Check your LifeKeeper configuration. |
122055 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | Cause: LifeKeeper was unable to quickCheck the resource {resource} on {server}. Action: Check the logs for related errors and try to resolve the reported problem. |
122057 | ERROR | Error getting resource information for resource "%s" on server "%s" | Cause: LifeKeeper was unable to find the resource {tag} on {server}. Action: Check your LifeKeeper configuration. |
122064 | WARN | The %s level is set to %s; a %s will not occur. | Cause: The minimal Listener protection level is Start and Monitor. Action: Start the listener manually. |
122066 | ERROR | Script has hung checking "$tag". Forcibly terminating. | Cause: The listener quickCheck script reached its timeout value. Action: Ensure listener.ora is valid and that LSNR_STATUS_TIME (default 15 seconds) in /etc/default/LifeKeeper is set to a value greater than or equal to the time needed to check the listener. |
122067 | ERROR | Usage error | Cause: Invalid parameters were specified for the quickCheck operation. Action: Verify the parameters and retry the operation. |
122069 | ERROR | Usage error | Cause: Invalid parameters were specified for the delete operation. Action: Verify the parameters and retry the operation. |
122072 | ERROR | %s: resource "%s" not found on local server | Cause: Invalid parameters were specified for the recover operation. Action: Verify the parameters and retry the operation. |
122074 | WARN | The local recovery attempt has failed, but the %s level is set to %s, preventing a failover to another node in the cluster. With %s recovery set, all local recovery failures will exit successfully to prevent resource failovers. | Cause: The optional listener recovery level is set to local recovery only. Action: Switch over the resource tree manually. |
122078 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | Cause: LifeKeeper was unable to recover the resource {resource} on {server}. Action: Check the logs for related errors and try to resolve the reported problem. |
122082 | ERROR | Error getting resource information for resource "%s" on server "%s" | Cause: LifeKeeper was unable to find the resource {tag} on {server}. Action: Check your LifeKeeper configuration. |
122083 | ERROR | $cmd has hung checking "$tag". Forcibly terminating. | Cause: The recover script was stopped by a signal. Action: Ensure listener.ora is valid. |
122084 | ERROR | Cannot extend resource "%s" to server "%s" | Cause: LifeKeeper was unable to extend the resource {resource} on {server}. Action: Verify the parameters and retry the operation. |
122085 | ERROR | Usage: %s %s | Cause: Invalid parameters were specified for the canextend operation. Action: Verify the parameters and retry the operation. |
122086 | ERROR | The values specified for the target and the template servers are the same. Please specify the correct values for the target and template servers. | Cause: The values specified for the target and the template servers are the same. Action: Perform the steps listed in the message text. |
122087 | ERROR | Error getting resource information for resource "%s" on server "%s" | Cause: LifeKeeper was unable to find the resource {tag} on {server}. Action: Check your LifeKeeper configuration. |
122088 | ERROR | Error getting resource information for resource "%s" on server "%s" | Cause: Failed to get listener user name from resource information. Action: Ensure the resource info field is valid then retry the operation. |
122089 | ERROR | The listener user %s does not exist on the server %s. | Cause: User name is not in passwd file. Action: Ensure the user name exists in passwd file and retry the operation. |
122090 | ERROR | The id for user %s is not the same on template server %s and target server %s. | Cause: The user ID must be the same on both servers. Action: Change the user ID so that it matches on both servers. |
122091 | ERROR | The group id for user %s is not the same on template server %s and target server %s. | Cause: The group ID must be the same on both servers. Action: Change the group ID so that it matches on both servers. |
122092 | ERROR | Cannot access canextend script "%s" on server "%s" | Cause: LifeKeeper was unable to run pre-extend checks because it was unable to find the "canextend" script on {server}. Action: Check your LifeKeeper configuration. |
122097 | ERROR | Usage: %s %s | Cause: Invalid arguments were specified for the "configActions" operation. Action: Verify the arguments and retry the operation. |
122098 | ERROR | Error getting resource information for resource "%s" on server "%s" | Cause: LifeKeeper was unable to find the resource {tag} on {server}. Action: Check your LifeKeeper configuration. |
122099 | ERROR | Unable to update the resource %s to change the %s to %s on %s. | Cause: LifeKeeper failed to put information into the info field. Action: Restart LifeKeeper and retry the operation. |
122100 | ERROR | Error getting resource information for resource "%s" on server "%s" | Cause: LifeKeeper was unable to find the resource {tag} on {server}. Action: Check your LifeKeeper configuration. |
122101 | ERROR | Unable to update the resource %s to change the %s to %s on %s. | Cause: LifeKeeper failed to put information to info field on {server}. Action: Restart LifeKeeper on {server} and retry the operation. |
122103 | ERROR | Usage: %s %s | Cause: Invalid parameters were specified for the create operation. Action: Verify the parameters and retry the operation. |
122124 | ERROR | END failed hierarchy "%s" of resource "%s" on server "%s" with return value of %d | Cause: LifeKeeper was unable to create the resource {resource} on {server}. Action: Check the logs for related errors and try to resolve the reported problem. |
122126 | ERROR | Unable to "%s" on "%s" | Cause: There was an unexpected error running "rlslocks". Action: Check the logs for related errors and try to resolve the reported problem. |
122127 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | Cause: LifeKeeper was unable to create the resource {resource} on {server}. Action: Check the logs for related errors and try to resolve the reported problem. |
122129 | ERROR | Unable to "%s" on "%s" | Cause: There was an unexpected error running "getlocks." Action: Check the logs for related errors and try to resolve the reported problem. |
122131 | ERROR | Error creating resource "%s" on server "%s" | Cause: LifeKeeper was unable to create the resource {resource} on {server}. Action: Check your LifeKeeper configuration. |
122133 | ERROR | Unable to create a file system resource hierarchy for the file system %s. | Cause: There was an unexpected error running "filesyshier." Action: Check adjacent log messages for further details. |
122135 | ERROR | Unable to create a dependency between parent tag %s and child tag %s. | Cause: There was an unexpected error running "dep_create." Action: Check the logs for related errors and try to resolve the reported problem. |
122140 | ERROR | Resource "%s" is not ISP on server "%s". Manually bring the resource in service and retry the operation. | Cause: The IP resource {tag} on which the listener resource depends must be ISP. Action: Perform the steps listed in the message text. |
122141 | ERROR | Unable to create a dependency between parent tag %s and child tag %s. | Cause: There was an unexpected error running "dep_create." Action: Check the logs for related errors and try to resolve the reported problem. |
122144 | ERROR | Usage: %s %s | Cause: Invalid parameters were specified for the "create_ins" operation. Action: Verify the parameters and retry the operation. |
122145 | ERROR | An error has occurred in utility %s on server %s. View the LifeKeeper logs for details and retry the operation. | Cause: There was an unexpected error running "app_create." Action: Check the logs for related errors and try to resolve the reported problem. |
122146 | ERROR | An error has occurred in utility %s on server %s. View the LifeKeeper logs for details and retry the operation. | Cause: There was an unexpected error running "typ_create." Action: Check the logs for related errors and try to resolve the reported problem. |
122147 | ERROR | An error has occurred in utility %s on server %s. View the LifeKeeper logs for details and retry the operation. | Cause: There was an unexpected error running "newtag." Action: Check the logs for related errors and try to resolve the reported problem. |
122148 | ERROR | Error creating resource "%s" on server "%s" | Cause: LifeKeeper was unable to create the resource {resource} on {server}. Action: Check your LifeKeeper configuration. |
122149 | ERROR | An error has occurred in utility %s on server %s. View the LifeKeeper logs for details and retry the operation. | Cause: There was an unexpected error running "ins_setstate." Action: Check the logs for related errors and try to resolve the reported problem. |
122150 | ERROR | Error creating resource "%s" on server "%s" | Cause: LifeKeeper was unable to create the resource {resource} on {server}. Action: Check your LifeKeeper configuration. |
122151 | ERROR | Usage: %s %s | Cause: Invalid arguments were specified for the "depstoextend" operation. Action: Verify the arguments and retry the operation. |
122152 | ERROR | Usage: %s %s | Cause: Invalid parameters were specified for the "extend" operation. Action: Verify the parameters and retry the operation. |
122153 | ERROR | Error getting resource information for resource "%s" on server "%s" | Cause: LifeKeeper was unable to find the resource {tag} on {server}. Action: Check your LifeKeeper configuration. |
122154 | ERROR | Cannot extend resource "%s" to server "%s" | Cause: LifeKeeper was unable to extend the resource {resource} on {server}. Action: Verify the parameters and retry the operation. |
122155 | ERROR | Resource with either matching tag "%s" or id "%s" already exists on server "%s" for App "%s" and Type "%s" | Cause: During the Listener resource extension, a resource instance was found using the same {tag} and/or {id} but with a different resource application and type. Action: Resource IDs must be unique. The resource instance with the ID matching the Oracle Listener resource instance must be removed. |
122156 | ERROR | Cannot access extend script "%s" on server "%s" | Cause: LifeKeeper was unable to extend the resource hierarchy because it was unable to find the EXTEND script on {server}. Action: Check your LifeKeeper configuration. |
122157 | ERROR | Usage: %s %s | Cause: Invalid arguments were specified for the "getConfigIps" operation. Action: Verify the arguments and retry the operation. |
122158 | ERROR | The file %s is not a valid listener file. The file does not contain any listener definitions. | Cause: Failed to find any listener definitions. Action: Ensure a listener definition is present in listener.ora and retry the operation. |
122159 | ERROR | Usage: %s %s | Cause: Invalid arguments were specified for the "getSidListeners" operation. Action: Verify the arguments and retry the operation. |
122160 | ERROR | The file %s is not a valid listener file. The file does not contain any listener definitions. | Cause: Failed to find any listener definitions. Action: Ensure a listener definition is present in listener.ora and retry the operation. |
122161 | ERROR | Usage: %s %s | Cause: Invalid arguments were specified for the "lsn-display" operation. Action: Verify the arguments and retry the operation. |
122162 | ERROR | Error getting resource information for resource "%s" on server "%s" | Cause: LifeKeeper was unable to find the resource {tag} on {server}. Action: Check your LifeKeeper configuration. |
122163 | ERROR | Usage: %s %s | Cause: Invalid parameters were specified for the updateHelper operation. Action: Verify the parameters and retry the operation. |
122164 | ERROR | END failed hierarchy "%s" of resource "%s" on server "%s" with return value of %d | Cause: LifeKeeper was unable to update the resource {resource} on {server}. Action: Check the logs for related errors and try to resolve the reported problem. |
122166 | ERROR | Usage: %s %s | Cause: Invalid parameters were specified for the "updateHelper" operation. Action: Verify the parameters and retry the operation. |
122170 | ERROR | Unable to create a dependency between parent tag %s and child tag %s. | Cause: There was an unexpected error running "dep_create." Action: Check the logs for related errors and try to resolve the reported problem. |
122171 | ERROR | Unable to create a dependency between parent tag %s and child tag %s. | Cause: There was an unexpected error running "dep_create." Action: Check the logs for related errors and try to resolve the reported problem. |
122172 | ERROR | Usage: %s %s | Cause: Invalid arguments were specified for the "updIPDeps" operation. Action: Verify the arguments and retry the operation. |
122173 | ERROR | END failed hierarchy "%s" of resource "%s" on server "%s" with return value of %d | Cause: LifeKeeper was unable to update the resource {resource} on {server}. Action: Check the logs for related errors and try to resolve the reported problem. |
122175 | ERROR | Unable to "%s" on "%s | Cause: There was an unexpected error running "rlslocks." Action: Check the logs for related errors and try to resolve the reported problem. |
122177 | ERROR | Unable to "%s" on "%s | Cause: There was an unexpected error running "getlocks." Action: Check the logs for related errors and try to resolve the reported problem. |
122180 | ERROR | Unable to create a dependency between parent tag %s and child tag %s. | Cause: There was an unexpected error running "dep_create." Action: Check the logs for related errors and try to resolve the reported problem. |
122181 | ERROR | Unable to create a dependency between parent tag %s and child tag %s. | Cause: There was an unexpected error running "dep_create." Action: Check the logs for related errors and try to resolve the reported problem. |
122183 | ERROR | The path %s is not a valid file. | Cause: There is no listener.ora file. Action: Ensure the file exists and retry the operation. |
122185 | ERROR | The file %s is not a valid listener file. The file does not contain any listener definitions. | Cause: LifeKeeper failed to find any valid listener definitions. Action: Ensure there are valid listener definitions in the listener.ora and retry the operation. |
122186 | ERROR | The value specified for %s cannot be empty. Please specify a value for this field. | Cause: The config and/or executable {path} field is empty. Action: Input a non-empty value for {path} and retry the operation. |
122187 | ERROR | The path %s is not a valid file or directory. | Cause: The defined {path} is invalid. Action: Ensure the {path} exists and retry the operation. |
122188 | ERROR | The path %s is not a valid file or directory. | Cause: The specified {path} does not exist. Action: Ensure the {path} exists and retry the operation. |
122189 | ERROR | The value specified for %s cannot be empty. Please specify a value for this field. | Cause: The config and/or executable Path field is empty. Action: Enter a valid path for the field and retry the operation. |
122190 | ERROR | Usage: %s %s | Cause: Invalid arguments were specified for the "valid_rpath" operation. Action: Verify the arguments and retry the operation. |
122191 | ERROR | The values specified for the target and the template servers are the same. | Cause: The same server was specified for both the target and the template servers for the "valid_rpath" operation. Action: Specify different target and template servers and retry the operation. |
122192 | ERROR | Unable to find the configuration file "oratab" in its default locations, /etc/oratab or %s on "%s" | Cause: There is no oratab file in /etc/oratab or {path}. Action: Ensure the oratab file exists in {path} or that ORACLE_ORATABLOC in /etc/default/LifeKeeper is set to a valid path. |
122193 | ERROR | Unable to find the configuration file "oratab" in its default locations, /etc/oratab or %s on "%s" | Cause: There is no oratab file in /etc/oratab or {path}. Action: Ensure the oratab file exists in {path} or that ORACLE_ORATABLOC in /etc/default/LifeKeeper is set to a valid path. |
122194 | ERROR | Unable to find the configuration file "oratab" in its default locations, /etc/oratab or %s on "%s" | Cause: There is no oratab file in /etc/oratab or {path}. Action: Ensure the oratab file exists in {path} or that ORACLE_ORATABLOC in /etc/default/LifeKeeper is set to a valid path. |
122195 | ERROR | Unable to find the configuration file "oratab" in its default locations, /etc/oratab or %s on "%s" | Cause: There is no oratab file in /etc/oratab or {path}. Action: Ensure the oratab file exists in {path} or that ORACLE_ORATABLOC in /etc/default/LifeKeeper is set to a valid path. |
122196 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | Cause: LifeKeeper was unable to remove the resource {resource} on {server}. Action: Check the logs for related errors and try to resolve the reported problem. |
122197 | ERROR | Unable to find the configuration file "oratab" in its default locations, /etc/oratab or $listener::oraTab on "$me" | Cause: There is no oratab file in /etc/oratab or {path}. Action: Ensure the oratab file exists in {path} or that ORACLE_ORATABLOC in /etc/default/LifeKeeper is set to a valid path. |
122198 | ERROR | remove for $okListener failed. | |
122251 | ERROR | Update of pluggable database info field for "%s" on "%s" failed (%s). | Cause: There was an error while running the command "ins_setinfo" to update the pluggable database information field. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122252 | ERROR | Initial connect with query buffer to database "%s" on "%s" failed, testing output. | Cause: A connection attempt to the Oracle database {sid} to determine the database status has failed. Action: Check the adjacent log messages for further details and related errors. Check the Oracle log (alert.log) and related trace logs (*.trc) for additional information and correct the reported problem(s). |
122253 | ERROR | The Oracle database "%s" is not running or no open connections are available on "%s". | Cause: The database instance {sid} was not running or connections to the database were not available via the credentials provided. Action: The database instance {sid} must be started on {server} and the proper credentials must be provided. |
122254 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122255 | ERROR | no dependency for Oracle on "%s". | |
122261 | ERROR | The Oracle resource (%s) and dependency are not set on %s. | |
122262 | ERROR | Usage: %s %s | |
122263 | ERROR | The restore of %s has timed out on server %s. The default TIMEOUT is 300 seconds. To increase the TIMEOUT, set ORACLE_RESTORE_TIMEOUT in /etc/default/LifeKeeper. | Cause: The restore of {tag} did not complete within the timeout period. Action: Check the logs for the cause of the slow restore. If the database legitimately needs more time to start, set ORACLE_RESTORE_TIMEOUT in /etc/default/LifeKeeper to a larger value. |
122264 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | |
122268 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122269 | ERROR | no dependency for Oracle on "%s". | |
122270 | ERROR | Unable to find the Oracle executable "%s" on "%s". | Cause: The required Oracle executable {exe} was not found on this server {server}. Action: Verify that the Oracle binaries and required software utilities are installed and properly configured on the server {server}. The Oracle binaries must be installed locally on each node or located on shared storage available to all nodes in the cluster. |
122271 | ERROR | Usage: %s %s | |
122272 | ERROR | The remove of %s has timed out on server %s. The default TIMEOUT is 300 seconds. To increase the TIMEOUT, set ORACLE_REMOVE_TIMEOUT in /etc/default/LifeKeeper. | Cause: The remove of {tag} did not complete within the timeout period. Action: Check the logs for the cause of the slow shutdown. If the database legitimately needs more time to stop, set ORACLE_REMOVE_TIMEOUT in /etc/default/LifeKeeper to a larger value. |
122273 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | |
122277 | ERROR | Usage: %s %s | |
122278 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122279 | ERROR | no dependency for Oracle on "%s". | |
122280 | ERROR | Unable to find the Oracle executable "%s" on "%s". | Cause: The required Oracle executable {exe} was not found on this server {server}. Action: Verify that the Oracle binaries and required software utilities are installed and properly configured on the server {server}. The Oracle binaries must be installed locally on each node or located on shared storage available to all nodes in the cluster. |
122281 | ERROR | The quickCheck of %s has timed out on server %s. The default TIMEOUT is 45 seconds. To increase the TIMEOUT, set ORACLE_QUICKCHECK_TIMEOUT in /etc/default/LifeKeeper. | Cause: The quickCheck of {tag} did not complete within the timeout period. Action: Check the logs for the cause of the slow health check. If the check legitimately needs more time, set ORACLE_QUICKCHECK_TIMEOUT in /etc/default/LifeKeeper to a larger value. |
122282 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | |
122284 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122285 | ERROR | no dependency for Oracle on "%s". | |
122287 | ERROR | Unable to find the Oracle executable "%s" on "%s". | Cause: The required Oracle executable {exe} was not found on this server {server}. Action: Verify that the Oracle binaries and required software utilities are installed and properly configured on the server {server}. The Oracle binaries must be installed locally on each node or located on shared storage available to all nodes in the cluster. |
122288 | ERROR | Usage: %s %s | |
122291 | ERROR | Cannot extend resource "%s" to server "%s" | |
122292 | ERROR | The values specified for the target and the template servers are the same: "%s". | Cause: The value specified for the target and template servers for the "extend" operation were the same. Action: You must specify the correct parameter for the {target server} and {template server}. The {target server} is the server where the {tag} will be extended. |
122294 | ERROR | Cannot access canextend script "%s" on server "%s" | Cause: LifeKeeper was unable to verify that the resource hierarchy can be extended because it was unable to find the canextend script on {server}. Action: Check your LifeKeeper configuration. |
122295 | ERROR | Usage: %s %s | |
122296 | ERROR | DB instance "%s" is not protected on "%s". | Cause: The specified Oracle database instance {sid} is not under LifeKeeper protection on {server}. Action: Select a database instance that is protected by LifeKeeper and retry the operation. |
122297 | ERROR | Failed to create object instance for OraclePDB on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle pluggable database being protected. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122298 | ERROR | Unable to locate the oratab file "%s" on "%s". | Cause: The oratab file was not found at the default or alternate locations on {server}. Action: Verify the oratab file exists and has proper permissions for the Oracle user. A valid oratab file is required to complete the operation. |
122299 | ERROR | END failed hierarchy "%s" of resource "%s" on server "%s" with return value of %d | Cause: LifeKeeper was unable to update the resource {resource} on {server}. Action: Check the logs for related errors and try to resolve the reported problem. |
122301 | ERROR | Unable to "%s" on "%s" during resource create. | |
122302 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | |
122304 | ERROR | Unable to "%s" on "%s" during resource create. | |
122305 | ERROR | Unable to determine Oracle user for "%s" on "%s". | Cause: The Oracle Application Recovery Kit was unable to determine the ownership of the Oracle database installation binaries. Action: The owner of the Oracle binaries must be a valid non-root user on {server}. Correct the permissions and ownership of the Oracle database installation and retry the operation. |
122306 | ERROR | Error creating resource "%s" on server "%s" | |
122308 | ERROR | Dependency creation between Oracle pluggable database "%s (%s)" and the dependent resource "%s" on "%s" failed. Reason | Cause: LifeKeeper was unable to create a dependency between the pluggable database resource {tag} and the necessary child resource {childtag}. Action: Check adjacent log messages for further details and related messages. Once any problems have been corrected, it may be possible to create the dependency between {tag} and {childtag} manually. |
122309 | ERROR | %s | Cause: The message contains the output of the failed command. Action: Check adjacent log messages for further details and related messages. |
122311 | ERROR | In-service attempted failed for tag "%s" on "%s". | Cause: The "perform_action" command for {tag} on {server} failed to bring the resource in service. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122312 | ERROR | Usage: %s %s | |
122313 | ERROR | Create of app "%s" on "%s" failed with return code of "%d". | Cause: There was an error running the command "app_create" to create the internal application type. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122314 | ERROR | Create of typ "%s" for app "%s" on "%s" failed with return code of "%d". | Cause: There was an error running the command "typ_create" to create the internal resource type. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122316 | ERROR | Create of resource tag via "newtag" on "%s" failed. | |
122318 | ERROR | Error creating resource "%s" on server "%s" | |
122320 | ERROR | Setting "resstate" for resource "%s" on "%s" failed with return code of "%d". | Cause: There was an error running the command "ins_setstate" to set the resource state to {state}. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122321 | ERROR | Error creating resource "%s" on server "%s" | |
122322 | ERROR | Usage: %s %s | |
122323 | ERROR | Usage: %s %s | |
122324 | ERROR | Usage: %s %s | |
122325 | ERROR | Cannot extend resource "%s" to server "%s" | |
122326 | ERROR | Resource with either matching tag "%s" or id "%s" already exists on server "%s" for App "%s" and Type "%s" | Cause: During the resource extension, a resource instance was found using the same {tag} and/or {id} but with a different resource application and type. Action: Resource IDs must be unique. The resource instance with the matching ID must be removed. |
122327 | ERROR | Error creating resource "%s" on server "%s" | |
122328 | ERROR | Cannot access extend script "%s" on server "%s" | Cause: LifeKeeper was unable to extend the resource hierarchy because it was unable to find the script EXTEND on {server}. Action: Check your LifeKeeper configuration. |
122329 | ERROR | Cannot extend resource "%s" to server "%s" | |
122330 | ERROR | Usage: %s %s | |
122331 | ERROR | Failed to create object instance for OraclePDB on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle pluggable database being protected. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122332 | ERROR | Usage: %s %s | |
122334 | ERROR | Backup node %s is unreachable; abort protection PDB changes. | Cause: The backup node {server} could not be reached, so the change to the set of protected PDBs was aborted. Action: Restore communication with the backup node and retry the operation. |
122336 | ERROR | Update of protection PDB failed for "%s" on "%s". | Cause: The update of the protected PDB information for {tag} on {server} failed. Action: Check adjacent log messages for further details and related messages. |
122339 | ERROR | Usage: %s %s | |
122340 | ERROR | Usage: %s %s | |
122341 | ERROR | Usage: %s %s | |
122342 | ERROR | Failed to create object instance for OraclePDB on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle pluggable database being protected. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122343 | ERROR | Usage: %s %s | |
122344 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | |
122348 | ERROR | Failed to create flag "%s" on "%s". | |
122350 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122351 | ERROR | no dependency for Oracle on "%s". | |
122353 | ERROR | Unable to find the Oracle executable "%s" on "%s". | Cause: The required Oracle executable {exe} was not found on this server {server}. Action: Verify that the Oracle binaries and required software utilities are installed and properly configured on the server {server}. The Oracle binaries must be installed locally on each node or located on shared storage available to all nodes in the cluster. |
122356 | ERROR | Usage: %s %s | |
122357 | ERROR | Failed to create object instance for OraclePDB on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle pluggable database being protected. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122358 | ERROR | The selected oracle SID "%s" is not a CDB. | |
122359 | ERROR | No protectable PDB found for the selected SID "%s". | |
122360 | ERROR | No protected Oracle database found on "%s". | |
122500 | ERROR | Usage: %s %s | Cause: Invalid parameters were specified for the create operation. Action: Verify the parameters are correct and retry the operation. |
122501 | ERROR | DB instance "%s" is already protected on "%s". | Cause: An attempt was made to protect an Oracle database instance {sid} that is already under LifeKeeper protection on {server}. Action: You must select a different database instance {sid} for LifeKeeper protection. |
122502 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122503 | ERROR | Unable to locate the oratab file "%s" on "%s". | Cause: The oratab file was not found at the default or alternate locations on {server}. Action: Verify the oratab file exists and has proper permissions for the Oracle user. A valid oratab file is required to complete the "create" operation. |
122504 | ERROR | Unable to determine Oracle user for "%s" on "%s". | Cause: The Oracle Application Recovery Kit was unable to determine the ownership of the Oracle database installation binaries. Action: The owner of the Oracle binaries must be a valid non-root user on {server}. Correct the permissions and ownership of the Oracle database installation and retry the operation. |
122505 | ERROR | The Oracle database "%s" is not running or no open connections are available on "%s". | Cause: The database instance {sid} was not running or connections to the database were not available via the credentials provided. Action: The database instance {sid} must be started on {server} and the proper credentials must be provided for the completion of the "create" operation. |
122506 | ERROR | Unable to determine Oracle dbspaces and logfiles for "%s" on "%s". | Cause: A query to determine the location of required tablespaces, logfiles and related database files failed. This may have been caused by an internal database error. Action: Check the adjacent log messages for further details and related errors. Check the Oracle log (alert.log) and related trace logs (*.trc) for additional information and correct the reported problem(s). |
122507 | ERROR | Unknown chunk type found for "%s" on "%s". | Cause: The specified tablespace, logfile or other required database file is not one of the LifeKeeper supported file or character device types. Action: The specified file {database_file} must reference an existing character device or file. Consult the Oracle installation documentation to recreate the specified file {database_file} as a supported file or character device type. |
122508 | ERROR | DB Chunk "%s" for "%s" on "%s" does not reside on a shared file system. | Cause: The specified tablespace, logfile or other required database file {database_file} does not reside on a file system that is shared with other systems in the cluster. Action: Use the LifeKeeper UI or "lcdstatus (1M)" to verify that communication paths have been properly created. Use "rpm" to verify that the necessary Application Recovery Kits for storage protection have been installed. Verify that the file is, in fact, not on shared storage, and if not, move it to a shared storage device. |
122510 | ERROR | File system create failed for "%s" on "%s". Reason | Cause: LifeKeeper was unable to create the resource {filesystem} on the specified server {server}. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the "create" operation. |
122511 | ERROR | %s | Cause: The message contains the output of the "filesyshier" command. Action: Check the adjacent log messages for further details and related messages. Correct any reported errors. |
122513 | ERROR | Dependency creation between Oracle database "%s (%s)" and the dependent resource "%s" on "%s" failed. Reason | Cause: LifeKeeper was unable to create a dependency between the database resource {tag} and the necessary child resource {childtag}. Action: Check adjacent log messages for further details and related messages. Once any problems have been corrected, it may be possible to create the dependency between {tag} and {childtag} manually. |
122514 | ERROR | Unable to "%s" on "%s" during resource create. | Cause: The Oracle Application Recovery Kit was unable to release the administrative lock using the "rlslocks" command. Action: Check adjacent log messages for further details and related messages. |
122516 | ERROR | Raw device resource created failed for "%s" on "%s". Reason | Cause: LifeKeeper was unable to create the resource {raw device} on the specified server {server}. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the "create" operation. |
122519 | ERROR | In-service attempted failed for tag "%s" on "%s". | Cause: The "perform_action" command for {tag} on {server} failed to start the database {sid}. The in-service operation has failed. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the "create" operation. |
122521 | ERROR | Create of app "%s" on "%s" failed with return code of "%d". | Cause: There was an error running the command "app_create" to create the internal application type. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122522 | ERROR | Create of typ "%s" for app "%s" on "%s" failed with return code of "%d". | Cause: There was an error running the command "typ_create" to create the internal resource type. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122524 | ERROR | Setting "resstate" for resource "%s" on "%s" failed with return code of "%d". | Cause: There was an error running the command "ins_setstate" to set the resource state to {state}. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122525 | ERROR | The values specified for the target and the template servers are the same: "%s". | Cause: The value specified for the target and template servers for the "extend" operation were the same. Action: You must specify the correct parameter for the {target server} and {template server}. The {target server} is the server where the {tag} will be extended. |
122526 | ERROR | Unable to locate the oratab file in "/etc" or in "%s" on "%s". | Cause: The oratab file was not found at the default or alternate locations on {server}. Action: Verify the oratab file exists and has proper permissions for the Oracle user. A valid oratab file is required to complete the "extend" operation. |
122527 | ERROR | Unable to retrieve the Oracle user on "%s". | Cause: An attempt to retrieve the Oracle user from {template server} during a "canextend" or "extend" operation failed. Action: The owner of the Oracle binaries must be a valid user on {target server} and {template server}. Correct the permissions and ownership of the Oracle database installation and retry the operation. |
122528 | ERROR | The Oracle user and/or group information for user "%s" does not exist on the server "%s". | Cause: LifeKeeper is unable to find the Oracle user and/or group information for the Oracle user {user} on the server {server}. Action: Verify the Oracle user {user} exists on the specified {server}. If the user {user} does not exist, it should be created with the same uid and gid on all servers in the cluster. |
122529 | ERROR | The id for user "%s" is not the same on template server "%s" and target server "%s". | Cause: The user id on the target server {target server} for the Oracle user {user} does not match the value of the user {user} on the template server {template server}. Action: The user ids for the Oracle user {user} must match on all servers in the cluster. The user id mismatch should be corrected manually on all servers before retrying the "extend" operation. |
122530 | ERROR | The group id for user "%s" is not the same on template server "%s" and target server "%s". | Cause: The group id on the target server {target server} for the Oracle user {user} does not match the value of the user {user} on the template server {template server}. Action: The group ids for the Oracle user {user} must match on all servers in the cluster. The group id mismatch should be corrected manually on all servers before retrying the "extend" operation. |
122532 | ERROR | No file system or raw devices found to extend for "%s" on "%s". | Cause: There were no dependent file system or raw device resources found for the Oracle resource {tag} on server {template server}. Action: Check adjacent log messages for further details and related messages. |
122533 | WARN | A RAMDISK (%s) was detected in the ORACLE Database configuration for "%s" on "%s". LifeKeeper cannot protect RAMDISK. This RAMDISK resource will not be protected by LifeKeeper! ORACLE hierarchy creation will continue. | Cause: The specified tablespace, logfile or other database file {database_file} was detected as a ramdisk. No protection is available for this type of resource in the current LifeKeeper product. Action: The ramdisk will not be protected. You must manually ensure that the required database file {database_file} will be available during all Oracle database operations. |
122534 | ERROR | Failed to initialize object instance for Oracle sid "%s" on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122537 | ERROR | Update of instance info field for "%s" on "%s" failed (%s). | Cause: There was an error while running the command "ins_setinfo" to update the internal resource information field. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122538 | ERROR | Initial connect with query buffer to database "%s" on "%s" failed, testing output. | Cause: A connection attempt to the Oracle database {sid} to determine the database status has failed. Action: The connection attempt failed with the specified credentials. Check the adjacent log messages for further details and related errors. Check the Oracle log (alert.log) and related trace logs (*.trc) for additional information and correct the reported problem(s). |
122542 | ERROR | The "%s [ %s ]" attempt of the database "%s" appears to have failed on "%s". | Cause: The attempted Oracle action {action} using method {action_method} for the database instance {sid} failed on the server {server}. Action: Check the adjacent log messages for further details and related errors. Check the Oracle log (alert.log) and related trace logs (*.trc) for additional information and correct the reported problem(s). |
122543 | ERROR | All attempts to "%s" database "%s" on "%s" failed | Cause: All efforts to perform the action {action} on the Oracle database {sid} on server {server} have failed. Action: Check the adjacent log messages for further details and related errors. Check the Oracle log (alert.log) and related trace logs (*.trc) for additional information and correct the reported problem(s). |
122544 | ERROR | Update of "%s" sid "%s" on "%s" failed. Reason: "%s" "%s" failed: "%s". | Cause: An unexpected error occurred while attempting to update the oratab entry for the database {sid}. The error occurred while attempting to open the oratab file. Action: The oratab file entry for {sid} will need to be updated manually to turn off the automatic start up of the database at system boot. |
122545 | ERROR | Unable to locate the oratab file in "/etc" or in "%s" on "%s". | Cause: The oratab file was not found at the default or alternate locations on {server}. Action: Verify the oratab file exists and has proper permissions for the Oracle user. A valid oratab file is required to complete the "extend" operation. |
122546 | ERROR | Unable to open file "%s" on "%s" (%s). | Cause: The specified file {file} could not be opened or accessed on the server {server} due to the error {error}. Action: Verify the existence and permissions on the specified file {file}. Check adjacent log messages for further details and related errors. You must correct any reported errors before retrying the operation. |
122547 | ERROR | (cleanUpPids):Forcefully killing hung pid(s):pid(s)="%s" | Cause: The process {pid} failed to respond to the request to terminate gracefully. The process {pid} will be forcefully terminated. Action: Use the command line to verify that the process {pid} has been terminated. Check the adjacent log messages for further details and related messages. |
122548 | ERROR | Unable to locate the DB utility (%s/%s) on this host. | Cause: The Oracle binaries and required database utility {utility} located at {path/utility} were not found on this server {server}. Action: Verify that the Oracle binaries and required software utilities are installed and properly configured on the server {server}. The Oracle binaries must be installed locally on each node or located on shared storage available to all nodes in the cluster. |
122549 | ERROR | Oracle internal error or non-standard Oracle configuration detected. Oracle User and/or Group set to "root". | Cause: The detected ownership of the Oracle database installation resolves to the root user and/or root group. Ownership of the Oracle installation by root is a non-standard configuration. Action: The owner of the Oracle binaries must be a valid non-root user on {server}. Correct the permissions and ownership of the Oracle database installation and retry the operation. |
122550 | ERROR | Initial inspection of "%s" failed, verifying failure or success of received output. | Cause: The previous Oracle query {query} or command {cmd} failed to return success. Action: Check the adjacent log messages for further details and related errors. Check the Oracle log (alert.log) and related trace logs (*.trc) for additional information and correct the reported problem(s). |
122551 | ERROR | Logon failed with "%s" for "%s" on "%s". Please check username/password and privileges. | Cause: The logon with the credentials {credentials} for the database instance {sid} on server {server} failed. An invalid user {user} or password was specified. Action: Verify that the Oracle database user {user} and password {password} are indeed valid. In addition, the Oracle database user {user} must have sufficient privileges for the attempted action. |
122552 | ERROR | %s | Cause: The message contains the output of the "sqlplus" command. Action: Check adjacent log messages for further details and related messages. |
122553 | ERROR | Unable to open file "%s" on "%s" (%s). | Cause: The specified file {file} could not be opened or accessed on the server {server} due to the error {error}. Action: Verify the existence and permissions on the specified file {file}. Check adjacent log messages for further details and related errors. You must correct any reported errors before retrying the operation. |
122554 | ERROR | The tag "%s" on "%s" is not an Oracle instance or it does not exist. | Cause: The specified tag {tag} on server {server} does not refer to an existing and valid Oracle resource instance. Action: Use the UI or "lcdstatus (1M)" to verify the existence of the resource tag {tag}. The resource tag {tag} must be an Oracle resource instance to use the command "ora-display." |
122555 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected while attempting to update the authorized user, password and database role for the Oracle resource instance. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122557 | ERROR | Update of user and password failed for "%s" on "%s". | Cause: A request to update the user and password for the resource tag {tag} failed. The specified credentials failed the initial validation/connection attempt on server {server}. Action: Verify the correct credentials {user/password} were specified for the attempted operation. Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122559 | ERROR | Update of user and password failed for "%s" on "%s". | Cause: The update of the user and password information for the resource tag {tag} on server {server} failed. Action: Verify the correct credentials {user/password} were specified for the attempted operation. Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122562 | ERROR | Unable to find the Oracle executable "%s" on "%s". | Cause: The required Oracle executable {exe} was not found on this server {server}. Action: Verify that the Oracle binaries and required software utilities are installed and properly configured on the server {server}. The Oracle binaries must be installed locally on each node or located on shared storage available to all nodes in the cluster. |
122566 | ERROR | Unable to find Oracle home for "%s" on "%s". | Cause: The Oracle home directory {Oracle home} does not appear to contain files necessary for the proper operation of the Oracle instance {sid}. Action: Verify using the command line that the Oracle home directory {Oracle home} contains the Oracle binaries, a valid spfile{sid}.ora or init{sid}.ora file. |
122567 | ERROR | Oracle SID mismatch. The instance SID "%s" does not match the SID "%s" specified for the command. | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected. The specified internal ID {id} does not match the expected SID {sid}. Action: Verify the parameters are correct. Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122568 | ERROR | DB Processes are not running on "%s". | Cause: A process check for the Oracle instance did not find any processes running on server {server}. Action: If local recovery is enabled, the Oracle instance will be restarted locally. Check adjacent log messages for further details and related messages. |
122572 | ERROR | Failed to create flag "%s" on "%s". | Cause: An unexpected error occurred attempting to create a flag for controlling Oracle local recovery processing causing a failover to the standby node. Action: Check the adjacent log messages for further details and related messages. Correct any reported errors. |
122574 | ERROR | all attempts to shutdown the database %s failed on "%s". | Cause: The shutdown of the Oracle database failed during a local recovery process, most likely because the maximum number of database connections was reached. Action: Check the Oracle logs for connection failures caused by reaching the maximum number of available connections and, if found, consider increasing the value. Additionally, set the tunable LK_ORA_NICE to 1 to prevent connection failures from causing a quickCheck failure followed by a local recovery attempt. |
122597 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected during pre-extend checking. Action: Check the adjacent log messages for further details and related messages. Correct any reported errors. |
122598 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being created while attempting to determine the validity of the Oracle home directory. Action: Check the adjacent log messages for further details and related messages. Correct any reported errors. |
122599 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected while attempting to look up the Oracle user on the template system. Action: Check the adjacent log messages for further details and related messages. Correct any reported errors. |
122600 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected while attempting to display the resource properties. Action: Check the adjacent log messages for further details and related messages. Correct any reported errors before retrying the display of the resource properties. |
122601 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected while attempting to check for valid database authorization. Action: Check the adjacent log messages for further details and related messages. Correct any reported errors before retrying the command. |
122603 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected while attempting to perform health checks on the Oracle resource instance. Action: Check that correct arguments were passed to the quickCheck command and also check the adjacent log messages for further details and related messages. Correct any reported errors before retrying the operation. |
122604 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected while attempting to perform a local recovery on the Oracle resource instance. Action: Check that correct arguments were passed to the "recover" command, and also check the adjacent log messages for further details and related messages. Correct any reported errors before retrying the recover. |
122606 | ERROR | The Oracle database "%s" is not running or no open connections are available on "%s". | Cause: The database instance {sid} was not running or connections to the database are not available via the credentials provided. Action: The database instance {sid} must be started on {server} and the proper credentials must be provided for the completion of the selected operation. |
122607 | ERROR | The Oracle database "%s" is not running or no open connections are available on "%s". | Cause: The database instance {sid} was not running or connections to the database are not available via the credentials provided. Action: The database instance {sid} must be started on {server} and the proper credentials must be provided for the completion of the selected operation. |
122608 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: The "remove" operation failed to create the resource object instance required to take the Oracle resource Out of Service. Action: Check that correct arguments were passed to the "remove" command and also check the adjacent log messages for further details and related messages. Correct any reported errors before retrying the remove. |
122609 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: The "restore" operation failed to create the resource object instance required to put the Oracle resource In Service. Action: Check that correct arguments were passed to the "restore" command and also check the adjacent log messages for further details and related messages. Correct any reported errors before retrying the "restore." |
122610 | ERROR | Unable to "%s" on "%s" during resource create. | Cause: The Oracle Application Recovery Kit was unable to create the administrative lock using the "getlocks" command during resource creation. Action: Check adjacent log messages for further details and related messages. Correct any reported errors before retrying the create. |
122611 | ERROR | %s | Cause: The requested dependency creation between the parent Oracle resource and the child File System resource failed. Action: Check adjacent log messages for further details and related messages. Correct any reported errors before retrying the create operation. |
122612 | ERROR | %s | Cause: The requested dependency creation between the parent Oracle resource and the child Raw resource failed. Action: Check adjacent log messages for further details and related messages. Correct any reported errors before retrying the create operation. |
122613 | ERROR | %s | Cause: The requested dependency creation between the parent Oracle resource and the child Raw resource failed. Action: Check adjacent log messages for further details and related messages. Correct any reported errors before retrying the create operation. |
122614 | ERROR | %s | Cause: The requested dependency creation between the parent Oracle resource and the child Listener resource failed. Action: Check adjacent log messages for further details and related messages. Correct any reported errors before retrying the create operation. |
122616 | ERROR | %s | Cause: The requested start up or shutdown of the Oracle database failed. Action: Check adjacent log messages for further details and related messages. Correct any reported errors before retrying the "restore" or "remove" operation. |
122618 | ERROR | Dependency creation between Oracle database "%s (%s)" and the dependent resource "%s" on "%s" failed. Reason | Cause: LifeKeeper was unable to create a dependency between the database resource {tag} and the necessary child resource {childtag}. Action: Check adjacent log messages for further details and related messages. Once any problems have been corrected, it may be possible to create the dependency between {tag} and {childtag} manually. |
122619 | ERROR | Dependency creation between Oracle database "%s (%s)" and the dependent resource "%s" on "%s" failed. Reason | Cause: LifeKeeper was unable to create a dependency between the database resource {tag} and the necessary child resource {childtag}. Action: Check adjacent log messages for further details and related messages. Once any problems have been corrected, it may be possible to create the dependency between {tag} and {childtag} manually. |
122625 | ERROR | Unable to find the Oracle executable "%s" on "%s". | Cause: The quickCheck process was unable to find the Oracle executable "sqlplus." Action: Check the Oracle configuration and also check adjacent log messages for further details and related messages. Correct any reported problems. |
122626 | ERROR | Unable to find the Oracle executable "%s" on "%s". | Cause: The remove process was unable to find the Oracle executable "sqlplus." Action: Check the Oracle configuration and also check adjacent log messages for further details and related messages. Correct any reported problems. |
122627 | ERROR | Unable to find the Oracle executable "%s" on "%s". | Cause: The restore process was unable to find the Oracle executable "sqlplus." Action: Check the Oracle configuration and also check adjacent log messages for further details and related messages. Correct any reported problems. |
122628 | ERROR | Unable to find the Oracle executable "%s" on "%s". | Cause: The recover process was unable to find the Oracle executable "sqlplus." Action: Check the Oracle configuration and also check adjacent log messages for further details and related messages. Correct any reported problems. |
122632 | ERROR | Oracle SID mismatch. The instance SID "%s" does not match the SID "%s" specified for the command. | Cause: During a remove, the resource instance {sid} passed to the remove process does not match internal resource instance information for the {sid}. Action: Check adjacent log messages for further details and related messages. Correct any reported errors. |
122633 | ERROR | Oracle SID mismatch. The instance SID "%s" does not match the SID "%s" specified for the command. | Cause: During a restore, the resource instance {sid} passed to restore does not match internal resource instance information for the {sid}. Action: Check adjacent log messages for further details and related messages. Correct any reported errors. |
122634 | ERROR | Oracle SID mismatch. The instance SID "%s" does not match the SID "%s" specified for the command. | Cause: During resource recovery, the resource instance {sid} passed to recovery does not match internal resource instance information for the {sid}. Action: Check adjacent log messages for further details and related messages. Correct any reported errors. |
122636 | ERROR | END failed hierarchy "%s" of resource "%s" on server "%s" with return value of %d | Cause: The create of the Oracle resource hierarchy {tag} failed on {server}. Action: Check adjacent log messages for further details and related messages. Correct any reported errors. |
122638 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | Cause: The create action for the Oracle database resource {tag} on server {server} failed. The signal {sig} was received by the create process. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122640 | ERROR | Error creating resource "%s" on server "%s" | Cause: An unexpected error occurred attempting to create the Oracle resource instance {tag} on {server}. Action: Check adjacent log messages for further details and related messages. Correct any reported errors. |
122641 | ERROR | Error creating resource "%s" on server "%s" | Cause: An unexpected error occurred attempting to create the Oracle resource instance {tag} on {server}. Action: Check adjacent log messages for further details and related messages. Correct any reported errors. |
122642 | ERROR | Error creating resource "%s" on server "%s" | Cause: An unexpected error occurred attempting to create the Oracle resource instance {tag} on {server}. Action: Check adjacent log messages for further details and related messages. Correct any reported errors. |
122643 | ERROR | Cannot extend resource "%s" to server "%s" | Cause: LifeKeeper was unable to extend the resource {resource} on {server}. Action: Check the adjacent log messages for further details and related messages. Correct any reported errors. |
122644 | ERROR | Error getting resource information for resource "%s" on server "%s" | Cause: An unexpected error occurred attempting to retrieve resource instance information for {tag} on {server}. Action: Check adjacent log messages for further details and related messages. Correct any reported errors and retry the extend. |
122645 | ERROR | Cannot access canextend script "%s" on server "%s" | Cause: LifeKeeper was unable to run pre-extend checks because it was unable to find the "canextend" script on {server} for a dependent child resource. Action: Check your LifeKeeper configuration. |
122646 | ERROR | Error getting resource information for resource "%s" on server "%s" | Cause: An unexpected error occurred attempting to retrieve resource instance information for {tag} on {server}. Action: Check adjacent log messages for further details and related messages. Correct any reported errors and retry the extend. |
122647 | ERROR | Cannot extend resource "%s" to server "%s" | Cause: LifeKeeper was unable to extend the resource {resource} on {server}. Action: Check adjacent log messages for further details and related messages. Correct any reported errors. |
122648 | ERROR | Resource with either matching tag "%s" or id "%s" already exists on server "%s" for App "%s" and Type "%s" | Cause: During the database resource extension, a resource instance was found using the same {tag} and/or {id} but with a different resource application and type. Action: Resource IDs must be unique. The resource instance with the ID matching the Oracle resource instance must be removed. |
122649 | ERROR | Error creating resource "%s" on server "%s" | Cause: An unexpected error occurred attempting to create the Oracle resource instance {tag} on {server}. Action: Check adjacent log messages for further details and related messages. Correct any reported errors. |
122650 | ERROR | Cannot access extend script "%s" on server "%s" | Cause: The request to extend the database resource {resource} to {server} failed because it was unable to find the script {extend} on {server} for a dependent child resource. Action: Check your LifeKeeper configuration. |
122651 | ERROR | Cannot extend resource "%s" to server "%s" | Cause: The request to extend the database resource {resource} to {server} failed because of an error attempting to extend a dependent child resource. Action: Check the adjacent log messages for further details and related messages. Correct any reported errors. |
122654 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | Cause: The health check for the database {sid} was terminated because the quickCheck process received a signal. This is most likely caused by the quickCheck process requiring more time to complete than was allotted. Action: The health check time for an Oracle resource is controlled by the tunable value ORACLE_QUICKCHECK_TIMEOUT. Set it to a value greater than 45 seconds to allow more time for the health check process to complete. |
122655 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | Cause: The request to take database {sid} "Out of Service" was terminated because the remove process received a signal. This is most likely caused by the remove process requiring more time to complete than was allotted. Action: The remove time for an Oracle resource is controlled by the tunable value ORACLE_REMOVE_TIMEOUT. Set the tunable to a value greater than 240 seconds to allow more time for the remove process to complete. |
122659 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | Cause: The request to place database {sid} "In Service" was terminated because the restore process received a signal. This is most likely caused by the restore process requiring more time to complete than was allotted. Action: The restore time for an Oracle resource is controlled by the tunable value ORACLE_RESTORE_TIMEOUT. Set the tunable to a value greater than 240 seconds to allow more time for the restore process to complete. |
122663 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | Cause: The recovery of the failed database was terminated because the recovery process received a signal. This is most likely caused by the recovery process requiring more time to complete than was allotted. Action: The recovery time for an Oracle resource is controlled by the tunable values ORACLE_RESTORE_TIMEOUT and ORACLE_REMOVE_TIMEOUT. Set one or both of these to a value greater than 240 seconds to allow more time for a recovery to complete. |
122670 | ERROR | Update of "%s" sid "%s" on "%s" failed. Reason: "%s" "%s" failed: "%s". | Cause: An unexpected error occurred while attempting to update the oratab entry for the database {sid}. The error occurred while attempting to open the temporary file used in the update process. Action: The oratab file entry for {sid} will need to be updated manually to turn off the automatic startup of the database at system boot. |
122671 | ERROR | Update of "%s" sid "%s" on "%s" failed. Reason: "%s" "%s" failed: "%s". | Cause: An unexpected error occurred while attempting to update the oratab entry for the database {sid}. The error occurred while attempting to close the temporary file used in the update process. Action: The oratab file entry for {sid} will need to be updated manually to turn off the automatic startup of the database at system boot. |
122672 | ERROR | Update of "%s" sid "%s" on "%s" failed. Reason: "%s" "%s" failed: "%s". | Cause: An unexpected error occurred while attempting to update the oratab entry for the database {sid}. The error occurred while attempting to rename the temporary file back to oratab. Action: The oratab file entry for {sid} will need to be updated manually to turn off the automatic startup of the database at system boot. |
122673 | ERROR | Unable to log messages queued while running as oracle user %s on %s. Reason: $! | Cause: An unexpected error {reason} occurred while attempting to add messages to the log file. These messages were generated while running as the Oracle user. Action: Review the reason for the failure and take corrective action. |
122674 | ERROR | Unable to open %s Reason: %s. | Cause: An unexpected error occurred while attempting to open a connection to the Oracle database and run the database {cmd}. Action: Check adjacent log messages for further details and related messages. Additionally, check the Oracle log (alert.log) and related trace logs (*.trc) for additional information and correct any reported problems. |
122680 | ERROR | Unable to find %s/dbs/init%s.ora, %s/admin/%s/pfile, or spfile for %s on %s | Cause: The Oracle home directory {Oracle home} does not appear to contain files necessary for the proper operation of the Oracle instance {sid}. Action: Verify using the command line that the Oracle home directory {Oracle home} contains the Oracle binaries and a valid spfile{sid}.ora or init{sid}.ora file. |
122681 | ERROR | Failed to create object instance for Oracle on "%s". | Cause: There was an unexpected error creating an internal representation of the Oracle instance being protected. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
122682 | ERROR | Unable to find the Oracle executable "%s" on "%s". | Cause: The required Oracle executable {exe} was not found on this server {server}. Action: Verify that the Oracle binaries and required software utilities are installed and properly configured on the server {server}. The Oracle binaries must be installed locally on each node or located on shared storage available to all nodes in the cluster. |
122683 | ERROR | Backup node %s is unreachable; abort username/password changes. | Cause: Backup node {node} is currently unreachable. Pending changes to username/password have been aborted. Action: Ensure that SIOS LifeKeeper is running on the given backup node {node} and that all communication paths are up, then retry the operation. |
122684 | ERROR | The restore of %s has timed out on server %s. The default TIMEOUT is 300 seconds. To increase the TIMEOUT, set ORACLE_RESTORE_TIMEOUT in /etc/default/LifeKeeper. | Cause: The restore for Oracle resource {resource} has timed out on server {server}. This occurs when the in-service operation has taken longer than the time allotted. Action: The restore time for an Oracle resource is controlled by the tunable value ORACLE_RESTORE_TIMEOUT. Set the tunable to a value greater than 240 seconds to allow more time for the restore process to complete. |
122685 | ERROR | The remove of %s has timed out on server %s. The default TIMEOUT is 300 seconds. To increase the TIMEOUT, set ORACLE_REMOVE_TIMEOUT in /etc/default/LifeKeeper. | Cause: The remove for Oracle resource {resource} has timed out on server {server}. This occurs when the out-of-service operation has taken longer than the time allotted. Action: The remove time for an Oracle resource is controlled by the tunable value ORACLE_REMOVE_TIMEOUT. Set the tunable to a value greater than 240 seconds to allow more time for the remove process to complete. |
122686 | ERROR | The quickCheck of %s has timed out on server %s. The default TIMEOUT is 45 seconds. To increase the TIMEOUT, set ORACLE_QUICKCHECK_TIMEOUT in /etc/default/LifeKeeper. | Cause: The quickCheck for Oracle resource {resource} has timed out on server {server}. This occurs when the health check has taken longer than the time allotted. Action: The health check time for an Oracle resource is controlled by the tunable value ORACLE_QUICKCHECK_TIMEOUT. Set it to a value greater than 45 seconds to allow more time for the health check process to complete. |
122687 | ERROR | Usage: %s %s | Cause: Invalid parameters were specified for the getalertlog script. Action: The tag name of the Oracle resource must be provided to the script as the first command line parameter. Verify the parameters are correct and retry the operation. |
122702 | ERROR | Failed start OHAS. | |
122704 | ERROR | Failed start ASM. | |
122706 | ERROR | Failed start Diskgroup. | |
122708 | ERROR | Failed stop Diskgroup. | |
122710 | ERROR | Failed stop ASM. | |
122712 | ERROR | Failed stop OHAS. | |
122713 | ERROR | OHAS is not running. | |
122714 | ERROR | ASM is not running. | |
122715 | ERROR | Diskgroup is not running. | |
122716 | ERROR | $usage | |
122717 | ERROR | Template resource \"$template_tag\" on server \"$template_sys\" does not exist | |
122718 | ERROR | Cannot extend resource \"$template_tag\" to server \"$me\" | |
122719 | ERROR | $usage | |
122720 | ERROR | END failed create of \"$Tag\" due to a \"$sig\" signal | |
122722 | ERROR | Unable to getlocks on $me during resource create. | |
122725 | ERROR | Error creating resource $Tag. Error ($rc | |
122726 | ERROR | END failed hierarchy create of resource $Tag with return value of $ecode. | |
122728 | ERROR | Unable to rlslocks on $me during resource create. | |
122729 | ERROR | $usage | |
122730 | ERROR | END failed extend of \"$Tag\" due to a \"$sig\" signal | |
122732 | ERROR | Template resource \"$TemplateTag\" on server \"$TemplateSys\" does not exist | |
122733 | ERROR | Error creating resource \"$Tag\" on server \"$me\" | |
122735 | ERROR | END failed hierarchy extend of resource $Tag with return value of $ecode. | |
122736 | ERROR | $usage | |
122737 | ERROR | Failed to create object instance for Oracle ASM on $oracle::me | |
122738 | ERROR | Usage : $cmd $usage | |
122739 | ERROR | Failed to create object instance for Oracle ASM on \"$me\". | |
122740 | ERROR | $usage | |
122742 | ERROR | backup node $back_dead_name is unreachable; abort protection Diskgroup changes. | |
122744 | ERROR | Update of protection Diskgroup failed for \"$tag\" on \"$oarcleasm::me\" | |
122746 | ERROR | Update of protection Diskgroup failed for \"$tag\" on \"$remote\" | |
122750 | ERROR | Failed to init the object. | |
122751 | ERROR | Usage: %s %s | |
123006 | FATAL | Unknown version %s of IP address | Cause: The IP address does not appear to be valid for either IPv4 or IPv6. Action: Provide a valid IP address. |
123008 | ERROR | No pinglist found for %s. | Cause: Problem while opening the pinglist for this IP address. Action: Make sure you have provided a pinglist for this IP address. |
123009 | ERROR | List ping test failed for virtual IP %s | Cause: No response was received from any of the addresses in the ping list. Action: Check network connectivity of this node and the systems on which the IPs in the ping list reside. |
123013 | ERROR | Link check failed for virtual IP %s on interface %s. | Cause: The requested interface is showing ‘NO-CARRIER’, indicating that no link is present on the physical layer connection. Action: Check the physical connections for the interface and bring the physical layer link up. |
123015 | ERROR | Link check failed for virtual IP %s on interface %s. | Cause: The requested interface is a bonded interface, and one of the slaves is showing ‘NO-CARRIER’, indicating that no link is present on the physical layer connection. Action: Check the physical connections for the slave interface and bring the physical layer link up. |
123024 | ERROR | IP address seems to still exist somewhere else. | Cause: The IP address appears to be in use elsewhere on the network. Action: Either select a different IP address to use or locate and disable the current use of this IP address. |
123037 | ERROR | must specify machine name containing primary hierarchy | Cause: Not enough arguments were provided to creIPhier. Action: Supply all of the needed arguments to creIPhier. |
123038 | ERROR | must specify IP resource name | Cause: Not enough arguments were passed to creIPhier. Action: Supply all of the needed arguments to creIPhier. |
123039 | ERROR | must specify primary IP Resource tag | Cause: The argument specifying the primary IP Resource tag was missing from the "creIPhier" command. Action: Supply all of the needed arguments. |
123042 | ERROR | An unknown error has occurred in utility validmask on machine %s. | Cause: There was an unexpected error running the "validmask" utility. Action: Check adjacent log messages for additional details. |
123045 | ERROR | An unknown error has occurred in utility getlocks. | Cause: There was an unexpected error running the "getlocks" utility. Action: Check adjacent log messages for additional details. |
123053 | ERROR | Cannot resolve hostname %s | Cause: A hostname was provided for the IP address, but the system was unable to resolve the name to an IP address. Action: Check the correctness of the hostname and verify that name resolution (DNS or /etc/hosts) is working correctly and returns the IP for the hostname. |
123055 | ERROR | An unknown error has occurred in utility %s on machine %s. | Cause: There was a failure while creating the IP resource. Action: Check the logs for related errors and try to resolve the reported problem. |
123056 | ERROR | create ip hierarchy failure: perform_action failed | Cause: Unexpected error trying to restore the IP address during creation. Action: Check adjacent log messages for additional details. |
123059 | ERROR | Resource already exists on machine %s | Cause: Attempted to create an IP address that already exists. Action: Reuse the existing resource, manually remove the existing IP address, or use a different IP address. |
123060 | ERROR | ins_create failed on machine %s | Cause: An unexpected failure occurred while creating an IP resource. Action: Check adjacent log messages for further details. |
123064 | ERROR | An unknown error has occurred in utility %s on machine %s. | Cause: There was a failure while creating a dependency for the IP resource. Action: Check the logs for related errors and try to resolve the reported problem. |
123066 | ERROR | An error occurred during creation of LifeKeeper application=comm on %s. | Cause: A failure occurred while calling "app_create." Action: Check the logs for related errors and try to resolve the reported problem. |
123068 | ERROR | An error occurred during creation of LifeKeeper resource type=ip on %s. | Cause: A failure occurred while calling "typ_create." Action: Check the logs for related errors and try to resolve the reported problem. |
123089 | ERROR | Link check failed for virtual IP %s on interface %s. | Cause: The command "nmcli connection show --active" did not report the interface as active. Action: Check the error log. |
123091 | ERROR | the link for interface %s is down | Cause: The requested interface is showing ‘NO-CARRIER’, indicating that no link is present on the physical layer connection. Action: Check the physical connections for the interface and bring the physical layer link up. |
123093 | ERROR | the ping list check failed | Cause: No response was received from any of the addresses in the ping list. Action: Check network connectivity of this node and the systems on which the IPs in the ping list reside. |
123094 | ERROR | IP address is not assigned to interface %s. | |
123095 | ERROR | broadcast ping failed | Cause: No replies were received from a broadcast ping. Action: Verify that at least one host on the subnet will respond to broadcast pings. Verify that virtual IP is on the correct network interface. Consider using a pinglist instead of a broadcast ping. |
123096 | ERROR | $msg | Cause: The broadcast ping used to determine the viability of the virtual IP failed. Action: Please ensure that the ping list for this resource is properly configured in the properties panel or that broadcast ping checking is disabled by adding NOBCASTPING=1 to the /etc/default/LifeKeeper configuration file. |
123097 | ERROR | exec_list_ping(): broadcast ping failed. | Cause: The broadcast ping used to determine the viability of the virtual IP failed. Action: Ensure that the ping list for this resource is properly configured in the properties panel or that broadcast ping checking is disabled by adding NOBCASTPING=1 to the /etc/default/LifeKeeper configuration file. |
123101 | ERROR | IP delete failed (ip -$ver addr delete $self->{‘ip’}/$prefix dev $self->{‘device’}), returned $ret | |
123102 | WARN | IP is already configured with wrong netmask: expected %s, found %s. | Cause: The IP was added outside of LifeKeeper with the wrong netmask. Action: Do not start the IP manually from the command line, and verify there is no ifcfg file in sysconfig that will configure the IP during bootup. |
123299 | ERROR | Unable to open %s. Reason %s | |
123410 | ERROR | Usage error OSUquickCheck | Cause: OSUquickCheck was called without a tag or ID. Action: Call OSUquickCheck with a valid tag or ID. |
123411 | ERROR | OSUquickCheck: both tag and id name not specified | Cause: OSUquickCheck requires either the tag or the resource ID. |
123412 | ERROR | resource $Tag not found on local server | Cause: OSUquickCheck was not able to find the resource for the tag or ID. |
123414 | ERROR | The link for network interface $IPObj->{‘device’} is down | Cause: The link is down and the attempt to bring up the link failed. |
123415 | ERROR | No pinglist found for $IPObj->{‘ipaddr’} | |
123416 | ERROR | List ping test failed for virtual IP $IPObj->{‘ipaddr’} | |
124004 | FATAL | resource tag name not specified | Cause: Invalid arguments were specified for the "quickCheck" operation. Action: Ensure that the correct arguments are passed. |
124005 | FATAL | resource id not specified | Cause: Invalid arguments were specified for the "quickCheck" operation. Action: Ensure that the correct arguments are passed. |
124007 | FATAL | Failed to get resource information | Cause: The filesystem resource’s info field does not contain the correct information. Action: Put the correct information in the resource’s info field, or restore the system from a recent "lkbackup" to recover the original info field. |
124008 | ERROR | getId failed | Cause: The filesystem resource could not find the underlying disk device. Action: Check adjacent log messages for further details. Verify that the resource hierarchy is valid and that all required storage kits are installed. |
124009 | ERROR | LifeKeeper protected filesystem is in service but quickCheck detects the following error | Cause: The filesystem kit has found something wrong with the resource. Action: Check the messages immediately following this one for more details. |
124010 | ERROR | \"$id\" is not mounted | Cause: The filesystem resource is no longer mounted. Action: No action is required. Allow local recovery to remount the resource. |
124011 | ERROR | \"$id\" is mounted but with the incorrect mount options (current mount option list: $mntopts, expected mount option list: $infoopts | Cause: The filesystem resource is mounted incorrectly. Action: No action is required. Allow local recovery to remount the resource. |
124012 | ERROR | \"$id\" is mounted but on the wrong device (current mount device: $tmpdev, expected mount device: $dev | Cause: The filesystem resource has the wrong device mounted. Action: No action is required. Allow local recovery to remount the resource. |
124015 | ERROR | LifeKeeper protected filesystem \"$tag\" ($id) is $percent% full ($blocksfree free blocks). | Cause: The filesystem is getting full. Action: Remove or migrate data from the filesystem. |
124016 | WARN | LifeKeeper protected filesystem \"$tag\" ($id) is $percent% full ($blocksfree free blocks). | Cause: The filesystem is getting full. Action: Remove or migrate data from the filesystem. |
124020 | FATAL | cannot find device information for filesystem $id | Cause: The filesystem resource could not find the underlying disk device. Action: Check adjacent log messages for further details. Verify that the resource hierarchy is valid and that all required storage kits are installed. |
124029 | ERROR | Failed to find child resource. | Cause: The filesystem resource could not determine its underlying disk resource. Action: Ensure that the resource hierarchy is correct. |
124032 | FATAL | Script has hung. Exiting.$msg | Cause: Processes had files open on a mounted filesystem that needed to be unmounted. Killing those processes has taken too long. Action: If this error continues, try to temporarily stop all software that may be using the mount point to allow it to be unmounted. If the filesystem still cannot be unmounted, contact Support. |
124042 | ERROR | file system $fs failed unmount$continue | Cause: Processes had files open on a mounted filesystem that needed to be unmounted. It can take multiple attempts to clear those processes. Action: No action is required. Allow the process to continue. |
124046 | ERROR | file system $fsname failed unmount | Cause: A filesystem could not be unmounted. Action: If this error continues, try to temporarily stop all software that may be using the mount point to allow it to be unmounted. If the filesystem still cannot be unmounted, contact Support. |
124049 | ERROR | Local recovery of resource has failed (err=$err | Cause: A filesystem resource has a problem that cannot be repaired locally. Action: No action is required. Allow the resource to be failed over to another system. |
124051 | WARN | getId failed, try count : $cnt/$try | Cause: The file system quickCheck was not able to get the underlying device name for the device resource. Action: Check that the correct device is mounted. |
124052 | ERROR | \"$id\" is mounted but filesystem is shutdown state. | Cause: The xfs_info command failed. Action: Check that the file system is mounted. Check the system error log for errors. |
124054 | ERROR | Failed to change mount option from \"$old_opts\" to \"$new_opts\" for migration | Cause: The mount options were not migrated. Action: Check the error log. |
124103 | ERROR | $ERRMSG Script was terminated for unknown reason | Cause: This message should not occur under normal circumstances. Action: Check adjacent log messages for further details. |
124104 | ERROR | $ERRMSG Required template machine name is null | Cause: Invalid arguments were specified for the canextend operation. Action: Ensure that the arguments are correct. If this error happens during normal operation, please contact Support. |
124105 | ERROR | $ERRMSG Required template resource tag name is null | Cause: Invalid arguments were specified for the canextend operation. Action: Ensure that the arguments are correct. If this error happens during normal operation, please contact Support. |
124106 | ERROR | $ERRMSG Unable to access template resource \"$TemplateTagName\ | Cause: The resource’s underlying disk information cannot be determined. Action: Ensure the hierarchy is correct on the template system before extending. |
124107 | ERROR | $ERRMSG Resource \"$TemplateTagName\" must have one and only one device resource dependency | Cause: The resource has too many underlying devices in the hierarchy. Action: Ensure the hierarchy is correct on the template system before extending. |
124108 | ERROR | $ERRMSG Unable to access template resource \"$TemplateTagName\ | Cause: The resource cannot be found on the template system. Action: Ensure the hierarchy is correct on the template system before extending. |
124109 | ERROR | $ERRMSG Can not access canextend for scsi/$DeviceResType resources on machine \"$TargetSysName\ | Cause: The target system is missing some required components. Action: Ensure that the target system has all the correct kits installed and licensed. |
124110 | ERROR | $ERRMSG Either filesystem \"$TemplateLKId\" is not mounted on \"$TemplateSysName\" or filesystem is not shareable with \"$TargetSysName\ | Cause: The filesystem isn’t in service on the template system or doesn’t meet the requirements for extending to the target system. Action: Make sure the resource is in service on the template system and review the product documentation regarding the requirements for extending filesystems. |
124111 | ERROR | $ERRMSG File system type \"${FSType}\" is not supported by the kernel currently running on \"${TargetSysName}\ | Cause: The filesystem’s type cannot be mounted on the target system due to lack of kernel support. Action: Ensure that the target system has all its kernel modules configured correctly before extending the resource. |
124112 | ERROR | must specify machine name containing primary hierarchy | Cause: Invalid arguments were specified for the creFShier operation. Action: If this error happens during normal operation, please contact Support. |
124113 | ERROR | must specify primary ROOT tag | Cause: Invalid arguments were specified for the creFShier operation. Action: If this error happens during normal operation, please contact Support. |
124114 | ERROR | must specify primary mount point | Cause: Invalid arguments were specified for the creFShier operation. Action: If this error happens during normal operation, please contact Support. |
124115 | ERROR | must specify primary switchback type | Cause: Invalid arguments were specified for the creFShier operation. Action: If this error happens during normal operation, please contact Support. |
124118 | ERROR | dep_remove failure on machine \""$PRIMACH"\" for parent \"$PRITAG\" and child \"$DEVTAG.\ | Cause: Cleanup after a dependency creation failed. Action: Check adjacent log messages for further details. |
124119 | ERROR | ins_remove failure on machine \""$PRIMACH"\" for \"$PRITAG.\ | Cause: Cleanup after an instance creation failed. Action: Check adjacent log messages for further details. |
124121 | ERROR | ins_remove failure on machine \""$PRIMACH"\ | Cause: Cleanup after a resource creation failed. Action: Check adjacent log messages for further details. |
124122 | ERROR | $ERRMSG Script was terminated for unknown reason | Cause: This message should not occur under normal circumstances. Action: Check adjacent log messages for further details. |
124123 | ERROR | $ERRMSG Required template machine name is null | Cause: Invalid arguments were specified for the depstoextend operation. Action: Ensure the script is called correctly. If this error happens during normal operation, please contact Support. |
124124 | ERROR | $ERRMSG Required template resource tag name is null | Cause: Invalid arguments were specified for the depstoextend operation. Action: Ensure the script is called correctly. If this error happens during normal operation, please contact Support. |
124125 | ERROR | $ERRMSG Unable to access template resource \"$TemplateTagName\ | Cause: The resource was unable to locate its underlying disk resource. Action: Ensure the hierarchy and all dependencies are correct before extending. |
124126 | ERROR | unextmgr failure on machine \""$PRIMACH"\ | Cause: Cleanup after a failed resource extend operation failed. Action: Manually clean up any remaining resources and check adjacent log messages for further details. |
124128 | ERROR | unextmgr failure on machine \""$PRIMACH"\" for \"$PRITAG.\ | Cause: Cleanup after a failed resource extend operation failed. Action: Manually clean up any remaining resources and check adjacent log messages for further details. |
124129 | ERROR | $ERRMSG Script was terminated for unknown reason | Cause: This message should not occur under normal circumstances. Action: Look for additional log messages for more details. |
124130 | ERROR | $ERRMSG Required template machine name is null | Cause: Invalid arguments were specified for the extend operation. Action: Ensure the script is called correctly. If this error happens during normal operation, please contact Support. |
124131 | ERROR | $ERRMSG Required template resource tag name is null | Cause: Invalid arguments were specified for the extend operation. Action: Ensure the script is called correctly. If this error happens during normal operation, please contact Support. |
124132 | ERROR | $ERRMSG Required target mount point is null | Cause: Invalid arguments were specified for the extend operation. Action: Ensure the script is called correctly. If this error happens during normal operation, please contact Support. |
124133 | ERROR | $ERRMSG Unable to access template resource \"$TemplateTagName\ | Cause: The tag being extended doesn’t exist on the template system. Action: Ensure that the hierarchy is correct on the template system before extending. |
124134 | ERROR | $ERRMSG Detected conflict in expected tag name \"$TargetTagName\" on target machine. | Cause: A resource already exists on the target system with the same tag as the resource being extended. Action: Recreate one of the conflicting resources with a different tag. |
124135 | ERROR | $ERRMSG Resource \"$TemplateTagName\" does not have required device resource dependency or unable to access this resource on template machine. | Cause: The resource or its underlying disk resource cannot be found on the template system. Action: Ensure that the hierarchy is correct on the template system before extending. |
124136 | ERROR | $ERRMSG Resource \"$TemplateTagName\" must have one and only one device resource dependency | Cause: The resource has multiple underlying devices in the hierarchy on the template system. Action: Ensure the hierarchy is correct before extending and that the filesystem resource only depends on a single disk resource. |
124137 | ERROR | $ERRMSG Can not access extend for scsi/$DeviceResType resources on machine \"$TargetSysName\ | Cause: The files required to support the given storage type aren’t available on the target system. Action: Ensure that the required kits are installed on the target system and licensed. |
124138 | ERROR | $ERRMSG Unable to access target device resource \"$DeviceTagName\" on machine \"$TargetSysName\ | Cause: The required underlying disk resource doesn’t exist on the target system. Action: Check adjacent log messages for further details and ensure that the target system is properly configured for hosting the resources being extended. |
124141 | ERROR | $ERRMSG Unable to find mount point \"$TemplateLKId\" mode on template machine | Cause: The details of the mount point on the template system cannot be determined. Action: Ensure that the resource is in service and accessible on the template system before extending. |
124142 | ERROR | $ERRMSG Unable to create or access mount point \"$TargetLKId\" on target machine | Cause: The mount point could not be created on the target system. Action: Ensure that the mount point’s parent directory exists and is accessible on the target system. |
124143 | ERROR | $ERRMSG Two or more conflicting entries found in /etc/fstab on \"$TargetSysName\ | Cause: The device or mount point appears to be mounted more than once on the target system. Action: Ensure that the mount point is not mounted on the target system before extending. |
124144 | ERROR | $ERRMSG Failed to create resource instance on \"$TargetSysName\ | Cause: The resource creation on the target system failed. Action: Check adjacent log messages for further details. Make sure to check the logs on the target server. |
124145 | ERROR | $ERRMSG Failed to set resource instance state for \"$TargetTagName\" on \"$TargetSysName\ | Cause: The resource state could not be changed to OSU on the target system. Action: Check adjacent log messages for further details. |
124146 | ERROR | must specify machine name containing primary hierarchy | Cause: Invalid arguments were specified for the filesyshier operation. Action: Ensure the script is called correctly. If this error happens during normal operation, please contact Support. |
124147 | ERROR | must specify primary mount point | Cause: Invalid arguments were specified for the filesyshier operation. Action: Ensure the script is called correctly. If this error happens during normal operation, please contact Support. |
124149 | ERROR | create file system hierarchy failure | Cause: The process of finding the resource instance failed. Action: Check adjacent log messages for further details. |
124150 | ERROR | create file system hierarchy failure | Cause: The system failed to read the /etc/mtab file. Action: Check adjacent log messages for further details. |
124152 | ERROR | create file system hierarchy failure | Cause: The underlying disk resource could not be found. Action: Check adjacent log messages for further details. |
124153 | ERROR | create file system hierarchy failure | Cause: Creating the filesystem resource failed. Action: Check adjacent log messages for further details. |
124154 | ERROR | create file system hierarchy failure | Cause: The info field for the resource could not be updated. Action: Check adjacent log messages for further details. |
124155 | ERROR | create file system hierarchy failure | Cause: The switchback strategy could not be set on the resource. Action: Check adjacent log messages for further details. |
124157 | ERROR | create file system hierarchy failure \(conflicting entries in /etc/fstab\ | Cause: The mount point could not be removed from the /etc/fstab file. Action: Check adjacent log messages for further details. |
124160 | ERROR | Unknown error in script filesysins, err=$err | Cause: This message should not occur under normal circumstances. Action: Check adjacent log messages for further details. |
124161 | ERROR | create filesys instance – existid – failure | Cause: This message should not occur under normal circumstances. Action: Check adjacent log messages for further details. |
124163 | ERROR | create filesys instance – ins_list – failure | Cause: Checking for an existing resource failed. Action: Check adjacent log messages for further details. |
124164 | ERROR | create filesys instance – newtag – failure | Cause: The system failed to generate a suggested tag for the resource. Action: If this error happens during normal operation, contact Support. |
124168 | ERROR | create filesys instance – ins_create – failure | Cause: The filesystem resource could not be created. Action: Check adjacent log messages for further details. |
124169 | ERROR | filesys instance – ins_setstate – failure | Cause: The new filesystem resource’s state could not be initialized. Action: Check adjacent log messages for further details. |
124173 | ERROR | create filesys instance – dep_create – failure | Cause: The resource’s dependency relationship with its underlying disk could not be created. Action: Check adjacent log messages for further details. |
124174 | ERROR | machine not specified | Cause: Invalid arguments were specified for the rmenu_mp operation. Action: Ensure the script is called correctly. If this error happens during normal operation, please contact Support. |
124175 | ERROR | mount point not specified | Cause: Invalid arguments were specified for the rmenu_mp operation. Action: Ensure the script is called correctly. If this error happens during normal operation, please contact Support. |
124177 | ERROR | unexpected multiple matches found | Cause: One or more systems show a filesystem or mount point used more than once. Action: Verify filesystem devices and mount points and ensure that filesystems are only mounted once. Look for additional log messages for more details. |
124178 | ERROR | machine name not specified | Cause: Invalid arguments were specified for the rmenump operation. Action: Ensure the script is called correctly. If this error happens during normal operation, please contact Support. |
124180 | ERROR | must specify filesystem type | Cause: Invalid arguments were specified for the validfstype operation. Action: Ensure the script is called correctly. If this error happens during normal operation, please contact Support. |
124181 | ERROR | mount point not specified | Cause: Invalid arguments were specified for the validmp operation. Action: Ensure the script is called correctly. If this error happens during normal operation, please contact Support. |
124182 | ERROR | The mount point $MP is not an absolute path | Cause: A mount point was specified that isn’t an absolute path (doesn’t start with a ‘/’). Action: Specify a mount point as an absolute path starting with a ‘/’. |
124183 | ERROR | $MP is already mounted on $MACH | Cause: The requested mount point is already in use on the system. Action: Specify a mount point that isn’t in use or unmount it before retrying the operation. |
124184 | ERROR | The mount point $MP is already protected by LifeKeeper on $MACH | Cause: The system is already protecting the specified mount point. Action: Choose a different mount point that isn’t already being protected. |
124185 | ERROR | The mount point $MP is not a directory on $MACH | Cause: The mount point refers to a non-directory such as a regular file. Action: Choose a mount point that refers to a directory. |
124186 | ERROR | The mount point directory $MP is not empty on $MACH | Cause: The specified mount point refers to a directory that isn’t empty. Action: Choose a mount point that is empty or remove the contents of the specified directory before retrying the operation. |
124187 | ERROR | server name not specified | Cause: Invalid arguments were specified for the valuepmp operation. Action: Ensure the script is called correctly. If this error happens during normal operation, please contact Support. |
124188 | ERROR | There are no mount points on server $MACH | Cause: There are no possible mount points for a filesystem resource on the server. Action: Check adjacent log messages for further details. |
124194 | WARN | Please correct conflicting \"/etc/fstab\" entries on server $UNAME for: $FSDEV, $FSNAME | Cause: After deleting a filesystem resource, some entries in /etc/fstab need to be manually cleaned up. Action: Manually clean up the /etc/fstab file. |
124195 | ERROR | getchildinfo found no $OKAPP child for $PTAG | Cause: The system could not find a child resource in the hierarchy. Action: Check adjacent log messages for further details and ensure that the hierarchy is correct before retrying the operation. |
124196 | ERROR | enablequotas – quotacheck may have failed for $FS_NAME | Cause: The quota operation failed. Action: Check adjacent log messages for further details in both the lifekeeper log and in /var/log/messages. |
124198 | ERROR | enablequotas – quotaon failed to turn on quotas for $FS_NAME, reason | Cause: The quota operation failed. Action: Check adjacent log messages for further details in both the lifekeeper log and /var/log/messages. |
124200 | ERROR | The device node $dev was not found or did not appear in the udev create time limit of $delay seconds | Cause: A device node (/dev/…) was not created by udev. This may indicate an issue with the storage or the server’s connection to the storage. Action: Check adjacent log messages for further details in both the lifekeeper log and in /var/log/messages. |
124201 | WARN | Device $device not found. Will retry wait to see if it appears. | Cause: This can happen under normal conditions while udev creates device node entries for storage. This message should not happen repeatedly. Action: Check adjacent log messages for further details in both the lifekeeper log and in /var/log/messages. |
124202 | ERROR | Command \"$commandwithargs\" failed. Retrying …. | Cause: The given command failed but may have failed temporarily. This failure may happen during normal operations but should not keep failing. Action: Check adjacent log messages for further details if this message continues. |
124204 | WARN | cannot make file system $FSNAME mount point | Cause: The mount point directory could not be created. Action: Ensure that the mount point can be created. This may be due to filesystem permissions, mount options, etc. |
124207 | ERROR | \"fsck\"ing file system $FSNAME failed, trying alternative superblock | Cause: The standard filesystem check failed. This may be expected for ext2 and other filesystems where an alternative superblock location is available. Action: Check adjacent log messages for further details. |
124209 | ERROR | \"fsck\"ing file system $FSNAME with alternative superblock failed | Cause: This indicates that an ext2 filesystem (or other filesystem where an alternative superblock location is used) check failed with the alternative superblock location. Action: Check adjacent log messages for further details and instructions on how to proceed. |
124210 | WARN | POSSIBLE FILESYSTEM CORRUPTION ON $FSNAME ($FPNAME | Cause: A filesystem was put in service or failed over when it was out of sync with its mirror source. Action: Check adjacent log messages for further details and review the product documentation for information on how to bring the filesystem in service safely. |
124211 | ERROR | Reason for fsck failure ($retval): $ret | Cause: This log message is part of a series of messages and gives the actual exit code from the fsck process. Action: Check adjacent log messages for further details and instructions on how to proceed. |
124212 | ERROR | \"fsck\" of file system $FSNAME failed | Cause: The check of the filesystem failed. This is usually due to the filesystem having corruption. Action: Check adjacent log messages for further details. Review the product documentation for instructions on how to handle possible filesystem corruption. |
124213 | WARN | POSSIBLE FILESYSTEM CORRUPTION ON $FSNAME ($FPNAME | Cause: The system or user tried to bring into service a filesystem that may be corrupted. This can happen if a filesystem is switched or failed over when it was out of sync with its mirror source. Action: Check adjacent log messages for further details and review the product documentation for instructions on how to bring the resource into service safely. |
124214 | ERROR | Reason for fsck failure ($retval) | Cause: This message should follow a previous log message about a filesystem check failure and gives the process exit code of the fsck process. Action: Check adjacent log messages for further details. |
124218 | ERROR | File system $FSNAME was found to be already | Cause: This message is part of a series of messages. Action: Check adjacent log messages for further details. |
124219 | ERROR | mounted after initial mount attempt failed. | Cause: This message is part of a series of messages. This should not happen under normal circumstances but may not be fatal if the resource can be put in service. Action: Check adjacent log messages for further details in both the lifekeeper log and in /var/log/messages. |
124220 | ERROR | File system $FSNAME failed to mount. | Cause: The filesystem could not be mounted. Action: Check adjacent log messages for further details. |
124221 | WARN | Protected Filesystem $ID is full | Cause: The filesystem is full. Action: Remove unused data from the filesystem or migrate to a larger filesystem. |
124222 | WARN | Dependent Applications may be affected <> | Cause: An operation on this resource is likely to cause operations on other resources, based on the resource hierarchy. Action: Make sure it’s acceptable for the indicated resources to be affected before continuing. |
124223 | ERROR | Put \"$t\" Out-Of-Service Failed By Signal | Cause: This message should not occur under normal circumstances. Action: Check adjacent log messages for further details. |
124227 | ERROR | Put \"$i\" Out-Of-Service Failed | Cause: The operation failed. Action: Check adjacent log messages for further details. |
124230 | ERROR | Put \"$t\" In-Service Failed By Signal | Cause: This message should not occur under normal circumstances. Action: Check adjacent log messages for further details. |
124231 | ERROR | Put \"$t\" In-Service Failed | Cause: The operation failed. Action: Check adjacent log messages for further details. |
124234 | ERROR | Put \"$t\" In-Service Failed | Cause: The operation failed. Action: Check adjacent log messages for further details. |
124239 | WARN | | Cause: Invalid subdirectory cache, most likely caused by mounting a subdirectory of an NFSv3 export. Action: Use NFSv4 and do not mount subdirectories. |
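Several of the filesystem-kit codes above (124143, 124157, 124194) report conflicting or leftover /etc/fstab entries that must be resolved manually. As a minimal sketch (this is an illustrative helper, not the exact check LifeKeeper performs, and `fstab_conflicts` is a hypothetical name), duplicate devices or mount points in an fstab-style file can be listed with awk:

```shell
# Illustrative: print duplicate devices or mount points in an
# fstab-style file -- the condition behind codes 124143/124157/124194.
fstab_conflicts() {
    awk '!/^[[:space:]]*#/ && NF >= 2 {
        if (dev[$1]++) print "duplicate device: " $1
        if (mp[$2]++)  print "duplicate mount point: " $2
    }' "$1"
}

# Usage: fstab_conflicts /etc/fstab
```

Any line this prints points at an entry that should be removed or corrected before retrying the extend, or after deleting the resource.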
|
125102 | ERROR | `printf ‘Template resource "%s" on server "%s" does not exist’ $TemplateTagName $TemplateSysName` | Cause: LifeKeeper was unable to find the resource {tag} on {server}. |
125103 | ERROR | `printf ‘%s is not shareable with any machine.’ $DEV` | Cause: The device does not appear to be shared with any other systems. Action: Verify that the device is accessible from all servers in the cluster. Ensure that all relevant storage drivers and software are installed and configured properly. |
125104 | ERROR | `printf ‘Failed to create disk hierarchy for "%s" on "%s"’ $PRIMACH $DEV` | Cause: The creation of a resource to protect a physical disk failed. Action: Check adjacent log messages for more details and try to resolve the reported problem. |
125107 | ERROR | `printf ‘Template resource "%s" on server "%s" does not exist’ $TemplateTagName $TemplateSysName` | Cause: LifeKeeper was unable to find the resource {tag} on {server}. |
125114 | ERROR | `printf ‘Template resource "%s" on server "%s" does not exist’ $TemplateTagName $TemplateSysName` | Cause: LifeKeeper was unable to find the resource {tag} on {server}. |
125120 | ERROR | `printf ‘Template resource "%s" on server "%s" does not exist’ $TemplateTagName $TemplateSysName` | Cause: LifeKeeper was unable to find the resource {tag} on {server}. |
125123 | ERROR | `printf ‘Cannot access depstoextend script "%s" on server "%s"’ $depstoextend $TargetSysName` | Cause: LifeKeeper was unable to run pre-extend checks on the resource hierarchy because it was unable to find the script "DEPSTOEXTEND" on {server}. Action: Check your LifeKeeper configuration. |
125126 | ERROR | `printf ‘Template resource "%s" on server "%s" does not exist’ $ChildTag $TemplateSysName` | Cause: LifeKeeper was unable to find the resource {tag} on {server}. |
125129 | ERROR | `printf ‘Template resource "%s" on server "%s" does not exist’ $TemplateTagName $TemplateSysName` | Cause: LifeKeeper was unable to find the resource {tag} on {server}. |
125155 | ERROR | SCSI $DEV failed to lock. | Cause: There was a problem locking a SCSI device. Action: Check adjacent log messages for more details and try to resolve the reported problem. |
125164 | ERROR | SCSI $INFO failed to unlock. | Cause: There was a problem unlocking a SCSI device. Action: Check adjacent log messages for more details and try to resolve the reported problem. |
125181 | ERROR | `printf ‘Template resource "%s" on server "%s" does not exist’ $TemplateTag $TemplateSysName` | Cause: LifeKeeper was unable to find the resource {tag} on {server}. |
125194 | ERROR | Failed to check disk.(tag=\"$opt_t\" | Cause: The device for the specified tag failed to respond properly. Action: Verify the disk is responding correctly. |
125197 | ERROR | usage error: $cmd [-q|-v] [-r|-w] [-a|-d <disk device>] | Cause: Incorrect parameters were passed to $cmd. Action: Verify that the correct parameters are specified on the command line. Most likely the disk device was not found, causing it to be omitted from the command. |
125200 | ERROR | getId command does not exist. id=$id, app=$app, typ=$typ | Cause: The getId command was not found in /opt/LifeKeeper/lkadm/subsys/<app>/<typ>/bin, where <app> and <typ> are included in the error message. Action: Check that getId is installed properly and has execute permissions. |
125201 | ERROR | The block device given by the -d option is empty. | Cause: The -d argument requires a block device node. Action: Retry the command with a valid block device node. |
125204 | ERROR | Could not set to RO/RW: $err_disks | Cause: The blockdev(8) command failed to set read only or read write. Action: Additional information may be logged in the system error log. |
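Code 125200 above names the exact path searched for the getId command. A quick way to reproduce that lookup by hand is sketched below; `check_getid` is a hypothetical helper name, not a LifeKeeper command, and the path layout is taken directly from the message text:

```shell
# Sketch of the lookup behind code 125200: getId must exist and be
# executable under /opt/LifeKeeper/lkadm/subsys/<app>/<typ>/bin,
# where <app> and <typ> come from the logged error message.
check_getid() {
    app="$1"; typ="$2"
    getid="/opt/LifeKeeper/lkadm/subsys/$app/$typ/bin/getId"
    if [ -x "$getid" ]; then
        echo "ok: $getid"
    else
        echo "missing or not executable: $getid"
        return 1
    fi
}

# Usage: check_getid scsi disk
```

If the helper reports the file missing, reinstalling the relevant recovery kit normally restores it.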
126105 | ERROR | script not specified – $PTH is a directory | Cause: The specified script path is a directory. Action: Correct the path of the script. |
126110 | ERROR | script $PTH does not exist | Cause: The specified script path does not exist. Action: Correct the path of the script. |
126115 | ERROR | script $PTH is a zero length file | Cause: The specified script is an empty file. Action: Correct the script’s file path and verify the script’s contents. |
126117 | ERROR | script $PTH is not executable | Cause: The specified script is not executable. Action: Correct the script’s file path, verify the script’s contents, and make sure it has execute permissions. |
126125 | ERROR | required template machine name is null | Cause: The input template machine name is null. Action: Correct the input template machine name. |
126130 | ERROR | required template resource tag name is null | Cause: The input template resource {tag} is null. Action: Correct the input template resource tag name. |
126135 | ERROR | Unable to generate a new tag | Cause: The "newtag" script failed to generate a tag matching the template tag name on the target node during extension because that tag name already exists. Action: Avoid duplicate tag names on the same node and check the log for details. |
126140 | ERROR | Unable to generate a new tag | Cause: The "newtag" script failed to generate the specified target tag name on the target node during extension because that tag name already exists. Action: Avoid duplicate tag names on the same node and check the log for details. |
126150 | ERROR | unable to remote copy template \"$_lscript\" script file | Cause: The template script file could not be copied from the template node. The script may not exist or be inaccessible on the template node, or the "lcdrcp" transfer may have failed. Action: Verify that the template script exists and is accessible, check the connection to the template node, and check the logs for related errors. |
126155 | ERROR | failed to create resource instance on \"$TargetSysName\ | Cause: Failed to create resource instance using "ins_create". Action: Check the logs for related errors and try to resolve the reported problem. |
126160 | ERROR | failed to set resource instance state for \"$TargetTagName\" on \"$TargetSysName\ | Cause: Failed to set resource instance state using "ins_setstate" during GenApp resource extension. Action: Check the logs for related errors and try to resolve the reported problem. |
126170 | ERROR | getlocks failure | Cause: Failed to get the administrative lock when creating a resource hierarchy. Action: Check the logs for related errors and try to resolve the reported problem. |
126175 | ERROR | instance create failed | Cause: Failed to create a GenApp instance using "appins". Action: Check the logs for related errors and try to resolve the reported problem. |
126180 | ERROR | unable to set state to OSU | Cause: Failed to set resource instance state using "ins_setstate" during GenApp resource creation. Action: Check the logs for related errors and try to resolve the reported problem. |
126190 | ERROR | resource restore has failed | Cause: Failed to restore GenApp resource. Action: Check the logs for related errors and try to resolve the reported problem. |
126200 | ERROR | create application hierarchy rlslocks failure | Cause: Failed to release the administrative lock after the GenApp resource was created. Action: Check the logs for related errors and try to resolve the reported problem. |
126210 | ERROR | copy $ltype script $lscript failure | Cause: Failed to copy the user-provided script to the appropriate GenApp directory during resource creation. Action: Verify that the user-provided script exists and is accessible and that the GenApp directory is correct. Also check the logs for related errors and try to resolve the reported problem. |
126215 | ERROR | no $ltype script specified | Cause: No user-defined script was specified during GenApp resource creation. Action: Specify the action script and retry the resource creation. |
126220 | ERROR | no machine name specified | Cause: No machine name was specified during GenApp resource creation, so the user script could not be copied. Action: Specify the machine name and retry the resource creation. |
126225 | ERROR | no resource tag specified | Cause: No resource tag name was specified during resource creation. Action: Specify the resource tag name and retry the resource creation. |
126230 | ERROR | $ERRMSG Script was terminated for unknown reason | Cause: The GenApp resource extension failed for an unknown reason. Action: Check the logs for related errors and try to resolve the reported problem. |
126235 | ERROR | $ERRMSG Required template machine name is null | Cause: No template machine name was specified during GenApp resource extension. Action: Specify the template machine name and retry the extension. |
126240 | ERROR | $ERRMSG Required template resource tag name is null | Cause: No template resource tag name was specified during GenApp resource extension. Action: Specify the template resource tag name and retry the extension. |
126245 | ERROR | $ERRMSG Can not access extend for $AppType/$ResType resources on machine \"$TargetSysName\ | Cause: The "extend" script could not be located on the target node during GenApp resource extension. Action: Verify that the "extend" script exists and is accessible on the target node, then retry the extension. |
126250 | ERROR | create application failure – ins_list failed | Cause: Failed when calling "ins_list" during GenApp resource creation. Action: Check the logs for related errors and try to resolve the reported problem. |
126255 | ERROR | create application failure – unable to generate a new tag | Cause: Failed to generate a new tag during the GenApp resource creation. Action: Avoid using duplicate tag name on the same node. Also check the logs for related errors and try to resolve the reported problem. |
126270 | ERROR | create application failure – ins_create failed | Cause: Failed using "ins_create" to create GenApp instance. Action: Check the logs for related errors and try to resolve the reported problem. |
126275 | ERROR | create application failure – copy_actions failed | Cause: Failed using "copy_actions" to copy user specified template script file. Action: Check the existence/availability of template script. Also check the logs for related errors and try to resolve the reported problem. |
126290 | ERROR | Unable to obtain tag for resource with id $ID | Cause: Failed to fetch GenApp resource tag name by input ID during recovery. Action: Check the correctness of the input ID and the existence/availability of the GenApp resource in the LCD. Also check the logs for related errors and try to resolve the reported problem. |
126300 | ERROR | generic application recover script for $TAG was not found or was not executable | Cause: Failed to locate the user defined script for GenApp resource during recovery. Action: Check the existence/availability of the user defined script and do the GenApp recovery process again. |
126310 | ERROR | -t flag not specified | Cause: Missing the input for resource tag name during GenApp resource restore. Action: Check the input for resource tag name and do resource restore again. |
126315 | ERROR | -i flag not specified | Cause: Missing the input for resource internal id during GenApp resource restore. Action: Check the input for resource internal id and do resource restore again. |
126327 | ERROR | END timeout restore of \"$TAG\" (forcibly terminating | Cause: The restore did not finish within the specified timeout. Action: Increase the timeout value or resolve why the restore exceeded the timeout. |
126335 | ERROR | restore script \"$LCDAS/$APP_RESTOREDIR/$TAG\" was not found or is not executable | Cause: Failed to locate the user defined script for GenApp resource during restore. Action: Check the existence/availability of the user defined script and do the GenApp restore process again. |
126340 | ERROR | -t flag not specified | Cause: Missing the input for resource tag name during GenApp resource remove. Action: Check the input for resource tag name and do resource remove again. |
126345 | ERROR | -i flag not specified | Cause: Missing the input for resource internal id during GenApp resource remove. Action: Check the input for resource internal id and do resource remove again. |
126357 | ERROR | END timeout remove of \"$TAG\" (forcibly terminating | Cause: The remove did not finish within the specified timeout. Action: Increase the timeout value or resolve why the remove exceeded the timeout. |
126365 | ERROR | remove script \"$LCDAS/$APP_REMOVEDIR/$TAG\" was not found or was not executable | Cause: Failed to locate the user defined script for GenApp resource during remove. Action: Check the existence/availability of the user defined script and do the GenApp remove process again. |
126375 | ERROR | Script has hung checking \"$tag\". Forcibly terminating. | Cause: The "quickCheck" script for the GenApp resource with tag name {tag} will be forcibly terminated because its run time exceeded the user-defined timeout. Action: Check the GenApp resource performance and restart the quickCheck. Also check the logs for related errors and try to resolve the reported problem. |
126380 | ERROR | Usage error: no tag specified | Cause: Missing the input for resource tag name during GenApp resource quickCheck. Action: Check the input for resource tag name and retry resource quickCheck. |
126385 | ERROR | Internal error: ins_list failed on $tag. | Cause: Failed using "ins_list" to fetch the GenApp resource information by input tag name during the quickCheck process. Action: Correct the input tag name and do the quickCheck process again. Also check the logs for related errors and try to resolve the reported problem. |
126390 | FATAL | Failed to fork process to execute $userscript: $! | Cause: Failed to fork a process to execute the user-defined "quickCheck" script during the GenApp resource "quickCheck" process. Action: Check the existence/availability of the user-defined "quickCheck" script and do the "quickCheck" process again. |
126391 | ERROR | quickCheck has failed for \"$tag\". Starting recovery. | Cause: The GenApp resource with tag name {tag} was determined to have failed by the user-defined health monitoring script ("quickCheck"), and the recovery process will be initiated. Action: Check the performance of the GenApp resource once local recovery has finished. Also check the logs for related errors and try to resolve the reported problem. |
126392 | WARN | ${convtag}_TIMEOUT: This parameter is old. This parameter will not be supported soon. | Cause: The deprecated <tag>_TIMEOUT parameter is being used to specify the timeout for the gen/app resource. Action: To specify a timeout for a gen/app quickCheck script, use the format <tag>_QUICKCHECK_TIMEOUT. |
126400 | ERROR | -t flag not specified | Cause: Missing the input for resource tag name during GenApp resource deletion process. Action: Check the input for resource tag name and do resource deletion process again. |
126405 | ERROR | -i flag not specified | Cause: Missing the input for resource internal id during GenApp resource deletion process. Action: Check the input for resource internal id and do resource deletion process again. |
126478 | ERROR | Failed to create tag \’$new_leaf\’. | Cause: The ‘creapphier’ utility failed to create the specified tag. Action: Check /var/log/lifekeeper.log for additional messages from ‘creapphier’. |
126479 | ERROR | Failed to extend tag \’$new_leaf\’. | Action: Check /var/log/lifekeeper.log for additional errors from extend manager. |
126481 | ERROR | Failed to create dependency on \’$sys\’ for \’$new_leaf\’ to \’$hier{$leaf}{$sys}{‘Tag’}\’. | Action: Check /var/log/lifekeeper.log for errors from the ‘dep_create’ function |
126484 | ERROR | Tag \’$root_tag\’ is not in-service. | Cause: The specified tag is not in-service on any node in the cluster. Action: Bring the specified tag in-service on any node in the cluster and re-run ‘create_terminal_leaf’. |
126485 | ERROR | Tag \’$root_tag_1\’ was not found, select the root tag for a hierarchy to add a terminal leaf resource. | Cause: The first tag passed to ‘create_terminal_leaf’ was not found on the system where the utility was run. Action: Verify each resource is in-service on a node in the cluster and fully extended to all nodes. |
126486 | ERROR | Unable to create leaf tag from \’$tag\’. | Cause: A unique terminal leaf tag could not be created after 100 tries. Action: Check for multiple leaf tags and for errors in /var/log/lifekeeper.log that may indicate the problem. |
126487 | ERROR | Tag \’$root_tag_2\’ was not found, select the root tag for a hierarchy to add a terminal leaf resource. | Cause: The second tag passed to ‘create_terminal_leaf’ was not found on the system where the utility was run. Action: Verify the resource is in-service on a node in the cluster and fully extended to all nodes. |
126488 | ERROR | Tag \’$root_tag_1’ is not extended to 3 or more systems. | Cause: The specified tag is not extended to 3 or more nodes. Action: Extend the specified tag to at least 3 nodes and retry ‘create_terminal_leaf’. |
126489 | ERROR | $cmd does not support SDRS resources. | Cause: A multi-site configuration was detected. Action: none |
126492 | ERROR | Remove resource $tag failed. | Cause: The create_terminal_leaf utility cleanup after a failure did not properly remove a resource. Action: Remove the tag listed if it still exists. |
126494 | ERROR | Delete dependency failed on \’$sys\’ for \’$tag\’ to \’$parent\’. | Cause: The create_terminal_leaf utility cleanup after a failure did not properly delete a dependency. Action: Delete the dependency if it still exists. |
126495 | ERROR | New tag \’$new_tag\’ was modified during create, expected \’$new_leaf\’. | Cause: There was a conflict with the tag name. Action: Select a tag name that does not have a conflict. |
126496 | ERROR | Tag \’$root_tag_1\’ is not in-service. | Cause: The root tag listed must be in-service to create the terminal leaf. Action: Bring the root tag in-service and retry the operation. |
126497 | ERROR | Tag \’$root_tag_2\’ is not in-service. | Cause: The root tag listed must be in-service to create the terminal leaf. Action: Bring the root tag in-service and retry the operation. |
126498 | ERROR | Tag \’$root_tag_1\’ and \’$root_tag_2’ are not extended to same systems. | Cause: Both hierarchies specified to create terminal leaf resource must exist on the same systems. Action: Create and extend the hierarchies on the same systems. |
126501 | ERROR | Resource ‘$tag’ not found on server $me. | |
126502 | ERROR | Resource ‘$tag’ has resource type $app:$typ, expected gen:app. | |
126503 | ERROR | Invalid action script ‘$new_script_path’. | |
126504 | ERROR | The ‘$script_type’ action script is required for resource ‘$tag’ and cannot be deleted. | |
126507 | ERROR | Failed to $action the ‘$script_type’ action script for resource ‘$eqv_tag’ on server $eqv_sys. | |
126510 | ERROR | Usage: $usage | |
128005 | ERROR | END failed %s of "%s" on server "%s" due to a "%s" signal | Cause: The quickCheck of {resource} on {server} failed due to an operating system signal {signal}. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128008 | ERROR | Usage: quickCheck -t <tag name> -i <id> | Cause: Incorrect arguments have been supplied to the dmmp device quickCheck command preventing it from running. Action: Make sure all software components are properly installed and at the correct version. Rerun the command and supply the correct argument list: -t <Resource Tag> and -i <Resource ID> that identifies the dmmp device resource to be quickChecked. |
128010 | ERROR | quickCheck for "%s" failed checks of underlying paths, initiate recovery. retry count=%s. | Cause: The dmmp kit failed to quickCheck a device after {count} retries. A recovery of the protected dmmp resource will be executed. Action: Check adjacent log messages for further details and related messages. |
128021 | ERROR | unable to find device for uuid "%s". | Cause: The device could not be found by unique id during a restore operation. Action: Verify that the resource is configured properly. Rerun the command and supply the correct device id that identifies the dmmp device resource to be restored. |
128025 | ERROR | Device "%s" failed to unlock. | Cause: A non working {device} was detected and could not be unlocked during the restore operation. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128026 | ERROR | Device "%s" failed to lock. | Cause: The {device} could not be locked during the restore. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128031 | ERROR | unable to find device for uuid "%s". | Cause: The device could not be found by unique id during the remove operation. Action: Verify that the resource is configured properly. Rerun the command and supply the correct device id that identifies the dmmp device resource to be removed. |
128034 | ERROR | Device "%s" failed to unlock. | Cause: The {device} could not be unlocked during the remove. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128036 | ERROR | unable to load existing information for device with uuid "%s". | Cause: The device information could not be loaded by unique id. Action: Make sure the resource is configured properly. Rerun the command and supply the correct device id that identifies the dmmp device resource. |
128037 | ERROR | unable to load existing information for device "%s". | Cause: The device information could not be loaded by name. Action: Make sure the resource is configured properly. Rerun the command and supply the correct device name that identifies the dmmp device resource. |
128038 | ERROR | unable to load existing information for device, no dev or uuid defined. | Cause: The device information could not be loaded since neither a unique device id nor name of the device were defined. Action: Make sure the resource is configured properly. Rerun the command and supply the correct device id or name that identifies the dmmp device resource. |
128041 | ERROR | unable to load existing information for device with uuid "%s". | Cause: The device information could not be loaded by unique id. Action: Make sure the resource is configured properly. Rerun the command and supply the correct device id that identifies the dmmp device resource. |
128057 | ERROR | All paths are failed on "%s". | Cause: LifeKeeper detected all paths listed to the protected dmmp device are in the failed state. Action: Check adjacent log messages for further details and related messages. |
128058 | ERROR | could not determine registrations for "%s"! All paths failed. | Cause: LifeKeeper could not determine registrations for protected dmmp {device}. All paths to the dmmp {device} are in the failed state. Action: Check adjacent log messages for further details and related messages. |
128059 | WARN | path "%s" no longer configured for "%s", remove from path list. | Cause: LifeKeeper detected listed {path} to protected {device} is not valid anymore and will remove it from the path list. Action: Check adjacent log messages for further details and related messages. |
128060 | WARN | registration failed on path "%s" for "%s". | Cause: LifeKeeper failed the registration on {path} for protected dmmp {device}. Action: Check adjacent log messages for further details and related messages. |
128062 | ERROR | all paths failed for "%s". | Cause: LifeKeeper failed to verify a valid path to protected dmmp {device}. Action: Check adjacent log messages for further details and related messages. |
128072 | ERROR | The daemon "%s" does not appear to be running and could not be restarted. Path failures may not be correctly handled without this daemon. | Cause: LifeKeeper failed to verify dmmp daemon is running and could not restart it. Action: Check adjacent log messages for further details and related messages. |
128078 | ERROR | "%s" resource type is not installed on "%s". | Cause: The Device Mapper Multipath Recovery Kit for dmmp device support is not installed on the system. Action: Install the steeleye-lkDMMP Device Mapper Multipath Recovery Kit rpm on the system. |
128083 | ERROR | This script must be executed on "%s". | Cause: An incorrect system name has been supplied as an argument to the devicehier script used to create the dmmp device resource. Action: Make sure the cluster nodes and comm-paths are properly configured. Supply the correct system name to the devicehier script. The name must match the name of the system on which the command is run. |
128084 | ERROR | The device %s is not active. | Cause: LifeKeeper failed to find the specified {device} as a valid device on the system during resource creation. Action: Check adjacent log messages for further details and related messages. Rerun the command and supply the correct argument list: -t <Resource Tag> and -i <Resource ID> that identifies the dmmp device resource to be created. |
128086 | ERROR | Failed to create "%s" hierarchy. | Cause: LifeKeeper failed to create resource hierarchy for {device}. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128088 | ERROR | Error creating resource "%s" on server "%s" | Cause: LifeKeeper failed to create the resource with {tagname} on {server}. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128090 | ERROR | Failed to create dependency "%s"-"%s" on system "%s". | Cause: LifeKeeper failed to create dependency {resource tag name} – {resource tag name} on {system} during creation. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128091 | ERROR | Error creating resource "%s" on server "%s" | Cause: LifeKeeper failed to create {resource} on {system}. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128101 | ERROR | "%s" constructor requires a valid argument. | Cause: LifeKeeper failed to create an object for the dmmp resource during construction. Action: Rerun the command and supply the correct argument list: -t <Resource Tag> and -i <Resource ID> that identifies the dmmp device resource. |
128102 | ERROR | Invalid tag "%s". | Cause: A resource instance could not be found for the given tag name. Action: Make sure the resource is configured properly. Rerun the command and supply the correct argument list: -t <Resource Tag> and -i <Resource ID> that identifies the dmmp device resource. |
128111 | ERROR | Failed to get registrations for "%s": %s. Verify the storage supports persistent reservations. | Cause: LifeKeeper failed to get the registrations of {device} with the message, "bad field in Persistent reservation in cdb". Action: Verify that the storage supports persistent reservations. Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128112 | ERROR | Failed to get registrations for "%s": %s. Verify the storage supports persistent reservations. | Cause: LifeKeeper failed to get the registrations of {device} with the message, "illegal request". Action: Verify that the storage supports persistent reservations. Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128136 | ERROR | A previous quickCheck with PID "%s" running for device "%s" has been terminated. | Cause: LifeKeeper detected that a previous quickCheck operation is still running during the dmmp resource restore operation. It has been terminated by LifeKeeper. Action: Check adjacent log messages for further details and related messages. |
128137 | ERROR | SCSI reservation conflict on %s during LifeKeeper resource initialization. Manual intervention required. | Cause: LifeKeeper detected a SCSI reservation conflict on {device} during dmmp resource restore. Action: Check adjacent log messages for further details and related messages. Manual intervention and fix of the reservation conflict on {device} is required. |
128138 | ERROR | unable to clear registrations on %s. | Cause: LifeKeeper failed to clear all the registrations on {device}. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128140 | WARN | registration failed on path %s for %s. | Cause: LifeKeeper failed to make the registration on {path} for {device}. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128143 | ERROR | reserve failed (%d) on %s. | Cause: LifeKeeper failed to make reservation for {resource} on {device}. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128145 | ERROR | The server ID "%s" returned by "%s" is not valid. | Cause: LifeKeeper failed to generate a valid host {ID}. Action: The ID used to register a device is made up of 1 to 12 Hex digits that uniquely identifies the server in the cluster. Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128146 | ERROR | device failure on %s. SYSTEM HALTED. | Cause: LifeKeeper detected failure on {device} and will reboot the server. Action: Check adjacent log messages for further details and related messages. |
128148 | ERROR | device failure on %s. SYSTEM HALTED DISABLED. | Cause: LifeKeeper detected a failure on {device}. The reboot was skipped due to the LifeKeeper configuration. Action: Check adjacent log messages for further details and related messages. Turn on the "SCSIHALT" configuration setting to enable a reboot on any detected device failure. |
128149 | ERROR | device failure or SCSI Error on %s. SENDEVENT DISABLED. | Cause: LifeKeeper detected a failure on {device}. Event generation was skipped due to the LifeKeeper configuration. Action: Check adjacent log messages for further details and related messages. Turn on the "SCSIEVENT" configuration setting to enable sendevent for any detected device failure. |
128150 | ERROR | %s does not have EXCLUSIVE access to %s, halt system. | Cause: LifeKeeper detected a reservation conflict for {device} on {server} and will reboot the server. Action: Check adjacent log messages for further details and related messages. |
128151 | ERROR | %s does not have EXCLUSIVE access to %s, halt system DISABLED. | Cause: LifeKeeper detected a reservation conflict for {device} on {server}. The reboot was skipped due to the LifeKeeper configuration. Action: Check adjacent log messages for further details and related messages. Turn on the "RESERVATIONCONFLICT" configuration setting to enable a reboot on any detected reservation conflict. |
128154 | WARN | unable to flush buffers on %s. | Cause: LifeKeeper failed to flush the buffers for {device} during dmmp resource remove. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128157 | WARN | %s utility not found, limited healthcheck for %s. | Cause: LifeKeeper failed to find "dd" utility for the health check of {device}. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128160 | ERROR | %s failed to read %s. | Cause: LifeKeeper failed a disk I/O test for {device} when using {utility}. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128163 | ERROR | Registration ID "%s" for "%s" is not valid. | Cause: LifeKeeper failed to generate a valid registration {ID} for {device}. Action: The ID used to register a device is made up of 4 Hex digits derived from the path to the device. Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128170 | ERROR | Usage: canextend <Template system name> <Template tag name> | |
128500 | ERROR | Usage error | Cause: Incorrect arguments have been supplied to the dmmp device restore command preventing it from running. Action: Rerun the command and supply the correct argument list: -t <Resource Tag> and -i <Resource ID> that identifies the dmmp device resource to be restored. |
128504 | ERROR | "%s" resource type is not installed on "%s". | Cause: The Device Mapper Multipath Recovery Kit for dmmp device support is not installed on the system. Action: Install the steeleye-lkDMMP Device Mapper Multipath Recovery Kit rpm on the system. |
128506 | ERROR | Usage error | Cause: Incorrect arguments have been supplied to the dmmp device devShared command preventing it from running. Action: Rerun the command and supply the correct argument list: <Template Resource System Name> and <Template Resource Tag> that identifies the dmmp device resource to be created. |
128507 | FATAL | This script must be executed on "%s". | Cause: An incorrect system name has been supplied as an argument to the devicehier script used to create the dmmp device resource. Action: Supply the correct system name to the devicehier script. The name must match the name of the system on which the command is run. |
128511 | ERROR | Failed to get the ID for the device "%s". Hierarchy create failed. | Cause: The devicehier script used to create the dmmp device resource was unable to determine the SCSI ID for the supplied device. Action: Check that the supplied device path exists and that it is for a supported SCSI storage array. |
128512 | ERROR | Failed to get the disk ID for the device "%s". Hierarchy create failed. | Cause: The devicehier script used to create the dmmp disk resource was unable to determine the SCSI ID for the supplied disk. Action: Check that the supplied device path exists and that it is for a supported SCSI storage array. |
128513 | ERROR | Failed to create the underlying resource for device "%s". Hierarchy create failed. | Cause: The creation of the underlying dmmp disk resource failed. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128515 | ERROR | Error creating resource "%s" on server "%s" | Cause: The creation of the dmmp device resource failed. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128517 | ERROR | Failed to create dependency "%s"-"%s" on system "%s". | Cause: The parent child dependency creation between the dmmp device and dmmp disk resources failed. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128519 | ERROR | Error creating resource "%s" on server "%s" | Cause: The attempt to bring the newly created dmmp device resource in service has failed. Action: Check adjacent log messages for further details and related messages. You must correct any reported errors before retrying the operation. |
128521 | ERROR | Either TEMPLATESYS or TEMPLATETAG argument missing | Cause: Incorrect arguments have been supplied to the extend command for the dmmp device resource. Action: Rerun the dmmp device resource extend and supply the correct template system and tag names. |
128540 | ERROR | Usage error | Cause: Incorrect arguments have been supplied to the dmmp device getId command used to retrieve the SCSI ID. Action: Rerun the command and supply the correct argument list: -i <device path> or -b <device ID>. |
128541 | ERROR | Usage error | Cause: Incorrect arguments have been supplied to the command used to delete the dmmp device resource. Action: Rerun the command and supply the correct argument list: -t <dmmp device resource tag>. |
128543 | ERROR | device node \"$dev\" does not exist. | Cause: The device node required for restoring the dmmp device resource does not exist. The allocated wait time in restore for udev device creation has been exceeded. Action: Rerun the dmmp device resource restore once udev has created the device. |
128544 | ERROR | Usage error | Cause: Incorrect arguments have been supplied to the remove command used to take the dmmp device resource out of service. Action: Rerun the command and supply the correct argument list: -t <dmmp device resource tag>. |
128550 | ERROR | Failed to get path information.(tag=\"$opt_t\" | Cause: OSUquickCheck was called with a tag that did not match a DMMP device. Action: Verify that the correct tag is used. |
128551 | ERROR | Failed to get the status of path.(tag=\"$opt_t\" | Cause: OSUquickCheck was not able to get the 'mode' for the DMMP path for the tag passed in. Action: Check that the DMMP device is configured and responding. |
128552 | ERROR | Failed to check the status of path.(tag=\"$opt_t\" | Cause: OSUquickCheck for the DMMP device does not have an active path. Action: Check that the DMMP device is configured properly. |
128553 | ERROR | Failed to get the resource information.(tag=\"$opt_t\" | Cause: OSUquickCheck was not able to load information for the tag specified. Action: Check that the tag is for a DMMP device. |
129100 | FATAL | Failed to load instance from LifeKeeper. | Cause: An invalid resource tag or ID was specified. Action: Check that the tag or ID is valid and re-run the command. |
129103 | FATAL | No resource matches tag \"$self->{‘tag’}\". | Cause: An invalid resource tag was specified. Action: Check the tag and re-run the command. |
129104 | FATAL | An error occurred setting LifeKeeper resource information | Cause: An internal error has occurred in LifeKeeper. |
129110 | ERROR | Could not get the Elastic Network Interface ID for $dev | Cause: The EC2 API call failed, possibly due to a network issue. Action: Check the network and the Amazon console and retry the operation. |
129111 | ERROR | Failed to get Allocation ID of Elastic IP \"$elasticIp\". | Cause: The EC2 API call failed, possibly due to a network issue. Action: Check the network and the Amazon console and retry the operation. |
129113 | ERROR | Failed to get my instance ID. | Cause: The EC2 instance metadata access failed. Action: Check the Amazon console and retry the operation. |
129114 | ERROR | Failed to get ENI ID. | Cause: The EC2 API call failed, possibly due to a network issue. Action: Check the network and the Amazon console and retry the operation. |
129116 | ERROR | Failed to associate Elastic IP \"$self->{‘EIP’}\" on \"$self->{‘DEV’}\". | Cause: The EC2 API call failed, possibly due to a network issue. Action: Check the network and the Amazon console and retry the operation. |
129118 | WARN | $self->{‘EIP’} is not associated with any instance. | Cause: The Elastic IP is not associated with any instance. Action: LifeKeeper will try to fix the issue by calling the EC2 API to associate the Elastic IP. Check adjacent log messages for more details. |
129119 | WARN | $self->{‘EIP’} is associated with another instance. | Cause: The Elastic IP is associated with another instance. Action: LifeKeeper will try to fix the issue by calling the EC2 API to associate the Elastic IP. Check adjacent log messages for more details. |
129120 | ERROR | Failed to recover Elastic IP. | Cause: The EC2 API call failed to associate the Elastic IP. Action: Check the network and the Amazon console and retry the operation. |
129121 | ERROR | Recovery process ended but Elastic IP is not associated with this instance. Please check AWS console. | Cause: The EC2 API call failed to associate the Elastic IP. Action: Check the network and the Amazon console and retry the operation. |
129122 | ERROR | Error creating resource \"$target_tag\" with return code of \"$err\". | Cause: LifeKeeper was unable to create the resource instance on the server. Action: Check adjacent log messages for further details and related messages. Correct any reported errors. |
129123 | ERROR | Failed to get ENI ID. | Cause: The EC2 API call failed, possibly due to a network issue. Action: Check the network and the Amazon console and retry the operation. |
129124 | WARN | $self->{'EIP'} is associated with another network interface. | Cause: The Elastic IP is associated with the proper instance, but the wrong ENI. Action: LifeKeeper will try to fix the issue by calling the EC2 API to associate the Elastic IP. Check adjacent log messages for more details. |
129125 | ERROR | Link check failed for interface \'$dev\'. | Cause: The requested interface is showing 'NO-CARRIER', indicating that no link is present. Action: Check the network interface and bring the link up. |
129126 | ERROR | Link check failed for interface \'$dev\'. Reason: down slave. | Cause: The requested interface is showing 'NO-CARRIER', indicating that no link is present. Action: Check the network interface and bring the link up. |
129127 | ERROR | Link check failed for interface \'$dev\'. | Cause: The call to "nmcli connection show --active" did not report $dev as active. Action: Verify the connection for $dev with nmcli and activate it if necessary. |
129129 | WARN | The link for network interface \'$self->{'DEV'}\' is down. Attempting to bring the link up. | Cause: The requested interface is showing 'NO-CARRIER', indicating that no link is present. Action: LifeKeeper will try to fix the issue by bringing the link up and associating the Elastic IP with the interface. Check adjacent log messages for more details. |
129130 | ERROR | Failed to modify \"$opt_t\" to endpoint URL \"$endpoint\". | |
129137 | ERROR | The link for network interface \'$self->{'DEV'}\' is still down. | Cause: LifeKeeper could not bring the link up. Action: Ensure the interface is enabled and up. Check adjacent log messages for more details. |
129139 | WARN | The link for network interface \'$self->{'DEV'}\' is down. | Cause: The requested interface is showing 'NO-CARRIER', indicating that no link is present. Action: Check the network interface and bring the link up. |
129140 | ERROR | Could not get ENI ID for $self->{IP}. | Cause: The EC2 API call failed, possibly due to a network issue. Action: Check the network and the Amazon console and retry the operation. |
129142 | ERROR | Failed to update route table | Cause: The EC2 API call failed, possibly due to a network issue. Action: Check the network and the Amazon console and retry the operation. |
129143 | ERROR | You must have exactly one IP address resource as the parent of the RouteTable EC2 resource. Please reconfigure your resource hierarchy. | Cause: The Route Table EC2 resource must have one and only one IP resource as a parent. Action: Repair the resource hierarchy as necessary. |
129144 | ERROR | $func called with invalid timeout: $timeout | Cause: An invalid timeout value was specified in the /etc/default/LifeKeeper file. Action: Verify all EC2_*_TIMEOUT settings are valid in /etc/default/LifeKeeper. |
129145 | ERROR | $func action timed out after $timeout seconds | Cause: The action did not complete within the timeout period. Action: Consider increasing the EC2_*_TIMEOUT value for the given action (in /etc/default/LifeKeeper). |
129146 | ERROR | failed to run $func with timeout: $@ | Cause: This is an internal error. |
129148 | ERROR | Amazon describe-route-tables call failed (err=%s)(output=%s)\n%s | Cause: Failed to call the AWS CLI command. Action: Check your environment by referring to the displayed troubleshooting information. |
129150 | ERROR | Elastic IP \"$elasticIp\" is associated with another instance. | Cause: The Elastic IP is not associated with the proper instance. Action: LifeKeeper will try to fix the issue by calling the EC2 API to associate the Elastic IP. Check adjacent log messages for more details. |
129151 | ERROR | Could not get the Association ID for Elastic IP \"$elasticIp\". | Cause: The EC2 API call failed, possibly due to a network issue. Action: Check the network and the Amazon console and retry the operation. |
129152 | ERROR | Failed to disassociate Elastic IP \"$self->{'EIP'}\" on \"$self->{'DEV'}\". | Cause: The EC2 API call failed, possibly due to a network issue. Action: Check the network and the Amazon console and retry the operation. |
129153 | ERROR | Failed to disassociate Elastic IP \"$elasticIp\", (err=%s)(output=%s | Cause: The EC2 API call failed, possibly due to a network issue. Action: Check the network and the Amazon console and retry the operation. |
129154 | ERROR | Amazon describe-addresses call failed (err=%s)(output=%s | Cause: The EC2 API call failed, possibly due to a network issue. Action: Check the network and the Amazon console and retry the operation. |
129155 | ERROR | Amazon describe-address call failed (err=%s)(output=%s | Cause: The EC2 API call failed, possibly due to a network issue. Action: Check the network and the Amazon console and retry the operation. |
129157 | ERROR | failed to get metadata of \"%s\" | Cause: The EC2 instance metadata access failed. Action: Check the Amazon console and retry the operation. |
129159 | ERROR | Amazon associate-address call failed (err=%s)(output=%s | Cause: The EC2 API call failed, possibly due to a network issue. Action: Check the network and the Amazon console and retry the operation. |
129160 | ERROR | Amazon describe-addresses call failed (err=%s)(output=%s | Cause: The EC2 API call failed, possibly due to a network issue. Action: Check the network and the Amazon console and retry the operation. |
129161 | ERROR | Error deleting resource \"$otherTag\" on \"$otherSys\" with return code of \"$err\". | Cause: LifeKeeper was unable to delete the resource instance on the server. Action: Check adjacent log messages for further details and related messages. Correct any reported errors. |
129162 | ERROR | Could not getRouteTablesByIP | |
129163 | ERROR | Could not getRouteTablesByIP | |
129164 | ERROR | [$SUBJECT event] mail returned $err | |
129165 | ERROR | COMMAND OUTPUT: cat /tmp/err$$ | |
129167 | ERROR | snmptrap returned $err for Trap 180 | |
129168 | ERROR | COMMAND OUTPUT: cat /tmp/err$$ | |
129170 | ERROR | This resource is in the old format. Please update. | |
129171 | ERROR | Error disabling sourceDest checks with return code of \"$ret\". | |
129172 | ERROR | Failed to lookup IP from resource $self->{'tag'}. | |
129173 | ERROR | Failed to lookup device GUID for IP resource $iptag. | |
129174 | ERROR | Could not find eni_id for device GUID $dev. | |
129175 | ERROR | Failed to disable sourceDestChecks. (err=%s)(output=%s | |
129176 | ERROR | Failed to disable sourceDestChecks for $eni_id. (err=%s)(output=%s | |
129177 | ERROR | Error disabling sourceDest checks with return code of \"$ret\". | |
129178 | ERROR | Failed to disable sourceDestChecks for $eni_id. (err=%s)(output=%s | |
129180 | ERROR | Elastic Network Interface $eni sourceDestChecks are enabled. | |
129181 | WARN | WARNING Failed to find IP resources since EC2 resource is extended first. Skipping sourceDestChecks. | |
129182 | WARN | WARNING SourceDestCheck was skipped because Amazon describe-network-interface-attribute call failed. Please check if ec2:DescribeNetworkInterfaceAttribute and ec2:ModifyNetworkInterfaceAttribute are granted. | |
129184 | WARN | Successfully disassociated Elastic IP \"$self->{'EIP'}\" on \"$self->{'DEV'}\", but this Elastic IP is associated with another instance or another ENI (NIC). | |
129185 | WARN | $self->{'EIP'} is not associated due to an error. | |
129186 | WARN | WARNING Failed to find IP resources since EC2 resource is extended first. Skipping sourceDestChecks. | |
129403 | ERROR | END failed create of $TAG due to a $sig signal | Cause: The create process was interrupted by a signal. |
129409 | ERROR | The IP resource $IP_RES is not \"ISP\". | Cause: The IP resource is not in service. Action: Bring the resource in service and retry the operation. |
129410 | ERROR | Could not find IP resource $IP_RES | Cause: The specified IP resource does not exist. Action: Ensure that the IP resource exists and retry the operation. |
129412 | ERROR | EC2 resource $ID is already protected | Cause: A resource with the specified ID already exists. Action: Make sure to clean up any remnants of an old resource before re-creating a new resource. |
129416 | ERROR | Error creating resource \"$TAG\" with return code of \"$lcderror\". | Cause: LifeKeeper was unable to create the resource instance. Action: Check adjacent log messages for further details and related messages. Correct any reported errors. |
129418 | ERROR | Dependency creation between \"$IP_RES\" and \"$TAG\" failed with return code of \"$lcderror\". | Cause: LifeKeeper was unable to create the resource dependency. Action: Check adjacent log messages for further details and related messages. Correct any reported errors. |
129420 | ERROR | In-service failed for tag \"$TAG\". | Cause: LifeKeeper could not bring the resource instance into service. Action: Check adjacent log messages for further details and related messages. Correct any reported errors. |
129423 | ERROR | Could not get ENI ID for $dev. | |
129425 | ERROR | Failed to update route table | |
129426 | ERROR | In-service (dummy) failed for tag \"$TAG\". | |
129800 | ERROR | canextend checks failed for \"$self->{‘tag’}\" (err=$ret | Cause: The pre-extend checks failed for the target server. Action: Check adjacent log messages for further details and related messages. Correct any reported errors. |
129801 | ERROR | canextend checks failed for \"$self->{‘tag’}\". EC2_HOME \"$self->{EC2_HOME}\" does not exist on $me. | |
133106 | ERROR | You must have exactly one IP address resource as the child of the route53 resource. Please reconfigure your resource hierarchy. | |
133111 | ERROR | Failed to init the object. | |
133114 | ERROR | Failed to start Route53 resource $self->{Tag} | |
133115 | ERROR | Failed to start Route53 resource $self->{Tag} | |
133116 | WARN | Could not Get A Record Value Address : $aAWSresult | |
133117 | WARN | aws call failed : $aAWSresult | |
133118 | WARN | aws call failed : $upsertResp | |
133119 | ERROR | Could not Update and Create A Record Set : $upsertResp | |
133120 | WARN | Could not get Route53 API batch request ID from UPSERT response XML data : $upsertResp | |
133122 | WARN | aws call failed : $statusResult | |
133123 | ERROR | Failed to check change batch request status : $statusResult | |
133126 | WARN | Failed to Route53 API response : $rc | |
133129 | ERROR | Route53 resource $self->{Tag} is stopped | |
133130 | ERROR | $self->{Tag} is in the old format. Please update. | |
133601 | ERROR | $usage | |
133602 | ERROR | Template resource \"$template_tag\" on server \"$template_sys\" does not exist | |
133608 | ERROR | Template resource \"$template_tag\" on server \"$template_sys\" does not exist | |
133614 | ERROR | Error creating resource \"$Tag\" on server \"$me\" | |
133617 | ERROR | END failed hierarchy extend of resource $Tag with return value of $ecode. | |
133619 | ERROR | $usage | |
133620 | ERROR | END failed extend of \"$Tag\" due to a \"$sig\" signal | |
133700 | ERROR | $usage | |
133701 | ERROR | END failed create of \"$Route53Tag\" due to a \"$sig\" signal | |
133703 | ERROR | Unable to getlocks on $me during resource create. | |
133704 | ERROR | The route53 on $Route53HostName is already protected by LifeKeeper. | |
133706 | ERROR | Error creating resource $Route53Tag. Error ($rc | |
133708 | ERROR | Dependency create failed between $Route53Tag and $Route53IPResTag. Error ($rc). | |
133710 | ERROR | In-service attempt failed for tag $Route53Tag. | |
133813 | ERROR | Usage: $usage | |
133815 | ERROR | The host name \"$hostname\" is too long. | |
133816 | ERROR | The host name \"$hostname\" is too short. | |
133817 | ERROR | The host name \"$hostname\" contains invalid character. | |
133818 | ERROR | The first character must be an alpha character. | |
133819 | ERROR | The last character must not be a minus sign or period. | |
133826 | ERROR | Host Name cannot be blank. | |
134003 | ERROR | catch a \"$sig\" signal | Cause: The "create" process was interrupted by a signal. Action: Check adjacent log messages. |
134004 | ERROR | Unable to getlocks on $server during resource create. Error ($rc | Cause: Failed to get the administrative lock when creating a resource hierarchy. Action: Check the logs for related errors and try to resolve the reported problem. |
134005 | ERROR | The service \"$serviceName\" is not supported on $server. Error ($rc | Cause: The service does not exist or cannot be protected. Action: Input the appropriate service name. |
134006 | ERROR | The service \"$serviceName\" is already protected on $server. | Cause: This service is already protected. Action: A new resource cannot be created for a service that is already protected. |
134007 | ERROR | Error creating resource $tag. Error ($rc | Cause: LifeKeeper was unable to create the resource instance. Action: Check adjacent log messages. Correct the cause of the error. |
134011 | ERROR | In-service attempt failed for tag $tag. | Cause: Failed to restore the QSP resource. Action: Check the logs related to the service being protected and resolve the problem. |
134015 | ERROR | Unable to rlslocks on $server during resource create. Error ($rc | Cause: Failed to release lock after QSP resource created. Action: Check the logs for related errors and try to resolve the reported problem. |
134103 | ERROR | Template resource \"$template_tag\" on server \"$template_sys\" does not exist | Cause: The resource cannot be found on the template server. Action: Ensure the hierarchy is correct on the template server before extending. |
134104 | ERROR | Template resource \"$template_tag\" on server \"$template_sys\" is not QSP resource (app=$ins1, res=$ins2 | Cause: The template resource is not a QSP resource. Action: Extend only to resources of the same type as the template resource. |
134105 | ERROR | The service \"$service\" is not supported on $me. Error ($check | Cause: The service does not exist on the target server. Action: Install the service on the target server before extending. |
134106 | ERROR | The service \"$service\" is already protected on $me. | Cause: A resource with the same ID already exists on the target server. Action: A second resource cannot be created for the same service. |
134203 | ERROR | catch a \"$sig\" signal | Cause: The "extend" process was interrupted by a signal. Action: Check adjacent log messages. |
134204 | ERROR | Template resource \"$template_tag\" on server \"$template_sys\" does not exist | Cause: The resource cannot be found on the template server. Action: Ensure the hierarchy is correct on the template server before extending. |
134208 | ERROR | Error creating resource \"$tag\" on server \"$me\" | Cause: LifeKeeper was unable to create the resource instance on target server. Action: Check adjacent log messages. Correct the cause of the error. |
134401 | ERROR | timeout $cmd for \"$tag\". Forcibly terminating. | Cause: The "restore" process of the service did not terminate within the specified time. Action: Check the protected service and retry the "restore" operation. Also check the logs for related errors and try to resolve the reported problem. |
134405 | FATAL | Failed to fork process to execute systemctl command: $! | Cause: Failed to fork. This is a system error. Action: Determine why the fork failed. |
134407 | ERROR | systemctl command has failed for \"$tag\" | Cause: Failed to execute the service command. Action: Run the service command manually with the "start" option to reproduce the error, then correct the cause by referring to the error message. |
134501 | ERROR | timeout $cmd for \"$tag\". Forcibly terminating. | Cause: The "remove" process of the service did not terminate within the specified time. Action: Check the protected service and retry the "remove" operation. Also check the logs for related errors and try to resolve the reported problem. |
134505 | FATAL | Failed to fork process to execute service command: $! | Cause: Failed to fork. This is a system error. Action: Determine why the fork failed. |
134507 | ERROR | systemctl command has failed for \"$tag\" | Cause: Failed to execute the service command. Action: Run the service command manually with the "stop" option to reproduce the error, then correct the cause by referring to the error message. |
134601 | ERROR | timeout $cmd for \"$tag\". Forcibly terminating. | Cause: The "quickCheck" process was forcibly terminated because it exceeded the user-defined timeout. Action: Check the protected service. Also check the logs for related errors and try to resolve the reported problem. |
134605 | FATAL | Failed to fork process to execute systemctl command: $! | Cause: Failed to fork. This is a system error. Action: Determine why the fork failed. |
134607 | ERROR | systemctl command has failed for \"$tag\" | Cause: Failed to execute the service command. Action: Run the service command manually with the "status" option to reproduce the error, then correct the cause by referring to the error message. |
134701 | ERROR | timeout $cmd for \"$tag\". Forcibly terminating. | Cause: The "recover" process of the service did not terminate within the specified time. Action: Check the protected service. Also check the logs for related errors and try to resolve the reported problem. |
134706 | FATAL | Failed to fork process to execute systemctl command: $! | Cause: Failed to fork. This is a system error. Action: Determine why the fork failed. |
134708 | ERROR | systemctl command has failed for \"$tag\" | Cause: Failed to execute the service command. Action: Run the service command manually with the "start" option to reproduce the error, then correct the cause by referring to the error message. |
134803 | ERROR | tag \"$tag\" does not exist on server \"$me\" | Cause: The specified tag does not exist. This is an internal error. |
134804 | ERROR | app type \"$ins1\" is not $app | Cause: The specified tag is not QSP resource. This is an internal error. |
134805 | ERROR | res type \"$ins2\" is not $res | Cause: The specified tag is not QSP resource. This is an internal error. |
134823 | ERROR | tag \"$tag\" does not exist on server \"$me\" | Cause: The specified tag does not exist. This is an internal error. |
134824 | ERROR | app type \"$ins1\" is not $app | Cause: The specified tag is not QSP resource. This is an internal error. |
134825 | ERROR | res type \"$ins2\" is not $res | Cause: The specified tag is not QSP resource. This is an internal error. |
134843 | ERROR | tag \"$tag\" does not exist | Cause: The specified tag does not exist. This is an internal error. |
134844 | ERROR | app type \"$ins1\" is not $app | Cause: The specified tag is not QSP resource. This is an internal error. |
134845 | ERROR | res type \"$ins2\" is not $res | Cause: The specified tag is not QSP resource. This is an internal error. |
135002 | ERROR | Socket connection to $host failed (not a timeout): $@ | |
135008 | ERROR | Failed to lock $QUORUM_LOCK_FILE: $!. | |
135009 | ERROR | Failed to open $QUORUM_LOCK_FILE: $! | |
135010 | ERROR | Failed to lock $QUORUM_LOCK_FILE: $!. | |
135011 | ERROR | Failed to open $QUORUM_LOCK_FILE: $! | |
135802 | ERROR | Periodic storage quorum check has not been done for %lld ms, which exceeds 2*QWK_STORAGE_HBEATTIME (%lld) ms. | |
135806 | ERROR | qwk_config() failed. | |
135807 | ERROR | thread_initialize() failed. | |
135808 | ERROR | state_monitor_initialize() failed. | |
135809 | ERROR | state_monitor() failed. | |
135810 | ERROR | start_server() failed. | |
135812 | ERROR | 'fopen' for %s failed: %m | |
135813 | ERROR | 'qwk_object_path=' line cannot be found for %s. | |
135814 | ERROR | '%s' is unknown config. | |
135815 | ERROR | configuration of hbeattime '%d' is incorrect. | |
135816 | ERROR | configuration of numhbeats '%d' is incorrect. | |
135817 | ERROR | configuration of timeout_multiplier '%d' is incorrect. | |
135818 | ERROR | configuration of lcmhbeattime '%d' is incorrect. | |
135819 | ERROR | configuration of lcmnumhbeats '%d' is incorrect. | |
135820 | ERROR | configuration of qwk_object_type is incorrect. | |
135821 | ERROR | configuration of my_node is incorrect. | |
135822 | ERROR | configuration of number_of_node '%d' is incorrect. | |
135823 | ERROR | configuration of number_of_object '%d' is incorrect. | |
135824 | ERROR | configuration of object node is incorrect. | |
135825 | ERROR | configuration of object path is incorrect. | |
135826 | ERROR | my_node is not included in qwk_objects. | |
135827 | ERROR | 'open' for %s failed: %m | |
135828 | ERROR | 'read' for %s failed: %m | |
135829 | ERROR | 'popen' for %s failed: %m | |
135830 | ERROR | 'open' for %s failed: %m | |
135831 | ERROR | 'write' for %s failed: %m | |
135832 | ERROR | 'popen' for %s failed: %m | |
135833 | ERROR | (bug) buffer overflow | |
135834 | ERROR | (bug) data is corrupted | |
135835 | ERROR | 'signature=' line cannot be found. | |
135836 | ERROR | signature '%s' does not match. | |
135837 | ERROR | 'local_node=' line cannot be found. | |
135838 | ERROR | local_node '%s' does not match. | |
135839 | ERROR | 'time=' line cannot be found. | |
135840 | ERROR | 'sequence=' line cannot be found. | |
135841 | ERROR | sequence '%s' scan failed. | |
135842 | WARN | 'node=' line cannot be found. index=%d | |
135843 | ERROR | 'commstat=' line cannot be found. index=%d | |
135844 | ERROR | 'checksum=' line cannot be found. | |
135845 | ERROR | checksum '%s' scan failed. | |
135846 | ERROR | checksum does not match. | |
135849 | ERROR | qwk object was not found. | |
135851 | ERROR | failed to read qwk object. | |
135852 | ERROR | failed to decode node_info. | |
135854 | WARN | sequence backed down from %llu to %llu. | |
135855 | ERROR | 'malloc' for %zu failed: %m | |
135856 | ERROR | thread_create() failed. index=%d | |
135870 | ERROR | (bug) data is corrupted | |
135871 | ERROR | format error in request. | |
135872 | ERROR | format error in quorum_verify request. lkevent cannot be found. | |
135873 | ERROR | format error in witness_verify request. lkevent cannot be found. | |
135874 | ERROR | format error in witness_verify request. target_node cannot be found. | |
135875 | ERROR | '%s' is unknown command. | |
135877 | ERROR | qwk_receive() did not receive full header. Close socket. | |
135878 | ERROR | Request is too long. Close socket. | |
135879 | ERROR | qwk_receive() did not receive full request. Close socket. | |
135880 | ERROR | do_request() failed. | |
135881 | ERROR | qwk_send() did not send full header. Close socket. | |
135882 | ERROR | qwk_send() did not send full response. Close socket. | |
135884 | ERROR | qwk_accept() failed. | |
135885 | ERROR | thread_create() failed. | |
135886 | ERROR | cannot create socket. | |
135887 | ERROR | 'bind' failed: %m | |
135888 | ERROR | 'listen' failed: %m | |
135889 | ERROR | create_sockets() failed. | |
135890 | ERROR | thread_create() failed. | |
135891 | ERROR | 'pthread_attr_init' failed: %m | |
135892 | ERROR | 'pthread_attr_setstacksize' failed: %m | |
135893 | ERROR | 'pthread_create' failed: %m | |
135894 | ERROR | request is too long. header.size=%zu | |
135895 | ERROR | qwk_send() failed for header. | |
135896 | ERROR | qwk_send() failed for request. | |
135897 | ERROR | qwk_send() failed for termination. | |
135898 | ERROR | qwk_receive() did not receive full header. | |
135899 | ERROR | Response buffer is not large enough. Server sent %zu bytes. | |
135900 | ERROR | qwk_receive() did not receive full response. | |
135901 | ERROR | '%s' is unknown command. | |
135903 | ERROR | Cannot create socket. | |
135904 | ERROR | 'connect' failed: %m | |
135905 | ERROR | request_send() failed. | |
135906 | ERROR | request_receive() failed. | |
135907 | ERROR | '%s' is unknown lkevent. | |
135908 | ERROR | '%s' is unknown qwktype. | |
135909 | ERROR | '%s' is unknown node state. | |
135910 | ERROR | '%s' is unknown quorum state. | |
135911 | ERROR | 'accept' failed: %m | |
135912 | ERROR | You must install the LifeKeeper license key for Storage Quorum Witness Kit. | |
135913 | ERROR | Cannot find the configuration for node '%s' in the configuration file. If the configuration of the cluster is changed after the initialization, it needs to be initialized again. Please see the SIOS product documentation for information on how to reinitialize using the qwk_storage_init command. | |
135914 | WARN | The first cycle did not complete before timeout (%lld ms). | Cause: The initial write to the quorum object in Storage mode timed out after the indicated time (in milliseconds). Action: Set a larger value for QWK_STORAGE_HBEATTIME, QWK_STORAGE_NUMHBEATS, or both. If message 135802 is output frequently after this message, refer to the troubleshooting section on the Storage mode page of Quorum/Witness. |
135916 | WARN | The first cycle did not complete before timeout (%lld ms). | Cause: The initial write to the quorum object in Storage mode timed out after the indicated time (in milliseconds). Action: Set a larger value for QWK_STORAGE_HBEATTIME, QWK_STORAGE_NUMHBEATS, or both. If message 135802 is output frequently after this message, refer to the troubleshooting section on the Storage mode page of Quorum/Witness. |
135917 | ERROR | params is NULL. return 0 as timeout msec. | Cause: There was a problem with the internal process. Action: Ask support to investigate. |
135918 | ERROR | s3_read_qwk_object for %s failed: %d (%s | |
135919 | ERROR | s3_write_qwk_object for %s failed: %d (%s | |
135920 | ERROR | %s is empty | |
135921 | ERROR | mkstemp for %s failed: %m | |
135922 | ERROR | mkstemp for %s failed: %m | |
135998 | EMERG | An unrecoverable error was detected in qwk_storage_server. Stop restarting qwk_storage_server. | |
135999 | ERROR | -c $cmd | |
136002 | ERROR | Usage: $usage | Cause: Invalid arguments provided to the SAP HANA create script. Action: Please provide appropriate arguments in the form: <Resource Tag> <SAP SID> <HDB Instance> [Switchback Type] [Virtual IP Resource Tag] |
136003 | ERROR | END failed create of resource $tag on server $me with return value of $errcode. | Cause: Failure during SAP HANA resource creation. Action: Verify that SAP HANA System Replication is fully configured and enabled on both the primary and secondary replication sites and reattempt the resource creation operation. |
136005 | ERROR | An unknown error has occurred in utility rlslocks on server $me. View the LifeKeeper logs for details and retry the operation. | Cause: Failure of the LifeKeeper rlslocks utility during SAP HANA resource creation. Action: Resolve any issues found in the LifeKeeper log file and reattempt the resource creation operation. |
136006 | ERROR | END failed create of resource $tag on server $me with signal $sig. | Cause: Failure during SAP HANA resource creation due to a signal. Action: Review the LifeKeeper log file for details. |
136008 | ERROR | An unknown error has occurred in utility getlocks on server $me. View the LifeKeeper logs for details and retry the operation. | Cause: Failure of the LifeKeeper getlocks utility during SAP HANA resource creation. Action: Resolve any issues found in the LifeKeeper log file and reattempt the resource creation operation. |
136009 | ERROR | The SAP HANA product was not found in the directory $obj->{'hana_util_path'} on server $me. Verify that SAP HANA is correctly installed and that all necessary file systems are mounted. | Cause: Required SAP HANA binaries could not be located during resource creation. Action: Verify that SAP HANA is correctly installed and configured on all servers in the cluster. |
136010 | ERROR | Failed to create resource as id $id already exists on system $me. | Cause: A LifeKeeper resource with the given ID already exists on the system. Action: Check whether the SAP HANA database is already protected by LifeKeeper. |
136011 | ERROR | Failed to create new tag $tag for SAP HANA resource on $me. | Cause: The provided SAP HANA resource tag is already in use by another LifeKeeper resource and the LifeKeeper newtag utility failed to create a new tag. Action: Choose a different tag name for the SAP HANA resource. |
136012 | ERROR | Failed creation of resource with $tag on system $me. | Cause: Failed to create the given SAP HANA resource in LifeKeeper. Action: Resolve any issues found in the LifeKeeper log file and reattempt the resource creation operation. |
136014 | ERROR | Failed to create resource dependency for parent $tag and child $virtual_ip_tag. | Cause: Failed to create a dependency between the SAP HANA resource and its dependent virtual IP resource. Action: Resolve any issues found in the LifeKeeper log file and reattempt the resource creation operation. |
136015 | ERROR | The info field for resource $tag could not be successfully generated using values [SID: $info_sid, Instance: $info_instance, Replication Mode: $info_repl_mode, Site Name: $info_site_name, Operation Mode: $info_oper_mode]. If using SAP HANA System Replication, please verify that it is fully configured and enabled on both the primary and secondary systems before creating the SAP HANA resource. | Cause: An invalid value was found when creating the SAP HANA resource info field. Action: Verify that SAP HANA is properly installed and configured and that SAP HANA System Replication is fully configured and enabled on all servers in the cluster. |
136016 | ERROR | The selected server $me is not the primary/source system for SAP HANA System Replication for the selected SID $sid and HDB instance $instance. Please select 'Cancel' and start this action on the primary/source HANA System Replication system. | Cause: Resource creation is initiated on a secondary system in HANA System Replication. Action: Initiate the create action on a primary system in HANA System Replication. |
136031 | ERROR | END failed extend of resource $target_tag on server $me with return value of $err_code. | Cause: Failure during SAP HANA resource extension. Action: Resolve any issues found in the LifeKeeper log file and reattempt the resource extension operation. |
136033 | ERROR | Usage: $usage | Cause: Invalid arguments provided to the SAP HANA extend script. Action: Please provide appropriate arguments in the form: <Template System> <Template Tag> <Switchback Type> <Target Tag> |
136035 | ERROR | Template resource $template_tag on server $template_sys does not exist. | Cause: The template SAP HANA resource to be extended does not exist on the template server. Action: Verify that the SAP HANA resource that is being extended exists on the template server. |
136036 | ERROR | Resource with matching id $target_id already exists on server $me for App $app_type and Type $res_type. | Cause: An SAP HANA resource with the same LifeKeeper ID already exists on the target system. Action: Check whether the SAP HANA database is already protected by LifeKeeper on the target server. |
136037 | ERROR | Resource with matching tag $target_tag already exists on server $me for App $app_type and Type $res_type | Cause: An SAP HANA resource with the same LifeKeeper resource tag already exists on the target server. Action: Check whether the SAP HANA database is already protected by LifeKeeper on the target server. |
136039 | ERROR | Error creating resource $target_tag on system $me. | Cause: Failed to create an equivalent SAP HANA resource on the target server. Action: Resolve any issues found in the LifeKeeper log file and reattempt the resource extension operation. |
136040 | ERROR | The target tag ($target_tag) and template tag ($template_tag) must be the same. | Cause: The resource tag name used on the target (secondary) system differs from the tag name used on the template (primary) system; both must be the same. Action: Use the same resource tag name on both systems when creating and extending the resource. |
136041 | ERROR | Error getting resource information for $template_tag on server $template_sys. | Cause: Failed to obtain information about the given SAP HANA resource on the given server. Action: Verify that a SAP HANA resource with the given tag exists on the given server. |
136042 | ERROR | Failed to update info fields for equivalent resources while extending resource $target_tag on $me. | Cause: Failed to obtain information about the given SAP HANA resource on the given server. Action: Verify that a SAP HANA resource with the given tag exists on the given server. |
136045 | ERROR | Cannot extend resource $template_tag to server $me. | Cause: The SAP HANA canextend script indicates that the resource cannot be extended to the given target system. Action: Resolve any issues found in the LifeKeeper log file and reattempt the resource extension operation. |
136047 | ERROR | Usage: $usage | Cause: Invalid arguments provided to the SAP HANA canextend script. Action: Please provide appropriate arguments in the form: <Template System> <Template Tag> |
136048 | ERROR | Resource $template_tag does not exist on server $template_sys. | Cause: The template SAP HANA resource to be extended does not exist on the template server. Action: Verify that the SAP HANA resource that is being extended exists on the template server. |
136049 | ERROR | The system user $hana_user does not exist on server $me. | Cause: The SAP Administrative User for the SAP HANA database does not exist on the given server. Action: Verify that SAP HANA is properly installed and configured. |
136050 | ERROR | The user id for user $hana_user ($template_uid) on template server $template_sys is not the same as user id ($uid) on target server $me. | Cause: The user ID for the SAP Administrative User differs between the template and target servers. Action: Verify that SAP HANA is properly installed and configured on all servers in the cluster. |
136051 | ERROR | The group id for user $hana_user ($template_gid) on template server $template_sys is not the same as group id ($gid) on target server $me. | Cause: The group ID for the SAP Administrative User differs between the template and target servers. Action: Verify that SAP HANA is properly installed and configured on all servers in the cluster. |
136052 | ERROR | The home directory for user $hana_user ($template_home) on template server $template_sys is not the same as home directory ($user_home) on target server $me. | Cause: The home directory for the SAP Administrative User differs between the template and target servers. Action: Verify that SAP HANA is properly installed and configured on all servers in the cluster. |
136053 | ERROR | The SAP HANA instance $instance does not exist for $sid on server $me. | Cause: Installation directories for the given SAP HANA database instance could not be located. Action: Verify that SAP HANA is properly installed and configured on all servers in the cluster. |
136055 | ERROR | The SAP HANA site name $target_obj->{'site_name'} on server $me must be different from site name $template_obj->{'site_name'} on $template_obj->{'sys'}. | Cause: The SAP HANA System Replication site name is the same on both the primary and secondary servers. Action: Stop the SAP HANA database on the secondary server and use the hdbnsutil utility to re-register the secondary replication site using a different site name. |
136056 | ERROR | Unable to obtain SAP HANA System Replication parameters for database $instance on server $me. Please verify that SAP HANA System Replication is enabled and properly configured and that the database instance is running on all servers in the cluster. | Cause: SAP HANA System Replication parameters could not be determined for the database on the given system. Action: Verify that SAP HANA System Replication is enabled and properly configured and that the database is running on all servers in the cluster. |
136057 | ERROR | Unable to create a HANA object for database $instance on server $me. Please verify that database instance $instance is properly installed. | Cause: Unable to create an internal hana object representing the given instance on the given server. Action: Verify that the database instance is properly installed and that all necessary file systems are mounted. |
136062 | ERROR | Invalid operation mode: $oper_mode. Acceptable values: $valid_oper. | Cause: An invalid operation mode was used when extending a SAP HANA resource in a multi-target configuration. Action: The acceptable operation modes are listed in the message. |
136065 | ERROR | Failed to update the instance info field. | Cause: The info field for the SAP HANA resource was not updated to allow multi-target support. Action: Make sure all systems in the cluster have LifeKeeper running and the resource in-service. |
136066 | ERROR | The SAP HANA resource $template_tag is configured on the maximum (%d) supported number of servers. | Cause: The SAP HANA resource has been configured on the maximum number of servers that the LifeKeeper HANA recovery kit supports. Action: The maximum number of servers supported by the LifeKeeper HANA recovery kit cannot be changed. |
136067 | ERROR | The SAP HANA resource $template_tag with "delta_datashipping" is configured on the maximum (2) supported number of servers. | Cause: SAP does not support delta_datashipping in a multi-target configuration. Action: Use a supported operation mode. |
136083 | ERROR | Usage: $usage | Cause: Invalid arguments in the SAP HANA delete script. Action: Provide both a valid HANA LifeKeeper resource tag name and resource ID. Usage: delete -t <tag> -i <id> [-U] |
136096 | ERROR | One of the required parameters (server, tag, or action) was missing. Unable to set the local recovery policy. | Cause: One of the required parameters (server, tag, or action) was not provided. Action: Provide the required parameters and reattempt the operation. |
136097 | ERROR | Failed to $flg_action local recovery for SAP HANA resource $tag on server $node. | Cause: Failed to enable or disable local recovery for the given SAP HANA resource on the given server. Action: Verify that LifeKeeper is running and fully initialized, then reattempt the operation. |
136099 | ERROR | Failed start of SAP Host Agent on server $sys. | Cause: Failed to start the SAP Host Agent processes (i.e., saphostexec, saposcol) on the given server. Action: Check SAP Host Agent process trace logs and correct any issues found, then reattempt the operation. |
136100 | ERROR | Failed start of SAP Host Agent on server $sys. | Cause: Failed to start SAP Host Agent processes (i.e., saphostexec, saposcol) on the given server. Action: Inspect the SAP Host Agent process trace logs and correct any issues found, then reattempt the operation. |
136101 | ERROR | Failed to register server $sys as a secondary SAP HANA System Replication site. | Cause: Failed to register the given server as a secondary site for the given database in SAP HANA System Replication. Action: Inspect the SAP HANA trace files (e.g., nameserver_<hostname>.xxxxx.xxx.trc) for more details. |
136119 | WARN | Resource $eqv_tag not found on server $sys. | Cause: While generating info for a site, an equivalent resource was unavailable for updating. Action: Verify that a SAP HANA resource with the given tag exists on the given server. |
136124 | ERROR | Failed to update info field for resource $eqv_tag on server $sys from [$orig_info] to [$new_info]. | Cause: An internal error has occurred in LifeKeeper. Action: An attempt to roll back the info fields may occur based on configuration; otherwise, a manual update may be required. |
136127 | WARN | WARNING: Failed to roll back to original info fields after failed update. Temporary mismatches may exist in connection types between servers until they can be successfully synchronized. | Cause: An internal error has occurred in LifeKeeper. Action: An attempt to roll back an ins_list info field has failed. Another attempt may occur during a quickCheck cycle, otherwise a manual configuration may be required. |
136152 | WARN | There was an error retrieving the HSR status from the "$self->{'hana_sysreplstatus_script'}" script for site $site_id, setting the value to "$value". | Cause: The attempt to retrieve the SAP HANA System Replication status failed. Action: Inspect the LifeKeeper and SAP log files to determine the cause of the failure and manually correct any issues found. While the SAP HANA resource is in-service, LifeKeeper will automatically continue attempting to read and update the status as it is available. |
136160 | ERROR | Unable to create SAP HANA object. The SID and instance for the SAP HANA database must be provided. | Cause: Either the SAP SID or the SAP HANA instance name was missing while trying to create an SAP HANA object. Action: Inspect the LifeKeeper log file for more details. |
136161 | ERROR | Unable to create SAP HANA object. Resource system name and tag name must be provided. | Cause: Either the system name or the resource tag name was missing while trying to create an SAP HANA object. Action: Inspect the LifeKeeper log file for more details. |
136162 | ERROR | Could not find any information regarding resource $tag on $sys. | Cause: Failed to obtain information about the SAP HANA resource with the given tag. Action: Verify that a SAP HANA resource with the given tag exists on the given server. |
136168 | ERROR | Unable to check status of SAP Host Agent on server $self->{'sys'}. Command "$curr_cmd" returned exit code $ret. | Cause: Failed to determine the status of the SAP Host Agent processes on the given server. Action: Inspect the SAP Host Agent trace files (e.g., dev_saphostexec) for more details. |
136174 | ERROR | Unable to create SAP HANA object. Input value is missing: sys:($self->{'sys'}, SID:$self->{'sid'}, instance:$self->{'instance'}, instance_number:$self->{'instance_number'}). | Cause: Either the SAP SID, the SAP HANA instance name, or one of the SAP HANA System Replication values was missing while trying to create an SAP HANA object. Action: Inspect the LifeKeeper log file for more details. |
136190 | ERROR | Failed to start SAP Host Agent processes on server $self->{'sys'}. Command "$hostagent_cmd" returned $ret. | Cause: Failed to start the SAP Host Agent processes on the given server. Action: Inspect the SAP Host Agent trace files (e.g., dev_saphostexec) for more details. |
136191 | ERROR | Failed to start SAP OS Collector process on server $self->{'sys'}. Command "$oscol_cmd" returned $ret. | Cause: Failed to start the SAP OS Collector process on the given server. Action: Inspect the SAP OS Collector trace files (e.g., dev_coll) for more details. |
136193 | ERROR | $takeover_text of SAP HANA System Replication for SAP HANA database $self->{'instance'} failed on server $node with exit code $ret. | Cause: Failed to register the given server as primary master for the given database in SAP HANA System Replication. Action: Inspect the SAP HANA trace files (e.g., nameserver_<hostname>.xxxxx.xxx.trc) for more details. |
136202 | EMERG | Failed to disable Autostart for SAP HANA instance $self->{'instance'} on server $sys with exit code $remexec_ret. Please manually set "Autostart = 0" in the instance profile $profile on $sys. | Cause: The value of the Autostart parameter could not be modified in the given HDB instance profile on the given server. Action: Edit the HDB instance profile manually and set "Autostart = 0". |
136205 | ERROR | Failed start of SAP Start Service for SAP HANA database $instance on server $sys. | Cause: Failed to start SAP Start Service for the given SAP HANA database. Action: Inspect the SAP Start Service trace files (e.g., sapstartsrv.log) for more details. |
136208 | ERROR | LifeKeeper was unable to determine the SAP HANA System Replication mode for database $rem_obj->{'instance'} on server $rem_obj->{'sys'} while attempting to identify the previous primary replication site. Please resolve the issue and bring the SAP HANA resource in-service on the system where the database should be registered as primary master. | Cause: Failed to determine the SAP HANA System Replication mode on the given server. As a result, the previous primary replication site could not be identified. Action: Inspect the LifeKeeper log file for more details. |
136210 | ERROR | Failed start of SAP Start Service for SAP HANA database $rem_obj->{'instance'} on server $rem_obj->{'sys'}. Unable to stop the database on the server where it is currently registered as primary master. | Cause: Failed to start SAP Start Service for the given SAP HANA database on the given server. As a result, the database could not be stopped on the current primary SAP HANA System Replication site. Action: Inspect the SAP Start Service trace files (e.g., sapstartsrv.log) for more details. |
136212 | ERROR | Failed stop of SAP HANA database $rem_obj->{'instance'} on server $rem_obj->{'sys'} where it is currently registered as primary master. | Cause: Failed to stop the given SAP HANA database on the given server. Action: Inspect the SAP HANA trace files and LifeKeeper log file for more details. |
136217 | ERROR | Failed start of SAP HANA database $instance on server $sys. | Cause: Failed to start the given SAP HANA database on the given server. Action: Inspect the SAP HANA trace files and LifeKeeper log file for more details. |
136220 | ERROR | Unable to register $sys as a secondary SAP HANA System Replication site for database $instance. The host name of the current primary replication site was not provided. | Cause: The host name of the current primary SAP HANA System Replication site was not provided when attempting to register a secondary replication site. Action: Inspect the LifeKeeper log file for more details. |
136233 | EMERG | WARNING: A temporary communication failure has occurred between servers $self->{'sys'} and $sys. Manual intervention is required in order to minimize the risk of data loss. To resolve this situation, please take one of the following resource hierarchies out of service: $self->{'tag'} on $self->{'sys'} or $eqv{$sys} on $sys. The server that the resource hierarchy is taken out of service on will become the standby server for SAP HANA database $self->{'instance'}. | Cause: A temporary communication failure has caused the given equivalent HANA resources to both be brought in-service at the same time on their respective host servers. Action: Take the entire HANA resource hierarchy out of service on the server which should become the secondary replication site. Once the database has been stopped on that server, LifeKeeper will automatically register it as a secondary replication site during the next quickCheck cycle. |
136234 | EMERG | WARNING: SAP HANA database $self->{'instance'} is running and registered as primary master on the following servers: $primary_node_list. Manual intervention is required in order to minimize the risk of data loss. To resolve this situation, please stop database $self->{'instance'} on the standby server by running the command 'su - $self->{'sid_admin'} -c "$SAP_CONTROL -nr $self->{'instance_number'} -function StopWait $HANA_STOP_WAIT 5"' on that server, allow LifeKeeper to register the standby server as a secondary replication site, then use LifeKeeper to bring resource $self->{'tag'} in-service on the intended primary replication site. | Cause: The given SAP HANA database is running and registered as primary master on two cluster servers concurrently. Action: Use the command provided in the message to stop the database on the standby server. Once the database is stopped, LifeKeeper will automatically register the standby server as a secondary replication site. |
136236 | ERROR | Failed to remove $action_flag flag on server $node. | Cause: Failed to remove the !HANA_DATA_OUT_OF_SYNC_<HANA Tag> LifeKeeper flag on the given server. This will cause the given SAP HANA resource to fail to come in-service on the given server until it is removed. Action: Verify that LifeKeeper is running and fully initialized. If it can be verified that SAP HANA System Replication is in-sync, the out-of-sync flag can be removed manually with the command "/opt/LifeKeeper/bin/flg_remove -f '!HANA_DATA_OUT_OF_SYNC_<HANA Tag>'". |
136238 | ERROR | Unable to create SAP HANA object. Resource system name and tag name must be provided. | Cause: Either the server name or the resource tag name was missing while trying to create an SAP HANA object. Action: Inspect the LifeKeeper log file for more details. |
136239 | ERROR | Failed start of SAP Start Service for SAP HANA database $obj->{'instance'} on server $obj->{'sys'}. Unable to determine status of the database on $obj->{'sys'}. | Cause: Failed to start SAP Start Service for the given SAP HANA database on the given server. As a result, the status of the database could not be determined. Action: Inspect the SAP Start Service trace files (e.g., sapstartsrv.log) for more details. |
136242 | ERROR | Unable to create SAP HANA object. At least one of the SAP HANA System Replication values is missing (site_name:$self->{'site_name'}, site_id:$self->{'site_id'}, repl_mode:$self->{'repl_mode'}, oper_mode:$self->{'oper_mode'}). | Cause: SAP HANA System Replication parameters could not be determined for the database on the given system. Action: Verify that SAP HANA System Replication is enabled and properly configured and that the database is running on all servers in the cluster. |
136243 | ERROR | Unable to locate the pingnfs utility ($pingnfs) on server $me. Please verify that this utility exists and is executable. | Cause: Unable to locate the pingnfs utility used for testing the availability of exported NFS shares. Action: Verify that the pingnfs utility exists in the given location and is executable. |
136245 | ERROR | Critical NFS shares being exported by server $sys ($export_list) are not currently available. Please verify that the NFS server is alive and that all NFS-related services are running. If necessary, the NFS_VERSION and NFS_RPC_PROTOCOL parameters can be set in /etc/default/LifeKeeper to control the NFS version and transport protocol used when checking accessibility of the NFS server. | Cause: The given critical NFS shared file systems are not currently available. Action: Verify that the NFS server exporting the file systems is alive and that all necessary NFS-related services are running. |
136248 | ERROR | Unable to open file $crit_mount_file on server $me. Please verify that this file exists and is read-enabled. | Cause: Unable to open the file containing mount information for critical NFS shares on the given server. Action: Verify that the file exists in the specified location and is read-enabled. |
136253 | ERROR | The SAP HANA "takeover with handshake" feature is available only for SAP HANA versions 2.0 SPS04 and greater. Database $self->{'instance'} cannot be resumed. | Cause: Resuming a suspended database is not supported in SAP HANA versions earlier than 2.0 SPS04. Action: Upgrade to SAP HANA 2.0 SPS04 or later in order to use features related to "Takeover with Handshake". |
136254 | ERROR | Attempt to resume suspended primary database $self->{'instance'} on server $self->{'sys'} failed with exit code $ret. | Cause: The attempt to resume the suspended database instance on the given server failed. Action: Inspect the LifeKeeper and SAP log files to determine the cause of the failure. Either reattempt the operation or bring the corresponding LifeKeeper resource in-service on a different server. |
136258 | ERROR | Attempt to register server $node as a secondary SAP HANA System Replication site for database $self->{'instance'} failed with exit code $ret. | Cause: The attempt to register the given server as a secondary SAP HANA System Replication site for the given database instance failed. Action: Inspect the LifeKeeper and SAP log files to determine the cause of the failure and manually correct any issues found. While the SAP HANA resource is in-service, LifeKeeper will automatically continue attempting to register the backup server in a secondary HSR role. |
136263 | ERROR | Usage: $usage | Cause: Invalid arguments in the SAP HANA resource restore script. Action: Please provide appropriate arguments in the form: -t <Resource Tag> -i <Resource ID> |
136265 | ERROR | Error getting resource information for $tag on server $me. | Cause: Failed to obtain information about the given SAP HANA resource on the given server. Action: Verify that a SAP HANA resource with the given tag exists on the given server. |
136266 | ERROR | The contents of SAP HANA database $instance may not be in sync on $me. To protect the data, LifeKeeper will not restore $tag. Please restore the resource on the previous source server to allow the resync to complete. To force the resource in-service use "lkcli hana force --sys $me --tag $tag" or the GUI option "Force In Service". | Cause: SAP HANA System Replication was not in sync before attempting to bring the database resource in-service on the backup server. Action: Bring the SAP HANA resource in-service on the previous primary server and allow the resynchronization to complete. |
136275 | ERROR | Failed to determine SAP HANA System Replication mode for database $instance on server $me. | Cause: Failed to determine the SAP HANA System Replication mode for the given database on the given server. Action: Inspect the SAP HANA trace files (e.g., nameserver_<hostname>.xxxxx.xxx.trc) and the LifeKeeper log file for more details. |
136277 | ERROR | The resource $tag protecting SAP HANA database $instance has the last owner flag on $other->{'sys'} indicating there are changes on $other->{'sys'} that have not replicated to all systems. | Cause: There are multiple systems that have unsynchronized data. This is the result of systems having the SAP HANA resource in-service without synchronizing the data to all standby systems. Action: Manually determine which system has the best or most up-to-date data. The SAP HANA resource will have to be forced in-service on the system with the best data. |
136300 | WARN | Detected unsupported NFS version "$found_nfs_vers" for mount point $mount_loc from shared file system $mount_dev. The default NFS version specified by the pingnfs utility will be used instead. If necessary, this value can be overridden by setting the NFS_VERSION parameter in /etc/default/LifeKeeper. | Cause: The NFS version is not supported. LifeKeeper supports versions 2, 3, and 4. Action: Use a supported NFS version. |
136301 | WARN | Detected unsupported NFS transport protocol "$found_proto" for mount point $mount_loc from shared file system $mount_dev. The default protocol specified by the pingnfs utility will be used instead. If necessary, this value can be overridden by setting the NFS_RPC_PROTOCOL parameter in /etc/default/LifeKeeper. | Cause: The NFS protocol is not supported. LifeKeeper supports tcp and udp. Action: Use a supported protocol. |
136302 | ERROR | Failed to create $action_flag flag on server $node. | Cause: The LifeKeeper HANA recovery kit was unable to create a flag. Action: Review the LifeKeeper logs and system logs for errors. |
136351 | ERROR | Usage: $usage | Cause: Invalid arguments in the SAP HANA resource quickCheck script. Action: Please provide appropriate arguments in the form: -t <Resource Tag> -i <Resource ID> |
136353 | ERROR | Error getting resource information for $tag. | Cause: Failed to obtain information about the given SAP HANA resource. Action: Verify that a SAP HANA resource with the given tag exists on the given server. |
136354 | EMERG | The replication mode of the SAP HANA database $instance corresponding to resource $tag was modified outside of LifeKeeper and is no longer registered as primary master on server $me. Please bring the SAP HANA resource in-service on the server where the database should be registered as primary master. Resource monitoring for $tag will be suspended until the issue is resolved. | Cause: The SAP HANA System Replication mode for the given database was modified outside of LifeKeeper. Action: Bring the SAP HANA resource in-service on the server where it should be registered as primary master. |
136363 | EMERG | LifeKeeper was unable to determine the SAP HANA System Replication mode for database $instance corresponding to resource $tag on server $me. Resource monitoring for $tag will be suspended until the issue is resolved. | Cause: Failed to determine the SAP HANA System Replication mode on the given server. Action: Inspect the LifeKeeper log file for more details. |
136375 | EMERG | An NFS server exporting a critical shared file system for resource $tag is currently unavailable. Resource monitoring for $tag will be suspended until the issue is resolved. | Cause: A critical NFS shared file system is currently unavailable. Action: Verify that the NFS server is alive and that all necessary NFS-related services are running. |
136376 | EMERG | WARNING: LifeKeeper resource $tag is designed for use in situations where SAP HANA System Replication (HSR) is disabled, but HSR was found to be enabled on server $me. Please ensure that the correct LifeKeeper resource type is being used for your current SAP HANA configuration. | Cause: The given LifeKeeper resource is designed to protect a SAP HANA database environment where HANA System Replication (HSR) is disabled, but HSR is currently enabled on the given server. Action: Verify that the correct LifeKeeper resource type is being used for your SAP HANA configuration. If you have migrated from a SAP HANA configuration where HSR was disabled to one where it is enabled, the corresponding SAP HANA resource must be recreated in LifeKeeper. |
136377 | EMERG | SAP HANA database $instance corresponding to resource $tag is currently suspended on server $me due to actions performed outside of LifeKeeper. Please take the SAP HANA resource out of service on server $me and bring it in-service on the server where the database should be registered as primary master. Bringing resource $tag back in-service on $me will resume the suspended database. Resource monitoring for $tag will be suspended until the issue is resolved. | Cause: The given SAP HANA database instance has been suspended on the given server due to actions performed outside of LifeKeeper. Action: To resume the suspended database on the server where the corresponding LifeKeeper resource is currently in-service (Active), bring resource $tag back in-service on that server. Otherwise, bring the LifeKeeper SAP HANA resource in-service on the intended primary replication site. |
136450 | ERROR | Usage: $usage | Cause: Invalid arguments provided to the SAP HANA remove script. Action: Please provide appropriate arguments in the form: <Template Tag> <Template Id> |
136454 | ERROR | Error getting resource information for $tag. | Cause: Failed to obtain information about the given SAP HANA resource. Action: Verify that a SAP HANA resource with the given tag exists on the given server. |
136456 | ERROR | Failed start of SAP Start Service for SAP HANA database $instance on server $me. | Cause: Failed to start SAP Start Service for the given SAP HANA database. Action: Inspect the SAP Start Service trace files (e.g., sapstartsrv.log) for more details. |
136459 | EMERG | WARNING: LifeKeeper resource $tag is designed for use in situations where SAP HANA System Replication (HSR) is disabled, but HSR was found to be enabled on server $me. Please ensure that the correct LifeKeeper resource type is being used for your current SAP HANA configuration. | Cause: The given LifeKeeper resource is designed to protect a SAP HANA database environment where HANA System Replication (HSR) is disabled, but HSR is currently enabled on the given server. Action: Verify that the correct LifeKeeper resource type is being used for your SAP HANA configuration. If you have migrated from a SAP HANA configuration where HSR was disabled to one where it is enabled, the corresponding SAP HANA resource must be recreated in LifeKeeper. |
136462 | ERROR | Failed to remove flag "${hana::HANA_FLAG_LEAVE_DB_RUNNING}_$tag" on server $me. This may cause subsequent remove actions for resource $tag on server $me to unintentionally fail to stop the database. | Cause: Failed to remove the LifeKeeper !volatile!hana_leave_db_running_<tag> flag on the given server. Action: Manually remove the flag with the command "/opt/LifeKeeper/bin/flg_remove -f '!volatile!hana_leave_db_running_<tag>'" on the given server. While the flag exists, out-of-service operations for the SAP HANA resource on the given server will leave the protected database instance running. |
136525 | ERROR | $cmd::The name of the machine was not specified. | Cause: The comm_up script was called without a required parameter. Action: This is an internal error in LifeKeeper. |
136528 | ERROR | $cmd::HANA comm_up timed out, will not in-service AUTORES_ISP resources. | Cause: The HANA comm_up script will wait for LifeKeeper initialization (LK_INITDONE) to complete before restoring (in-service) AUTORES_ISP resources. This error indicates the initialization did not finish in the allotted time. Action: Review the logs to determine what may be causing the initialization to not finish in time. Modify HANA_STARTUP_TIMEOUT in /etc/default/LifeKeeper if additional time is needed. |
136550 | ERROR | Usage: $usage | Cause: Invalid arguments in the HANA resource recover script. Action: Provide both a valid HANA LifeKeeper resource tag name and resource ID. Usage: recover -d <tag> -n <id> |
136555 | ERROR | Error getting resource information for $tag on server $me. | Cause: Failed to obtain information about the given HANA resource on the given server. Action: Verify that the server is online, LifeKeeper is running, and the HANA resource exists. |
136556 | EMERG | The replication mode of the SAP HANA database $instance corresponding to resource $tag was modified outside of LifeKeeper and is no longer registered as primary master on server $me. Please bring the SAP HANA resource in-service on the server where the database should be registered as primary master. Resource monitoring for $tag will be suspended until the issue is resolved. | Cause: The SAP HANA System Replication mode for the given database was modified outside of LifeKeeper. Action: Bring the SAP HANA resource in-service on the server where it should be registered as primary master. |
136558 | EMERG | LifeKeeper was unable to determine the SAP HANA System Replication mode for database $instance corresponding to resource $tag on server $me. Resource monitoring for $tag will be suspended until the issue is resolved. | Cause: The SAP HANA System Replication mode could not be determined for the given database on the given server. Action: Inspect the SAP HANA trace files and LifeKeeper log file for more details. |
136559 | ERROR | Resource $tag is no longer ISP on server $me. Exiting $cmd for $tag. | Cause: The given SAP HANA resource is no longer ISP on the given server. Action: Inspect the LifeKeeper log file for more details. |
136650 | ERROR | Usage: $usage | Cause: Invalid arguments provided to the SAP HANA hana_stop_all_dbs script. Action: Please provide appropriate arguments in the form: hana_stop_all_dbs -t <tag> |
136654 | ERROR | Error getting resource information for $tag. | Cause: Failed to obtain information about the given SAP HANA resource. Action: Verify that a SAP HANA resource with the given tag exists on the given server. |
136658 | ERROR | Failed start of SAP Start Service for SAP HANA database $x->{'instance'} on server $x->{'sys'}. Could not determine status of SAP HANA DB on $x->{'sys'}. | Cause: Failed to start SAP Start Service for the given SAP HANA database. Action: Inspect the SAP Start Service trace files (e.g., sapstartsrv.log) for more details. |
136661 | ERROR | Failed stop of SAP HANA database $x->{'instance'} on server $x->{'sys'}. | Cause: Failed to stop the given SAP HANA database on the given server. Action: Inspect the SAP HANA trace files and LifeKeeper log file for more details. |
136673 | ERROR | Usage: $usage | Cause: Invalid arguments provided to the SAP HANA hana_takeover_with_handshake script. Action: Please provide appropriate arguments in the form: -t <tag> [-s <target server>] [-b] |
136674 | ERROR | Unable to remove the "${hana::HANA_FLAG_LEAVE_DB_RUNNING}_$tag" flag on server $isp_node. This may cause subsequent remove actions for resource $tag on server $isp_node to unexpectedly fail to stop the database. | Cause: Failed to remove the LifeKeeper !volatile!hana_leave_db_running_<tag> flag on the given server. Action: Manually remove the flag with the command "/opt/LifeKeeper/bin/flg_remove -f '!volatile!hana_leave_db_running_<tag>'" on the given server. While the flag exists, out-of-service operations for the SAP HANA resource on the given server will leave the protected database instance running. |
136675 | ERROR | Script $cmd exited unexpectedly due to signal "$sig" on server $me. This may leave the $tag resource hierarchy as well as SAP HANA System Replication in an unexpected state. Please verify that the cluster resources are in the expected state. | Cause: The hana_takeover_with_handshake script exited unexpectedly due to the given signal on the given server. Action: Verify that the SAP HANA cluster resources are in the expected state. If not, fix any issues that are found and bring the SAP HANA resource hierarchy in-service on the intended primary server. |
136677 | ERROR | Unable to find equivalent SAP HANA resource corresponding to $tag on server $target_node. | Cause: Unable to find an equivalent SAP HANA resource corresponding to the given resource on the given server. Action: Verify that the given resource tag is correct, the resource has been extended to the given target server, and that LifeKeeper is running and fully initialized on the target server. |
136678 | ERROR | Unable to obtain information about equivalent SAP HANA resource $tag on server $target_node. Verify that LifeKeeper is running and fully initialized. | Cause: Unable to find an equivalent SAP HANA resource corresponding to the given resource on the given server. Action: Verify that the given resource tag is correct, the resource has been extended to the given target server, and that LifeKeeper is running and fully initialized on the target server. |
136679 | ERROR | Resource $tag is not a SAP HANA resource. | Cause: The given resource is not a SAP HANA (database/hana) resource. Action: Verify that the resource tag is correct. |
136680 | ERROR | Resource $tag is designed for use in an environment where SAP HANA System Replication is disabled. Takeover with handshake cannot be performed for this resource type. Please use the standard \"In Service…\" command instead. | Cause: Features related to "Takeover with Handshake" may only be used in environments where SAP HANA System Replication is enabled. Action: Use the standard "In Service…" option to bring the SAP HANA resource in-service. |
136681 | ERROR | SAP HANA resource $tag is not currently in-service on any server in the cluster. The resource must be in-service and SAP HANA System Replication must be in-sync before performing a takeover with handshake. | Cause: The given SAP HANA resource is not in-service on any server in the cluster while attempting "Takeover with Handshake". Action: Bring the SAP HANA resource in-service on the intended primary server. |
136682 | ERROR | SAP HANA resource $tag is currently in-service on multiple servers: $isp_node_list. The resource must be in-service on only one server and SAP HANA System Replication must be in-sync before performing a takeover with handshake. | Cause: The given SAP HANA resource is in-service on multiple servers in the cluster while attempting a "Takeover with Handshake". Action: Take the resource out of service on every server except the one where it is intended to be registered as primary master. |
136684 | ERROR | Unable to create internal SAP HANA object for resource $tag on server $target_node. Verify that all necessary file systems are mounted and that LifeKeeper is running and fully initialized on $target_node. | Cause: Unable to create an internal hana object representing the given instance on the given server. Action: Verify that the database instance is properly installed and that all necessary file systems are mounted. |
136685 | ERROR | Takeover with handshake is only supported in SAP HANA versions 2.0 SPS04 and greater. The SAP HANA software must be upgraded in order to use this feature. Please use the standard \"In Service…\" command instead. | Cause: The "Takeover with Handshake" feature cannot be used when the underlying SAP HANA database version is less than 2.0 SPS04. Action: Upgrade to SAP HANA 2.0 SPS04 or later in order to use the "Takeover with Handshake" feature. |
136686 | ERROR | Takeover with handshake cannot be performed for database $target_obj->{'instance'} on server $target_node because the database is not currently running and registered as primary on any other server in the cluster. | Cause: The given SAP HANA database instance is not running and registered as primary master on any server in the cluster during an attempted "Takeover with Handshake". Action: Bring the corresponding SAP HANA resource in-service on the intended primary server. |
136687 | ERROR | SAP HANA database $target_obj->{'instance'} is running and registered as primary on more than one server in the cluster. Please resolve this situation and reattempt the takeover. | Cause: The given SAP HANA database instance is running and registered as primary on multiple servers during an attempted "Takeover with Handshake". Action: If the database instance is already running and registered as primary on the intended primary server, bring the corresponding LifeKeeper resource in-service on that server. If not, stop the database instance on every server except the one where it is currently in-service in LifeKeeper, allow LifeKeeper to resume system replication, then reattempt the takeover. |
136689 | ERROR | Unable to create internal SAP HANA object for resource $tag on server $isp_node. Verify that all necessary file systems are mounted and that LifeKeeper is running and fully initialized on $isp_node. | Cause: Unable to create an internal hana object representing the given instance on the given server. Action: Verify that the database instance is properly installed and that all necessary file systems are mounted. |
136690 | ERROR | Unable to set the ${hana::HANA_FLAG_LEAVE_DB_RUNNING}_$tag flag on server $isp_node. Aborting takeover with handshake attempt for database $target_obj->{'instance'} on server $target_node. | Cause: Failed to set the !volatile!hana_leave_db_running_<tag> LifeKeeper flag on the given server during an attempted "Takeover with Handshake". Action: Inspect the LifeKeeper log file for more information. Correct any issues found and reattempt the takeover. |
136692 | ERROR | Takeover with handshake for database $target_obj->{'instance'} failed on server $target_node. | Cause: The "Takeover with Handshake" attempt failed for the given SAP HANA database instance on the given target server. Action: If HANA_HANDSHAKE_TAKEOVER_FAILBACK=true in /etc/default/LifeKeeper, the SAP HANA resource hierarchy will automatically be brought back in-service on the previous database host server. Otherwise, the SAP HANA resource hierarchy must be manually brought in-service on the intended primary server. |
136695 | ERROR | Resource $tag does not exist on server $me. | Cause: The given resource does not exist on the given server. Action: Verify that the resource tag is correct. |
136696 | ERROR | Unable to verify the status of resource $tag on server $target_node. Assuming that it is not in-service. | Cause: Failed to determine the status of the given resource on the given server while checking whether the "Takeover with Handshake" was successful. Action: Verify that LifeKeeper is running and fully initialized on the given server and that the communication path between the local and target servers is active. If HANA_HANDSHAKE_TAKEOVER_FAILBACK=true in /etc/default/LifeKeeper, the SAP HANA resource hierarchy will automatically be brought back in-service on the previous database host server. Otherwise, the SAP HANA resource hierarchy must be manually brought in-service on the intended primary server. |
136697 | ERROR | Resource $res was not successfully brought in-service on server $target_node. | Cause: The given resource failed to come in-service on the given server during an attempted "Takeover with Handshake". Action: If HANA_HANDSHAKE_TAKEOVER_FAILBACK=true in /etc/default/LifeKeeper, the SAP HANA resource hierarchy will automatically be brought back in-service on the previous database host server. Otherwise, the SAP HANA resource hierarchy must be manually brought in-service on the intended primary server. |
136698 | ERROR | LifeKeeper is not running or is not fully initialized on server $me. | Cause: LifeKeeper is either not running or not fully initialized on the given server during an attempted "Takeover with Handshake". Action: Either start LifeKeeper with /opt/LifeKeeper/bin/lkstart or allow it to fully initialize, then reattempt the takeover. |
136699 | ERROR | Unknown server $target_node. | Cause: The given server host name is not recognized. Action: Verify that the server host name is correct and that communication paths have been created between the local server and the target server. |
136700 | ERROR | Usage: $usage | Cause: Invalid arguments provided to the SAP HANA remoteregisterdb script. Action: Please provide appropriate arguments in the form: remoteregisterdb -d <tag> -n <id> |
136705 | ERROR | Unable to obtain information about resource $tag on server $me. Exiting $cmd for $tag. | Cause: Failed to determine information about the given SAP HANA resource on the given server. Action: Verify that the given resource tag is correct, the resource exists on the given server, all necessary file systems are mounted, and that LifeKeeper is running and fully initialized on the given server. |
136706 | ERROR | Resource $tag is no longer ISP on server $me. Exiting $cmd for $tag. | Cause: The SAP HANA remoteregisterdb script is exiting because the SAP HANA resource is no longer in service (ISP) on the given server. Action: No action is required. |
136707 | ERROR | The $cmd event is intended for use only in environments in which SAP HANA System Replication is enabled. Exiting $cmd for $tag. | Cause: The SAP HANA remoteregisterdb event detected that SAP HANA System Replication is disabled for the database protected by the given SAP HANA resource. Action: Verify that SAP HANA System Replication is enabled on the server where the remoteregisterdb script was running. If necessary, System Replication may be enabled by executing the command 'hdbnsutil -sr_enable --name=<Site Name>' as the SAP HANA administrative user. |
136708 | ERROR | Error getting resource information for $tag on server $me. Exiting $cmd for $tag. | Cause: Failed to obtain information about the given SAP HANA resource on the given server. Action: Verify that a SAP HANA resource with the given tag exists on the given server. |
136709 | EMERG | LifeKeeper was unable to determine the SAP HANA System Replication mode for database $instance corresponding to resource $tag on server $me. Resource monitoring for $tag will be suspended until the issue is resolved. | Cause: The SAP HANA System Replication mode could not be determined for the given database on the given server. Action: Inspect the SAP HANA trace files and LifeKeeper log file for more details. |
136710 | EMERG | The replication mode of the SAP HANA database $instance corresponding to resource $tag was modified outside of LifeKeeper and is no longer registered as primary master on server $me. Please bring the SAP HANA resource in-service on the server where the database should be registered as primary master. Resource monitoring for $tag will be suspended until the issue is resolved. | Cause: The SAP HANA System Replication mode for the given database was modified outside of LifeKeeper. Action: Bring the SAP HANA resource in-service on the server where it should be registered as primary master. |
136711 | EMERG | SAP HANA database $instance corresponding to resource $tag is currently suspended on server $me due to actions performed outside of LifeKeeper. Please take the SAP HANA resource out of service on server $me and bring it in-service on the server where the database should be registered as primary master. Bringing resource $tag back in-service on $me will resume the suspended database. Resource monitoring for $tag will be suspended until the issue is resolved. | Cause: The given SAP HANA database instance has been suspended on the given server due to actions performed outside of LifeKeeper. Action: If you would like to resume the suspended database on the server where the corresponding LifeKeeper resource is currently in-service (Active), bring the resource back in-service on that server as described in the message. Otherwise, bring the LifeKeeper SAP HANA resource in-service on the intended primary replication site. |
136732 | ERROR | There is no LifeKeeper protected resource with tag $tag on system $me. | Cause: The HANA canfailover script was called with a tag that is not a HANA resource. Action: No action is required. |
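Several entries in the catalog above (136674, 136681, 136690) reference the `!volatile!hana_leave_db_running_<tag>` flag and the `flg_remove` utility. A minimal sketch of deriving the flag name from a resource tag, using the flag format quoted in those messages (the tag `HANA-SPS04` is a hypothetical example; verify the flag name against your own log messages before removing anything):

```shell
# Build the LifeKeeper flag name for a given SAP HANA resource tag.
# Flag format as quoted in catalog entries 136674/136690 above.
hana_leave_db_flag() {
    tag="$1"
    printf '!volatile!hana_leave_db_running_%s\n' "$tag"
}

# On a cluster node the flag would then be removed with, e.g.:
#   /opt/LifeKeeper/bin/flg_remove -f "$(hana_leave_db_flag HANA-SPS04)"
hana_leave_db_flag HANA-SPS04
```

While such a flag exists, out-of-service operations for that SAP HANA resource leave the protected database running, so remove it only once the takeover attempt has been fully resolved.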
137000 | ERROR | PowerShell is not installed. | |
137001 | ERROR | PowerCLI is not installed. | |
137002 | ERROR | A valid network interface was not found. | |
137005 | ERROR | Failed to attach VMDK. | |
137010 | ERROR | Failed to detach VMDK. | |
137020 | ERROR | Failed to execute VMDK status checker daemon. | |
137030 | ERROR | Disk not specified. | |
137031 | ERROR | Cannot get disk uuid for $Disk. Please check your ESXi settings. | |
137032 | ERROR | PowerCLI failed. %s | |
137034 | ERROR | Cannot bring VMDK resource \"%s\" in service on server \"%s\". | |
137037 | ERROR | This system is not a VMware guest. | |
137040 | WARN | Unmanaged quickCheck daemon is running. | |
137041 | WARN | ps -wwf: $psout | |
137042 | WARN | kill -KILL $pid | |
137050 | ERROR | Failed to connect to ESXi server $addr. | |
137051 | ERROR | There is no ESXi server connected. | |
137055 | ERROR | Cannot determine ESXi VM ID because multiple network interfaces were found with the MAC address $MAC_ADDR. | |
137057 | ERROR | Usable SCSI controller not found. | |
137058 | ERROR | Cannot find VMDK with ID $UUID. | |
137059 | ERROR | This guest has snapshots present. | |
137060 | ERROR | The VMDK with ID $UUID cannot be attached to this guest. | |
137061 | ERROR | The virtual storage controller has an incompatible sharing mode configured. | |
137068 | ERROR | The VMDK detection failed. Retry count exceeded. | |
137070 | ERROR | Connect failed. | |
137071 | ERROR | Get-LocalVM failed. | |
137072 | ERROR | The VMDK has been detached remotely. This server has lost ownership. | |
137075 | ERROR | Cannot find virtual SCSI controller $CONTROLLER. | |
137076 | ERROR | The virtual storage controller has an incompatible sharing mode configured. | |
137077 | ERROR | Cannot find VM with MAC address $MAC_ADDR. | |
137078 | ERROR | Cannot find VM with MAC address $MAC_ADDR. | |
137101 | WARN | Partition information not defined for %s on %s. Retry. | |
137102 | ERROR | Partition information not defined for %s on %s. | |
137105 | ERROR | Device not specified. | |
137106 | ERROR | Cannot get device uuid for $Device. Please check your ESXi settings. | |
137107 | ERROR | %s is not shareable with any machine. | |
137111 | ERROR | Failed to create dependency \"%s\"-\"%s\" on machine \"%s\". | |
137112 | ERROR | Cannot bring VMDKP resource \"%s\" in service on server \"%s\". | |
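Errors 137000 and 137001 above report missing prerequisites for the VMDK kit. A hedged preflight sketch, assuming PowerShell Core is installed as `pwsh` and PowerCLI is packaged as the `VMware.PowerCLI` module (adjust the command and module names for your environment):

```shell
# Check for PowerShell itself (catalog entry 137000).
if command -v pwsh >/dev/null 2>&1; then
    ps_ok=yes
else
    ps_ok=no
fi

# Check for the VMware.PowerCLI module (catalog entry 137001);
# only meaningful when PowerShell is present.
if [ "$ps_ok" = yes ] && pwsh -NoProfile -Command 'Get-Module -ListAvailable VMware.PowerCLI' 2>/dev/null | grep -q 'VMware.PowerCLI'; then
    cli_ok=yes
else
    cli_ok=no
fi

echo "PowerShell installed: $ps_ok, PowerCLI installed: $cli_ok"
```

If either check reports `no`, install the missing prerequisite before configuring VMDK resources.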
138001 | ERROR | $line | Cause: A DRBD API command failed. The output from the API is logged in this message. Action: Following this message will be a specific message detailing any action needed. |
138002 | ERROR | Failed to execute DRBD API command "%s" with arguments "%s" for resource %s: %s | Cause: A DRBD API command that failed is listed along with the arguments used. Action: Following this message will be a specific message detailing any action needed. |
138003 | FATAL | $msg | Cause: This message logs the stack trace for the failed process. Action: None. |
138005 | ERROR | Unable to create DRBD object ($sys, $tag). | Cause: The DRBD object could not be created due to either $sys or $tag not being defined. Action: Verify that both the system name and the resource tag are provided, then retry the operation. |
138006 | WARN | The "%s_wait_for_peer" flag is set in "%s/subsys/scsi/resources/drbd/" on system "%s". The resource is being forced online. | Cause: All systems are not available to determine where up-to-date data exists. The "Force Mirror Online" is being used to bring the DRBD resource online while all systems are not available. Action: None |
138007 | ERROR | The "%s_wait_for_peer" flag is set in "%s/subsys/scsi/resources/drbd/" on system "%s". To avoid using outdated data, LifeKeeper will not restore the resource until all systems are accessible. | Cause: All systems are not available to determine where up-to-date data exists. Action: Start LifeKeeper on all systems to allow LifeKeeper and DRBD to use the latest data available. Use the "Force Mirror Online" option in the GUI or lkcli to bring the DRBD resource in-service immediately. WARNING: this may result in committed data loss if more up-to-date data is on a system that is down. |
138008 | ERROR | Failed to get resource information: %s. | Cause: LifeKeeper failed to retrieve the information from the resource database file. Action: Further information on the failure will be listed and additional error messages in /var/log/lifekeeper.log will provide more details on how to resolve. |
138009 | FATAL | UUID "%s" in "%s" does not match LifeKeeper ID "%s". | Cause: The UUID found in the resource database file does not match the ID in the LifeKeeper instance. Action: Bring the resource in-service on the other system in the cluster. If the in-service (restore) is successful, the IDs in the resource DB file and the LifeKeeper instance will match. Synchronize the resource files and the ID in the instance on the failed system to the system where the in-service was successful. |
138010 | ERROR | Resource %s role on %s is "Primary". Set %s to "Secondary" using "drbdadm secondary %s" on %s or bring the resource in-service on %s. | Cause: An out of service (standby) system has the resource "up" with the role of "Primary". LifeKeeper expected the specified resource to be "Secondary". This is similar to a split-brain situation where there may be divergent data. Action: Determine which system has the latest or best data. If the standby system that is "Primary" does not have the latest data then manually stop all activity on that resource and set the state to "Secondary". Bring the resource in-service on the system with the latest or best data. |
138011 | ERROR | Resource %s on %s is "Diskless". Will try to re-attach by taking resource "down" and back "up". | Cause: The DRBD resource is "up" with the disk NOT attached. This may be due to an IO error or a manual action. LifeKeeper will try to attach the disk. Action: Review /var/log/lifekeeper.log and /var/log/messages to determine the cause of the volume not being attached. If due to an IO error replace the faulty disk. |
138012 | ERROR | Re-attach failed to repair "Diskless" Resource %s on %s. | Cause: LifeKeeper was not able to get the disk to attach to the resource. Action: Review DRBD driver messages logged in /var/log/messages to determine the cause of the failure. If due to an IO error replace the faulty disk. |
138013 | ERROR | Skipping connection to %s, %s is not configured for this resource. | Cause: There is a system configured in the DRBD resource file that is not configured for this resource in LifeKeeper. Action: Remove the unexpected system/connection using 'lkcli drbd removesys --sys <sys> --tag <tag>'. |
138015 | WARN | Resource %s is not connected to %s. If %s has divergent data then it will not automatically connect and will require using the GUI "Resume Replication" option or "lkcli drbd resume --sys %s --tag %s". | Cause: The peer-role for the resource on the target is "Unknown". DRBD could not determine the state of the resource or consistency of the data. Action: When DRBD is able to connect to the target and determine the state of the resource it will automatically connect if there is NOT divergent data. If there is divergent data then use the command in the message to resume replication to the target. |
138016 | EMERG | Split-brain detected, %s is in-service and %s is in Primary role. Manual intervention is required in order to minimize the risk of data loss. To resolve this situation, take resource %s out-of-service on the server(s) with invalid data and resume replication using the GUI or lkcli. | Cause: The resource is in-service and "Primary" on multiple systems causing divergent data. Action: Take the resource out-of-service on the server(s) with invalid data and resume replication using the GUI or lkcli. |
138017 | WARN | Resource %s, target %s connection state is "%s". | Cause: The DRBD resource connection-state was expected to be "Connected". Action: None. LifeKeeper will automatically repair the connection. |
138018 | ERROR | Resource %s is not ready to use in %s seconds (see "DRBD_WAIT_FOR_USABLE_TIMEOUT" in /etc/default/LifeKeeper). | Cause: During restore of the resource the DRBD API 'wait_for_usable' is called to verify the DRBD driver has added the new resource and it is ready to use. More detailed errors from the DRBD API 'wait_for_usable' command should precede this message. Action: The default setting for DRBD_WAIT_FOR_USABLE_TIMEOUT is 30 seconds, which should be sufficient for the DRBD driver to configure the resource. Check /var/log/messages for further errors. Increase the default setting if necessary. |
138019 | WARN | DRBD detected the network connection between %s and %s is congested. The primary node %s will "pull ahead" of %s, temporarily going out of sync. When more bandwidth becomes available, replication will automatically resume and a background synchronization will take place. | Cause: The change rate is higher than the network bandwidth available. Action: DRBD will automatically recover when bandwidth is available. Decrease the change rate or increase the available bandwidth. |
138020 | WARN | Resource %s on target %s is "Diskless". | Cause: The DRBD resource is "up" on the target with the disk NOT attached. This may be due to an IO error or a manual action. LifeKeeper will try to attach the disk. Action: Review /var/log/lifekeeper.log and /var/log/messages to determine the cause of the volume not being attached. If due to an IO error replace the faulty disk. |
138021 | FATAL | "drbdadm down %s" on %s failed, this prevents forcing %s online: %s. | Cause: A "Force Mirror Online" requires the resource on all other systems to be stopped (taken down) to force the correct synchronization. Action: Review /var/log/lifekeeper.log and /var/log/messages on the specified system to determine why the resource cannot be stopped (taken down). Take down the resource on the specified system and retry the force online. |
138022 | ERROR | The underlying disk is mounted on %s and must be unmounted for DRBD to use the disk. Changes made directly to the underlying disk are not tracked by DRBD. A full resync will automatically occur when the resource is brought "up" and "connects", overwriting any changes made on %s. After unmounting the disk the "Force Resource Online (Primary)" GUI option or "lkcli drbd force --sys %s --tag %s" can force the resource online on %s where a full resync will occur to the other servers. | Cause: The underlying disk is directly mounted on the specified system. Action: Unmount the file system. A full resync will occur, overwriting the data on the server that had the underlying disk mounted. To preserve that server's data instead, unmount the file system and then use "force online" on the server that had the underlying disk mounted. |
138023 | WARN | Resynchronization is in-progress for Resource %s (%s out-of-sync blocks). | Cause: Out of sync blocks are being synchronized. Action: None. |
138024 | ERROR | Failed to create UUID. | Cause: During create of a resource the DRBD API did not generate a UUID. Action: Check /var/log/messages for further messages from the DRBD driver. |
138025 | ERROR | Failed to write resource file %s. | Cause: During create of a resource the resource file could not be written. More detailed errors from the DRBD API that writes the file will precede this message. The file system that /etc/drbd.d is on may be read only or out of space. Action: Check /var/log/messages for further errors. |
138026 | ERROR | Failed to store resource DB file %s. | Cause: The DRBD API 'store' failed. More detailed errors from the DRBD API that writes the file will precede this message. The file system that /etc/drbd.d is on may be read only or out of space. Action: Check /var/log/messages for further errors. |
138027 | ERROR | Failed to create resource %s. | Cause: The 'create_md' DRBD API command failed. More detailed errors from the DRBD API 'create_md' command will precede this message. Action: Check /var/log/messages for further errors. |
138028 | WARN | Resynchronization is paused for Resource %s. | Cause: This may be due to a dependency on the completion of another synchronization process, or due to synchronization having been manually interrupted by 'drbdadm pause-sync <res>'. Action: If manually paused then resume synchronization using 'drbdadm resume-sync <res>'. |
138029 | ERROR | Resource %s is not ready to use in %s seconds (see "DRBD_WAIT_FOR_USABLE_TIMEOUT" in /etc/default/LifeKeeper). | Cause: During create of the resource the DRBD API 'wait_for_usable' is called to verify the DRBD driver has added the new resource and it is ready to use. More detailed errors from the DRBD API 'wait_for_usable' command should precede this message. Action: The default setting for DRBD_WAIT_FOR_USABLE_TIMEOUT is 30 seconds, which should be sufficient for the DRBD driver to configure the resource. Check /var/log/messages for further errors. Increase the default setting if necessary. |
138030 | WARN | Resynchronization is starting for Resource %s (transient state %s). | Cause: DRBD is in the transient state starting to resynchronize a resource. Action: None. |
138031 | ERROR | Failed to "up" resource %s on %s. | Cause: The DRBD in-service was not able to "up" the resource. This may be due to a failure on the underlying disk or the underlying disk is open by a process. Action: Further information on the failure will be listed in /var/log/messages. Resolve the errors and if the data has been modified on the underlying disk outside of DRBD then force a full resync using 'drbdadm invalidate <res>'. |
138034 | ERROR | Failed to copy $self->{'resource_file'} to $sys: $out | Cause: LifeKeeper synchronizes the DRBD resource files in /etc/drbd.d on all systems. The file system that /etc/drbd.d is on may be read only or out of space. Action: Check the details provided in the message for the specific error. After correcting the error, retry the operation. |
138035 | ERROR | Unable to open file "%s". | Cause: The specified file could not be opened to determine the list of mounts. Action: Further information on the failure will be listed in /var/log/messages. Resolve the errors and retry the command. |
138038 | WARN | Remove %s on %s: resource on %s is in state %s. | Cause: The DRBD resource is being stopped when not "UpToDate" on a target system. The DRBD resource will not be allowed in-service on the target unless the "UpToDate" server is available. Action: Bring the resource in-service when the "UpToDate" data is available. |
138039 | WARN | Remove $self->{'tag'} on $self->{'sys'}: resource is in state $res_status->{devices}[0]{'disk-state'}. | Cause: The DRBD resource is being stopped with the in-service resource not "UpToDate". The DRBD resource will not be allowed in-service on this server unless the "UpToDate" server is available. Action: Bring the resource in-service when the "UpToDate" data is available. |
138041 | ERROR | Failed to set resource %s to secondary on %s. | Cause: LifeKeeper is stopping the resource by first setting the resource to "Secondary". If the resource is mounted, open or busy it will not be able to set the state to "Secondary". LifeKeeper will log a message if the resource is mounted. Action: Stop all activity to the resource. If the resource is mounted then unmount it. |
138043 | ERROR | "drbdadm down %s" failed on %s: %s. | Cause: LifeKeeper was not able to stop, or take down, the DRBD resource on the specified system. The log message will contain more details on the failure. Action: Check /var/log/messages on the specified system for more details. Manually stop the resource on the specified system using 'drbdsetup down <resource>'. |
138045 | ERROR | Failed to "down" resource %s on %s. | Cause: LifeKeeper was not able to stop, or take down, the DRBD resource on the specified system. The log message will contain more details on the failure. Action: Check /var/log/messages on the specified system for more details. Manually stop the resource on the specified system using 'drbdsetup down <resource>'. |
138051 | ERROR | Unable to open file "%s". | Cause: The specified file could not be opened to determine the list of mounts. Action: Further information on the failure will be listed in /var/log/messages. Resolve the errors and retry the command. |
138052 | ERROR | Failed to "invalidate" resource %s on %s to force a full resync. Replicated data may not be consistent or may be out-of-date. You should force a full resync using "drbdadm invalidate %s" on the system with inconsistent or out-of-date data. | Cause: The DRBD invalidate command failed. The specified server has the underlying device mounted. To make sure the data is valid a full resync is required. Action: Further information on the failure will be listed in /var/log/messages. Resolve the errors and force a full resync with the command listed in the message. |
138053 | ERROR | Failed to remove %s from the resource %s on %s. Run "lkcli drbd removesys --sys %s --tag %s" on %s to remove %s from the configuration. | Cause: During a "Delete Resource Hierarchy" the DRBD resource file was not updated properly to remove the system. Action: Check the errors preceding this message and /var/log/messages. Run the command listed in the message to retry the operation. |
138054 | ERROR | Unable to locate a server with resource %s to update configuration. Run "lkcli drbd removesys --sys %s --tag %s" on another server with %s configured to remove %s from the configuration. | Cause: During a "Delete Resource Hierarchy" the DRBD resource file was not updated properly to remove the system. Action: Check the errors preceding this message and /var/log/messages. Run the command listed in the message to retry the operation. |
138055 | ERROR | Tag %s is not a valid DRBD resource. | Cause: The tag passed to remove a system from the DRBD configuration was not for a DRBD resource. Action: Retry the command with the correct tag. |
138056 | ERROR | Tag %s is extended to %s, resource configuration can not be removed. | Cause: A system can only be removed from a configuration if the resource hierarchy is no longer extended to the system being removed. Action: Use "Unextend Resource Hierarchy" in the GUI or lkcli to remove the system. |
138057 | ERROR | Failed \"drbdadm invalidate $resource --force\" on $sys: $out | Cause: The DRBD invalidate command failed. The specified server has the underlying device mounted. To make sure the data is valid a full resync is required. Action: Following this message will be a specific message detailing any action needed. |
138059 | ERROR | Failed to write resource file %s. | Cause: While removing a system from the DRBD resource configuration, the resource file could not be saved. Action: Check the errors preceding this message and /var/log/messages. Retry remove after correcting the errors. |
138060 | ERROR | Failed to store resource DB file %s. | Cause: While removing a system from the DRBD resource configuration, the resource DB file could not be saved. Action: Check the errors preceding this message and /var/log/messages. Retry remove after correcting the errors. |
138065 | ERROR | Failed to save original %s for %s: %s | Cause: While trying to recover from a failure, the original resource file listed could not be saved. Action: Check the error listed in the message and check /var/log/messages. Retry the failed operation. |
138066 | ERROR | Failed to copy $self->{'resource_db_file'} to $sys: $out | Cause: LifeKeeper synchronizes the DRBD resource DB files in /etc/drbd.d on all systems. Action: Check the details provided in the message for the specific error. After correcting the error, retry the operation. |
138067 | ERROR | Failed to update resource files for resource %s on %s. | Cause: Resource files were copied to the specified system in a temporary location and were to be updated to the final location. The update failed. Action: Check the errors preceding this message and /var/log/messages. Retry the operation that failed after correcting the errors. |
138068 | ERROR | System %s is not configured in resource file %s. | Cause: The specified system was being removed from the DRBD configuration but could not be found in the DRBD configuration for this resource. Action: Check that the system specified is the correct name listed in sys_list. |
138069 | ERROR | Failed to copy $file to $sys: $out | Cause: The LifeKeeper ‘lcdrcp’ utility failed when copying the file to the specified system. The error information is included in the message. Action: Check the error in the log message and /var/log/messages on the specified system. Correct any errors and retry the operation. |
138070 | ERROR | Failed to open and lock %s on %s: %s. | Cause: The resource database file is locked to prevent the GUI and lkcli from performing simultaneous operations during an unextend. Action: Check the error listed in the message and check /var/log/messages. Retry the failed operation. |
138071 | ERROR | Failed to open and lock %s on %s: %s. | Cause: The resource database file is locked to prevent the GUI and lkcli from performing simultaneous operations during an unextend. Action: Check the error listed in the message and check /var/log/messages. Retry the failed operation. |
138072 | ERROR | Failed to open and lock %s on %s: %s. | Cause: The resource database file is locked to prevent the GUI and lkcli from performing simultaneous operations during an extend. Action: Check the error listed in the message and check /var/log/messages. Retry the failed operation. |
138074 | ERROR | Tag %s is not a valid DRBD resource. | Cause: The tag passed in to update resource files is not for a DRBD resource. Action: Retry the command with the correct tag. |
138075 | ERROR | Failed to save original %s for %s: %s | Cause: While trying to recover from a failure the original resource database file listed was not able to be saved. The file system that /etc/drbd.d is on may be read only or out of space. Action: Check the error listed in the message and check /var/log/messages. Retry failed operation. |
138078 | ERROR | Failed to update %s for %s: %s | Cause: The resource file could not be renamed. The system error is included in the message. The file system that /etc/drbd.d is on may be read only or out of space. Action: Check the error listed in the message and check /var/log/messages. |
138079 | ERROR | Failed to update %s for %s: %s | Cause: The resource database file could not be renamed. The system error string is included in the message. The file system that /etc/drbd.d is on may be read only or out of space. Action: Check the error listed in the message and check /var/log/messages. |
138083 | ERROR | ERROR: There is no equivalence to system %s that is configured in %s. Delete system %s using "lkcli drbd removesys --sys %s --tag %s" on %s | Cause: The DRBD resource configuration contains a system that the LifeKeeper instance is not extended to. Action: Remove the system configured in the DRBD resource configuration using the lkcli command supplied in the error message. |
138084 | ERROR | Resource %s is configured on 2 servers, the DRBD Recovery Kit is restricted to 2 servers. | Cause: The DRBD resource is restricted to 2 servers only. Action: None. |
138085 | ERROR | Tag %s is not a valid DRBD resource. | Cause: The tag passed in to update resource files is not for a DRBD resource. Action: Retry the command with the correct tag. |
138086 | EMERG | Split-brain detected, %s servers (%s) are in-service. Manual intervention is required in order to minimize the risk of data loss. To resolve this situation, take resource %s out-of-service on the server(s) with invalid data and resume replication using the GUI or lkcli. | Cause: The resource is in-service on multiple systems causing divergent data most likely caused by a communication error. Action: Take the resource out-of-service on the server(s) with invalid data and resume replication using the GUI or lkcli. |
138087 | ERROR | Tag %s is not a valid DRBD resource. | Cause: The tag and ID passed to quickCheck are not for a DRBD resource. Action: Retry quickCheck with the correct tag and ID. |
138090 | ERROR | Resource %s is not "Primary" on %s. | Cause: The in-service DRBD resource is not in the "Primary" state. A local recovery will automatically try to make the resource "Primary". Action: None. |
138093 | ERROR | Resource %s on %s is "Diskless", check for IO errors on %s. Data will be accessed from a target. | Cause: An in-service DRBD resource is "Primary" with the volume not connected to storage, "Diskless". If "DRBD_ALLOW_DISKLESS" is set to 1 in /etc/default/LifeKeeper then the resource will remain in-service, accessing the data from the target. Action: Check /var/log/lifekeeper.log and /var/log/messages for errors on the resource and its volume. |
138094 | ERROR | Diskless mode not allowed (see DRBD_ALLOW_DISKLESS in /etc/default/LifeKeeper), switchover being initiated. | Cause: An in-service DRBD resource is "Primary" with the volume not connected to storage, "Diskless". If "DRBD_ALLOW_DISKLESS" is set to 0 in /etc/default/LifeKeeper a switchover to the standby system will occur. Action: Check /var/log/lifekeeper.log and /var/log/messages for errors on the resource and its volume. |
138095 | ERROR | Tag %s is not a valid DRBD resource. | Cause: The tag passed in to recover issues quickCheck found with a resource is not for a DRBD resource. Action: Check /var/log/lifekeeper.log and /var/log/messages for errors. |
138097 | FATAL | Pause requires tag ($tag) and server ($sys). | Cause: The tag and server are required to pause replication. Action: Specify the tag and server that should be paused. |
138098 | FATAL | Pause supports only one target. | Cause: Multiple servers are being paused. Action: Pause one server at a time. |
138099 | ERROR | Tag %s is not a valid DRBD resource on %s. | Cause: The tag passed in to pause replication is not for a DRBD resource. Action: Retry the command with the correct tag. |
138100 | ERROR | Failed to pause %s on %s. | Cause: The DRBD resource was not successfully paused on the specified system. Action: Check /var/log/lifekeeper.log and /var/log/messages on the specified system to determine the failure. |
138103 | ERROR | %s must be in-service to pause replication. | Cause: The DRBD resource is not in-service. Only in-service resources can be paused. Action: Bring the mirror in-service. |
138104 | ERROR | Failed to find the in-service server. | Cause: There are no servers with the resource in the "Primary" state. Action: Check /var/log/lifekeeper.log to find why the in-service resource is not in the "Primary" state. |
138105 | ERROR | Failed to pause resource %s. | Cause: The DRBD API "pause" operation failed. Action: The DRBD API to pause the mirror failed. Check the preceding error messages and /var/log/messages for the root cause. |
138106 | ERROR | Failed to create "paused" flag file on target "%s". Pause aborted. | Cause: LifeKeeper requires a "paused" flag to track when a resource has been paused. The file system that /opt/LifeKeeper is on may be read only or out of space. Action: Check /var/log/lifekeeper.log and /var/log/messages on the target to determine why the paused flag was not created. |
138107 | ERROR | Failed to initial_sync resource %s. | Cause: The DRBD API ‘initial_sync’ operation failed. To pause a DRBD resource the DRBD API ‘initial_sync’ is used to force the resource to ‘Primary’. Action: Check the preceding error messages and /var/log/messages for the root cause. |
138108 | ERROR | Resource %s is not ready to use in %s seconds (see "DRBD_WAIT_FOR_USABLE_TIMEOUT" in /etc/default/LifeKeeper). | Cause: During a pause operation, the DRBD API ‘wait_for_usable’ is called to verify the DRBD driver has added the resource and it is ready to use. Action: More detailed errors from the DRBD API ‘wait_for_usable’ command should precede this message. The default setting for DRBD_WAIT_FOR_USABLE_TIMEOUT is 30 seconds, which should be sufficient for the DRBD driver to configure the resource. Check /var/log/messages for further errors. Increase the default setting if necessary. |
138109 | ERROR | Resource %s is not "Primary" on %s. | Cause: During a local recovery the DRBD resource is not in the ‘Primary’ state as it should be. Action: None, local recovery will set the resource to ‘Primary’. |
138110 | ERROR | Failed to set resource %s to "Primary" on %s. | Cause: The DRBD API ‘primary’ failed during local recovery. Action: None. Local recovery will fail and initiate a switchover. |
138111 | ERROR | "drbdadm down %s" on %s failed: %s. | Cause: Local recovery was not able to ‘down’ the DRBD resource on $sys. Action: None. Local recovery will continue to repair the DRBD resource. |
138112 | FATAL | Resume requires tag ($tag) and server ($sys). | Cause: Resume replication to a disconnected target requires the tag of the resource and the server to connect. One or both of these were not set. Action: Retry the operation with a specific tag and server to resume. |
138113 | FATAL | Resume supports only one target. | Cause: More than one target was selected to resume. Action: Resume one target at a time. |
138114 | ERROR | Tag %s is not a valid DRBD resource on %s. | Cause: The tag passed in to resume replication is not for a DRBD resource. Action: Retry the command with the correct tag. |
138115 | ERROR | Failed to resume %s to %s. | Cause: The resume operation failed on the target system listed. Action: Check /var/log/lifekeeper.log and /var/log/messages on the target system listed to determine why the resume failed. |
138117 | ERROR | "drbdadm adjust %s" on %s failed. This may cause connection issues for resource %s: %s. | Cause: The DRBD API ‘adjust’ will synchronize the configuration between all systems in the configuration. Action: Check /var/log/lifekeeper.log and /var/log/messages on the target system listed to determine why the adjust failed. |
138118 | ERROR | "drbdadm up %s" on %s failed: %s. | Cause: Local recovery was not able to ‘up’ the DRBD resource on the specified system. Action: None. Local Recovery will continue to repair the DRBD resource. |
138119 | ERROR | Resource %s is not ready to use in %s seconds (see "DRBD_WAIT_FOR_USABLE_TIMEOUT" in /etc/default/LifeKeeper). | Cause: During resume of the resource the DRBD API ‘wait_for_usable’ is called to verify the DRBD driver has added the new resource and it is ready to use. More detailed errors from the DRBD API ‘wait_for_usable’ command should precede this message. Action: The default setting for DRBD_WAIT_FOR_USABLE_TIMEOUT is 30 seconds, which should be sufficient for the DRBD driver to configure the resource. Check /var/log/messages for further errors. Increase the default setting if necessary. |
138120 | WARN | If %s has divergent data then it will not automatically connect and will require using the GUI "Resume Replication" option or "lkcli drbd resume --sys %s --tag %s" | Cause: The peer-role for the resource on the target is ‘Unknown’. DRBD could not determine the state of the resource or consistency of the data. Action: When DRBD is able to connect to the target and determine the state of the resource it will automatically connect if there is NOT divergent data. If there is divergent data then use the command in the message to resume replication to the target. |
138121 | ERROR | Failed to set resource %s to secondary on %s. | Cause: Resume on the specified system failed to set the resource to ‘secondary’. This is most likely due to the resource on the specified system being open or busy. Action: More detailed errors from the DRBD API call will precede this message. Check /var/log/messages on the specified system for further errors. |
138123 | ERROR | Failed to repair "Diskless" Resource %s on %s. | Cause: Resource is ‘up’ with the disk NOT attached. This may be due to an IO error or a manual action. Action: Check /var/log/messages on the specified system for further errors. |
138124 | ERROR | Failed to find the in-service server. | Cause: Resume replication requires a system in the cluster to be in-service and in the ‘Primary’ state. Action: Check /var/log/lifekeeper.log and /var/log/messages on the in-service system for errors. |
138125 | ERROR | "drbdadm adjust %s" on %s failed. This may cause connection issues for resource %s: %s. | Cause: The DRBD API ‘adjust’ will synchronize the configuration between all systems in the configuration. Action: Check /var/log/lifekeeper.log and /var/log/messages on the specified system to determine why the ‘adjust’ failed. |
138126 | ERROR | Resource $self->{‘tag’} must be paused on $self->{‘sys’} in order to enable access on the target. | Cause: The specified system is in-service or the ‘paused’ flag does not exist on the specified system. Action: Check /var/log/lifekeeper.log and /var/log/messages on the specified system to determine why the ‘paused’ flag failed. |
138127 | ERROR | Parent file system resource not found. | Cause: A file system resource could not be found in the hierarchy. Action: Verify there is a file system resource that is a parent of the DRBD resource. If there is a parent file system resource then check /var/log/lifekeeper.log for errors. |
138128 | ERROR | Parent file system resource is mounted on $dev that is not the DRBD device $self->{‘drbd’}. | Cause: The file system resource is not mounted on the correct DRBD device node. Action: Unmount the device listed in the error message, ‘umount /dev/drbd<num>’. Retry pause. |
138129 | ERROR | Failed to set up temporary read/write access to data for $self->{‘tag’}. Error: $ret | Cause: The parent file system was not successfully mounted. Action: Check /var/log/lifekeeper.log and /var/log/messages on the local system to determine why the mount failed. |
138131 | ERROR | Parent file system resource not found. | Cause: A file system resource could not be found in the hierarchy. Action: Verify there is a file system resource that is a parent of the DRBD resource. If there is a parent file system resource then check /var/log/lifekeeper.log for errors. |
138132 | ERROR | Failed to undo temporary access for $self->{‘tag’} on $self->{‘sys’}. Error: $ret. Please verify that $fsid is not mounted on server $self->{‘sys’}. | Cause: The resume of a paused DRBD resource failed to unmount the file system on the specified system. A process that is not killable may have a file open on the file system. Action: Unmount the file system with ‘umount <mount_point>’. If this fails check /var/log/messages for errors. |
138133 | ERROR | %s must be in-service to resume replication. | Cause: A DRBD resource must be in-service on a system in the cluster before replication can be resumed. Action: Bring the resource in-service on a system and retry your resume. |
138134 | ERROR | Split-brain detected, %s servers are in-service. Take the resource out-of-service on the server with invalid data. | Cause: During a ‘resume’ operation a split-brain was detected where more than one system has a DRBD resource in-service. Action: Determine which system has the latest or best data. Take the DRBD hierarchy out of service on any in-service system that does NOT have the latest data. Make sure the DRBD resource with the latest or best data is in-service, then resume replication to all systems. |
138135 | ERROR | Split-brain detected, %s servers are in-service. Take the resource out-of-service on the server with invalid data. | Cause: During a ‘pause’ operation a split-brain was detected where more than one system has a DRBD resource in-service. Action: Determine which system has the latest or best data. Take the DRBD hierarchy out of service on any in-service system that does NOT have the latest data. Make sure the DRBD resource with the latest or best data is in-service, then resume replication to all systems. |
138136 | ERROR | Pause does not support pausing the in-service server. | Cause: The system that is in-service is being paused. Action: The ‘pause’ operation should be run on the DRBD resource that is not in-service. |
138137 | ERROR | Resume does not support resuming the in-service server. | Cause: The ‘resume’ operation is being run on the in-service system. Action: The ‘resume’ operation should be run on the DRBD resource that is ‘paused’, i.e., not in-service. |
138138 | FATAL | Tag %s is paused on server %s, a force online is required to in-service a paused target. Use "Force Resource Online (Primary)" option in the GUI or "lkcli drbd force --sys %s --tag %s". | Cause: A ‘restore’ operation was done on a DRBD resource that is in the ‘paused’ state. Action: A ‘force online’ is required to in-service a paused resource. Use the command in the message to force the resource in-service. |
138139 | ERROR | Tag %s is not a valid DRBD resource. | Cause: The tag passed to ‘lkcli’ was not for a DRBD resource. Action: Retry the command with the correct tag. |
138140 | ERROR | LifeKeeper is not ALIVE on server %s, all servers must be ALIVE and at least 1 communication path UP to update options. | Cause: All systems must have LifeKeeper running and communication paths up before using ‘lkcli’. Action: Make sure LifeKeeper is running on all systems and communication paths are up. |
138141 | ERROR | Resource is in split-brain, resolve split-brain before modifying options. | Cause: During a ‘lkcli’ command a split-brain was detected where more than one system has a DRBD resource in-service. Action: Determine which system has the latest or best data. Take the DRBD hierarchy out of service on any in-service system that does NOT have the latest data. Make sure the DRBD resource with the latest or best data is in-service, then resume replication to all systems. After resolving the split-brain the ‘lkcli’ command can be retried. |
138142 | ERROR | Failed to update option %s for %s on %s. | Cause: The ‘lkcli’ command failed on the specified system. Action: Check /var/log/lifekeeper.log and /var/log/messages on the specified system to determine why the ‘lkcli’ command failed. |
138146 | ERROR | Failed to validate entry "%s": %s | Cause: The DRBD API ‘validate_drbd_option’ reported that the entry passed in was not valid. Action: Valid entries and values are listed in ‘lkcli drbd options -h’. |
138147 | ERROR | Failed to get XML config information for resource %s. | Cause: The ‘drbdadm dump-xml <res>’ command failed. Action: Check /var/log/lifekeeper.log and /var/log/messages to determine why the command failed. |
138148 | ERROR | Both host1 (%s) and host2 (%s) must be defined. | Cause: When making changes to the connections in a DRBD resource with the ‘lkcli’ both hosts for the connection must be defined. Action: See the ‘lkcli drbd options’ section of the DRBD User’s Guide for more information on required parameters. |
138149 | ERROR | Failed %s %s: %s | Cause: The DRBD API command listed failed causing the ‘lkcli’ command to fail. Action: More detailed errors from the DRBD API call will precede this message. Check /var/log/messages for further errors. |
138150 | ERROR | Failed to find connection for host1 (%s) and host2 (%s). | Cause: The DRBD API ‘get_connection’ was not able to find a connection for the hosts provided in the DRBD configuration file. Action: Run ‘drbdadm status’ to see the list of connections for each resource. Each host name or system listed uses the same format as the systems configured in LifeKeeper. Use sys_list(1M) to see the list of systems configured in LifeKeeper. |
138151 | ERROR | Failed %s %s: %s | Cause: The DRBD API for the specified command failed causing the ‘lkcli’ command to fail. Action: More detailed errors from the DRBD API will precede this message. Check /var/log/messages for further errors. |
138152 | ERROR | Failed %s %s: %s | Cause: The DRBD API command listed failed. Action: More detailed errors from the DRBD API will precede this message. Check /var/log/messages for further errors. |
138153 | ERROR | Failed to write resource file %s. | Cause: The resource file could not be written. The file system that /etc/drbd.d is on may be read only or out of space. Action: More detailed errors from the DRBD API ‘write_resource_file’ will precede this message. Check /var/log/messages for further errors. |
138154 | ERROR | Failed to write resource DB file %s. | Cause: The DRBD API ‘store’ failed. The file system that /etc/drbd.d is on may be read only or out of space. Action: More detailed errors from the DRBD API ‘store’ will precede this message. Check /var/log/messages for errors on the target system. |
138155 | ERROR | Failed to copy resource files to %s. | Cause: Resource files were not copied to the specified system. Action: More detailed errors from the copy failure will precede this message. |
138157 | ERROR | Failed to "adjust" resource %s after copying original files back. | Cause: The DRBD API ‘adjust’ will synchronize the configuration between all systems in the configuration. Action: Check /var/log/lifekeeper.log and /var/log/messages on the local system to determine why the adjust failed. |
138158 | ERROR | Failed to copy original resource files to %s. | Cause: A failure occurred while copying the configuration files, and restoring the original files also failed. Action: DRBD requires that the resource files be identical on each server. The local server most likely has the most up-to-date or best files. Copy those files to all servers, then run ‘drbdadm adjust <res>’ on each server to synchronize the configuration. |
138159 | ERROR | Failed to validate entry "%s", value %s: %s | Cause: The DRBD API ‘validate_drbd_option’ returned that the entry or value was not valid. Action: Valid entries and values are listed in ‘lkcli drbd options -h’. |
138160 | ERROR | Failed to find volume. | Cause: During a ‘lkcli drbd options’ command where the ‘entry’ changed the volume of a resource, the volume was not found in the configuration. Action: The configuration file may be corrupt. Compare it with the copies on the other systems and restore a valid copy. |
138161 | ERROR | "-v" requires an entry in the "disk" section, however entry "%s" is for section "%s". | Cause: The ‘lkcli drbd options -v’ is for an ‘entry’ that is not intended for the ‘disk’ section of a resource configuration. Action: Valid entries and values are listed in ‘lkcli drbd options -h’. |
138162 | ERROR | "-c" requires an entry in the "net" section, however entry "%s" is for section "%s". | Cause: The ‘lkcli drbd options -c’ is for an ‘entry’ that is not intended for the ‘net’ section of a resource configuration. Action: Valid entries and values are listed in ‘lkcli drbd options -h’. |
138163 | ERROR | Failed to recover original %s for %s: %s | Cause: After an error the rename of the original resource file failed. Action: Check the error listed in the message and check /var/log/messages. |
138164 | ERROR | Failed to recover original %s for %s: %s | Cause: After an error the rename of the original resource database file failed. Action: Check the error listed in the message and check /var/log/messages. |
138165 | ERROR | Failed to recover original %s for %s: %s | Cause: After an error the rename of the original resource file failed. Action: Check the error listed in the message and check /var/log/messages. |
138166 | ERROR | Failed to recover original %s for %s: %s | Cause: After an error the rename of the original resource database file failed. Action: Check the error listed in the message and check /var/log/messages. |
138167 | ERROR | Failed to recover original %s for %s: %s | Cause: After an error the rename of the original resource file failed. Action: Check the error listed in the message and check /var/log/messages. |
138168 | ERROR | Failed to recover original %s for %s: %s | Cause: After an error the rename of the original resource database file failed. Action: Check the error listed in the message and check /var/log/messages. |
138169 | ERROR | Failed to recover original %s for %s: %s | Cause: After an error the rename of the original resource file failed. Action: Check the error listed in the message and check /var/log/messages. |
138170 | ERROR | Failed to recover original %s for %s: %s | Cause: After an error the rename of the original resource database file failed. Action: Check the error listed in the message and check /var/log/messages. |
138171 | ERROR | Failed: drbdsetup xml-help %s: %s | Cause: The ‘drbdsetup xml-help <res>’ command failed. Action: The failure output is included in the message. |
138172 | ERROR | Failed: drbdsetup xml-help %s: %s | Cause: The ‘drbdsetup xml-help <group>’ command failed. Action: The failure output is included in the message. |
138202 | ERROR | cmd::Usage: $usage | Cause: The usage is ‘restore -t <tag> -i <id>’. The <tag> or <id> are not defined. Action: Retry restore with <tag> and <id>. |
138204 | ERROR | cmd:$tag:Error getting resource information for $tag on server $drbd::me. | Cause: The DRBD restore was not able to match the $tag with a DRBD resource. Action: Verify that $tag is a DRBD resource on server $sys and that the ID on the command line matches the resource ID. |
138212 | ERROR | cmd:$tag:Usage: $usage | Cause: The usage is ‘remove -t <tag> [-i <id>] [-d]’. The <tag> is not defined. Action: Retry remove with a valid DRBD tag. |
138214 | ERROR | cmd:$tag:Error getting resource information for $tag on server $drbd::me. | Cause: The DRBD remove was not able to match the $tag with a DRBD resource. Action: Verify that $tag is a DRBD resource on server $sys and if the ID was passed in that it matches the resource ID. |
138222 | ERROR | cmd:$tag:Usage: $usage | Cause: The usage is ‘delete -t <tag> -i <id> [-U]’. The <tag> or <id> are not defined. Action: Retry delete with <tag> and <id>. |
138224 | ERROR | cmd:$tag:Error getting resource information for $tag on server $drbd::me. | Cause: The DRBD delete was not able to match the $tag with a DRBD resource. Action: Verify that $tag is a DRBD resource on server $sys and that the ID on the command line matches the resource ID. |
138230 | ERROR | cmd::Usage: %s <event>, event is not defined. | Cause: DRBD event handler called drbd_event without an <event>. Action: Verify ‘handlers’ defined in the DRBD resource configuration are valid. |
138231 | ERROR | cmd::Environment variable DRBD_RESOURCE not found (this is normally passed in by drbdadm). | Cause: The DRBD event handler did not set the DRBD_RESOURCE environment variable. Action: Verify ‘handlers’ defined in the DRBD resource configuration are valid. |
138232 | ERROR | cmd::Failed to retrieve %s. | Cause: The drbd_event utility to handle DRBD events was not able to retrieve or open the resource database file that matches the resource. Action: Verify ‘handlers’ defined in the DRBD resource configuration are valid. |
138233 | ERROR | cmd::Failed to find "LifeKeeper UUID" in the resource DB file %s. | Cause: The resource database file does not contain the LifeKeeper UUID. Action: Verify the configuration file is not corrupt by comparing with the other systems it is extended with. |
138234 | ERROR | cmd:$tag:DRBD detected a split-brain on resource $resource, data resynchronization will not occur. MANUAL INTERVENTION IS REQUIRED. In order to initiate data resynchronization, you should take tag $tag out of service on $drbd::me or $peer. If replication does not automatically resume use the resume option in the GUI or lkcli to force replication to resume. | Cause: DRBD event handler detected multiple systems with a resource in the ‘Primary’ state. Action: Follow instructions in the message. |
138235 | ERROR | cmd:$tag:DRBD detected "out-of-sync" blocks on resource $resource. | Cause: DRBD event handler reported out-of-sync blocks. This may be caused by a resync, replication not being able to keep up with the change rate, a verify miscompare, etc. Action: Increase network bandwidth or decrease the change rate if the change rate is exceeding the network bandwidth. DRBD will automatically synchronize the out-of-sync blocks. |
138236 | WARN | cmd:$tag:DRBD is beginning to resync (source) resource $resource. | Cause: DRBD event handler reports that a resync has begun on the source of the resynchronization. Action: None. |
138237 | WARN | cmd:$tag:DRBD is beginning to resync (target) resource $resource. | Cause: DRBD event handler reports that a resync has begun on the target. Action: None. |
138240 | ERROR | cmd:$tag:DRBD detected an IO error on resource $resource on $drbd::me. | Cause: DRBD event handler reported an IO error on the volume of the DRBD resource. This will cause the resource to become ‘diskless’. Action: Check /var/log/messages for details on the IO error. LifeKeeper will automatically reattach the disk. If the IO error persists, replace the faulty drive. |
138241 | ERROR | cmd::Failed to find instance for ID %s. | Cause: The ‘LifeKeeper_UUID’ entry in the resource file does not match any LifeKeeper resource currently configured. Action: The event is for a device no longer configured in LifeKeeper. Restore the LifeKeeper configuration from a lkbackup or remove the DRBD resource and re-add it with LifeKeeper. |
138330 | ERROR | cmd::Usage: $usage | Cause: The usage is ‘quickCheck -t <tag> -i <id>’. The <tag> or <id> are not defined. Action: Retry quickCheck with <tag> and <id>. |
138400 | FATAL | cmd::Insufficient input parameters.\n%s | Cause: The usage is ‘create -t <tag> -n <mount> -f <fs_tag> -y <fs_type> -p <local_part> -s <switchback>’. All required parameters were not provided. Action: Retry create with all parameters defined properly. |
138401 | ERROR | cmd:$tag:END failed create of %s on system %s with signal $sig. | Cause: The LifeKeeper DRBD create failed. More detailed errors will precede this message. Action: Resolve errors and retry. |
138405 | FATAL | cmd:$tag:The requested filesystem type "%s" is not supported by LifeKeeper. LifeKeeper supported file system types are listed under "%s/lkadm/subsys/gen/filesys/supported_fs". | Cause: An unsupported file system type was entered. Action: Retry with a supported file system type. |
138406 | FATAL | cmd:$tag:Failed to find a unique disk name for $local_part. The DRBD kit requires devices with unique disk names in "/dev/disk/by-*". | Cause: The LifeKeeper DRBD recovery kit requires that the device used as a volume disk have a unique disk name listed in "/dev/disk/by-*". Action: Select a disk or partition that has a unique disk name. Partitions with a GUID will have a unique name; see gdisk(8) as one way to create a GUID name. |
138408 | FATAL | cmd:$self->{‘tag’}:Failed restore of tag %s on system %s | Cause: The LifeKeeper restore operation failed. Action: More detailed errors will precede this message. |
138410 | FATAL | cmd:$tag:Cannot make the %s filesystem on "%s" (%d) | Cause: The ‘mkfs’ command failed with the error code listed. Additional error information will be logged following this message. Action: Check /var/log/messages for more details on the error. |
138411 | FATAL | cmd:$tag:%s | Cause: The ‘mkfs’ command failed and this log message is the internal perl error. Action: None. |
138412 | FATAL | cmd:$tag:Cannot mount %s on "%s" (%d) | Cause: The ‘mount’ command failed with the error code listed. Additional error information will be logged following this message. Action: Check /var/log/messages for more details on the error. |
138413 | FATAL | cmd:$tag:%s | Cause: The ‘mount’ command failed and this log message is the internal perl error. Action: None. |
138414 | FATAL | cmd:$tag:Cannot create filesys hierarchy "%s". | Cause: The LifeKeeper /opt/LifeKeeper/bin/creFShier utility failed to create the file system resource. Action: More detailed errors will precede this message. |
138453 | ERROR | cmd:$target_tag:END failed extend of resource %s on system %s with signal $sig. | Cause: The LifeKeeper extend failed. Action: More detailed errors will precede this message. |
138454 | FATAL | cmd:$target_tag:Resource %s is configured on 2 servers, the DRBD Recovery Kit is restricted to 2 servers. | Cause: An attempt was made to extend a DRBD resource to a third server. Action: None. |
138456 | FATAL | cmd:$target_tag:Failed to find a unique disk name for $target_part. The DRBD kit requires devices with unique disk names in "/dev/disk/by-*". | Cause: The LifeKeeper DRBD recovery kit requires that the device used as a volume disk have a unique disk name listed in "/dev/disk/by-*". Action: Select a disk or partition that has a unique disk name. Partitions with a GUID will have a unique name; see gdisk(8) as one way to create a GUID name. |
138457 | FATAL | cmd:$target_tag:IPv6 detected, this is not supported at this time. | Cause: The selected network is IPV6. Action: Use IPV4 network. |
138458 | ERROR | cmd:$target_tag:Invalid protocol setting %s, valid settings are "Asynchronous/Synchronous/Memory Synchronous". | Cause: The LifeKeeper DRBD recovery kit only supports ‘asynchronous’, ‘synchronous’, and ‘memory synchronous’ protocols. Action: Enter the correct protocol. |
138460 | FATAL | cmd:$target_tag:Failed to copy $template_self->{‘resource_db_file’} from $template_sys. | Cause: The extend of a DRBD resource was not able to copy the resource database file from the template system to the target. The file system that /etc/drbd.d is on may be read only or out of space. Action: Check /var/log/lifekeeper.log and /var/log/messages on the template and target system to determine why the copy failed. Resolve the problem and retry extend. |
138464 | FATAL | cmd:$target_tag:Failed to write resource file %s. | Cause: The extend of a DRBD resource could not write the resource file to the target system. The file system that /etc/drbd.d is on may be read only or out of space. Action: Check /var/log/lifekeeper.log and /var/log/messages on the target system to determine why the write failed. Resolve the problem and retry extend. |
138465 | FATAL | cmd:$target_tag:Failed to store resource DB file %s. | Cause: The DRBD API ‘store’ failed. The file system that /etc/drbd.d is on may be read only or out of space. Action: More detailed errors from the DRBD API ‘store’ will precede this message. Check /var/log/messages for errors on the target system. Retry extend when errors are resolved. |
138466 | FATAL | cmd:$target_tag:Failed to create resource %s. | Cause: The DRBD API ‘create_md’ failed. Action: Check /var/log/lifekeeper.log and /var/log/messages on the target system for errors. |
138468 | FATAL | cmd:$target_tag:Failed to update the configuration for resource %s on %s. | Cause: The extend of a DRBD resource was not able to copy the resource files to the template system. The resource files must be synchronized on all systems. Action: Check /var/log/lifekeeper.log and /var/log/messages on the target and template systems for errors. |
138469 | ERROR | cmd:$target_tag:Failed to remove %s from the resource %s on %s. Run "lkcli drbd removesys --sys %s --tag %s" on %s to remove %s from the configuration. | Cause: The LifeKeeper instance create failed during the extend of a DRBD resource. During cleanup of the DRBD resource configuration the system specified was not removed from the resource database file. Action: Run the command listed in the message to remove the system from the DRBD configuration. This will be necessary before retrying the extend. |
138470 | FATAL | cmd:$target_tag:Error creating resource $target_tag on system $target_sys. | Cause: The extend of a DRBD resource was not able to create the LifeKeeper instance on the target server. Action: Check /var/log/lifekeeper.log for errors. |
138500 | ERROR | cmd:$template_tag:Cannot extend resource "%s" to server "%s". | Cause: The extend of a DRBD resource to the server listed failed. More detailed errors will precede this message. Action: Check /var/log/messages for errors on the target system. Retry extend when errors are resolved. |
138501 | ERROR | cmd:$template_tag:Invalid parameter. %s. | Cause: The usage is ‘canextend <template server> <template tag>’. The ‘template server’ or ‘template tag’ is not defined. Action: Retry the extend of the DRBD resource with the correct template server name and the template tag for the DRBD resource. |
138502 | ERROR | cmd:$template_tag:The target and the template server are the same. Please specify the correct value for the template server. %s | Cause: During the extend of a DRBD resource the canextend utility was run with the target system given as the template server. Action: Retry the extend of the DRBD resource specifying a template server that is different from the target. |
138503 | ERROR | cmd:$template_tag:Failed to load the drbd module. %s | Cause: During the extend of a DRBD resource ‘modprobe drbd’ failed. The error from modprobe is included in the message. Action: Verify ‘kmod-drbd’ package is installed for the kernel running on the target. |
138550 | WARN | $REM_MACH is not alive, create flag ${FLAGTAG}_wait_for_peer. | Cause: During startup of LifeKeeper the system listed did not establish communications. LifeKeeper will not automatically resume or restore DRBD resources until all servers are available to assure the most up-to-date data is available. Action: Make sure LifeKeeper is running on all systems and resolve any communication issues. |
138561 | ERROR | Failed to "down" resource %s: %s | Cause: The DRBD API ‘down’ failed. The output from the API is included in the message. Action: Unmount the file system if it is mounted on the DRBD resource. Stop or kill any process that has the DRBD resource open. Run ‘drbdadm down <res>’ when the processes are stopped and the file system unmounted. |
138570 | FATAL | cmd::The name of the machine was not specified. | Cause: The machine name was not provided to the comm_up script. Action: Check /var/log/lifekeeper.log for errors. |
138600 | ERROR | cmd::Usage: $usage | Cause: The usage is ‘recover -d <tag> -n <id>’. The <tag> or <id> is not defined. Action: Check /var/log/lifekeeper.log for further error messages. |
139000 | ERROR | Cannot find the \"oci\" command in directories of the PATH. Please confirm that it is installed and the PATH is set correctly. | Cause: Unable to get PATH for "oci" command. Action: Make sure that the "oci" command is installed and that the installation directory for the "oci" command is added to your PATH. |
139012 | ERROR | | Cause: Failed to execute the oci command. Action: Configure the oci command to end successfully based on the status code and message. |
139013 | ERROR | Failed to assign $ip, error:\"$result\" | Cause: Failed to allocate $ip. Action: Resolve the cause of the error based on the $result details. |
139022 | ERROR | | Cause: Failed to execute the oci command. Action: Configure the oci command to end successfully based on the status code and message. |
139023 | ERROR | Failed to unassign $ip, error:\"$result\" | Cause: Failed to unassign $ip. Action: Resolve the cause of the error based on the $result details. |
139032 | ERROR | | Cause: Failed to execute the oci command. Action: Configure the oci command to end successfully based on the status code and message. |
139033 | ERROR | Failed to \"$ocicmdstr\", unknown error:\"$ip_list\" | Cause: The oci command terminated abnormally due to a cause other than code 139032. Action: Take action based on the contents of "$ip_list". |
139034 | ERROR | There is no secondary IPs on $device. | Cause: No secondary IP address has been assigned to $device. Action: If local recovery is enabled, verify that local recovery ran and successfully recovered from the failure. |
139035 | ERROR | $ip is not assigned to $device. | Cause: $ip is not assigned to $device. Action: If local recovery is enabled, verify that local recovery ran and successfully recovered from the failure. |
139036 | ERROR | $@ | |
139037 | ERROR | $@ | |
139042 | ERROR | | Cause: Failed to execute the oci command. Action: Configure the oci command to end successfully based on the status code and message. |
139043 | ERROR | Failed to assign $ip, error:\"$result\" | Cause: Failed to allocate $ip. Action: Resolve the cause of the error based on the $result details. |
139060 | ERROR | $cmd is invalid. | Cause: $cmd is invalid. Action: Contact support. |
139061 | ERROR | OCIVIP does not support IPv6. | Cause: OCIVIP resources do not support use with IPv6. Action: Please use IPv4. |
139062 | ERROR | $cmd is invalid. | Cause: $cmd is invalid. Action: Contact support. |
139063 | ERROR | IPv$ipversion is unknown version. | Cause: The IP address version is incorrect. Action: Please use IPv4. |
139070 | ERROR | Failed to access the $IMDS_URL with the curl command, status code is $status_code. | Cause: Command curl to $IMDS_URL failed. Action: Make sure that curl to $IMDS_URL completes successfully. |
139071 | ERROR | Failed to decode JSON, error:\"$e\". | Cause: Failed to decode JSON. Action: Make sure that the result obtained from Instance Meta Data Service is in JSON format. |
139073 | ERROR | Cannot find the vnicId of $device. | Cause: The vnicId corresponding to $device could not be obtained. Action: Check if vnicId exists in the JSON record of Instance Meta Data Service. |
139074 | ERROR | Cannot find the subnetcidr of $device. | Cause: The subnetCidrBlock corresponding to $device could not be obtained. Action: Check if subnetcidr exists in the JSON record of Instance Meta Data Service. |
139075 | ERROR | Cannot find the vRouterIp of $device. | Cause: The virtualRouterIp corresponding to $device could not be obtained. Action: Check if virtualRouterIp exists in the JSON record of Instance Meta Data Service. |
139080 | ERROR | | Cause: Failed to execute the oci command. Action: Configure the oci command to end successfully based on the status code and message. |
139081 | ERROR | Failed to get subnet id, error:\"$subnetid\" | Cause: Failed to obtain the subnet id. Action: Check whether the subnet information can be obtained correctly with the oci command, where $vnicid corresponds to the VNIC-ID (OCID) of each VNIC assigned to the node. |
139100 | ERROR | IP address is not specified. | Cause: IP address is not specified. Action: Make sure the IP address is specified. |
139101 | ERROR | Device is not specified. | Cause: Device is not specified. Action: Make sure the device is specified. |
139105 | ERROR | Cannot bring OCIVIP resource $Tag in service on server $SysName. | Cause: Failed to restore the OCIVIP resource $Tag on $SysName. Action: Please refer to the log related to the restore process. |
139111 | ERROR | | Cause: An IP resource corresponding to the IP address of the OCIVIP resource exists, but the device information and netmask information are inconsistent. Action: Change the configuration of the existing IP resource to have the same settings as the OCIVIP resource and create a dependency with the OCIVIP resource. |
139112 | ERROR | Cannot bring IP resource $iptag in service on server $SysName. | Cause: Failed to restore the IP resource $iptag corresponding to the OCIVIP resource on $SysName. Action: Review the configuration of the IP resource and make sure that restoring of the IP resource is successful. Then create a dependency with the OCIVIP resource. |
139200 | ERROR | | Cause: A resource with the specified tag name already exists on the extended node. Action: Use a different tag name, or delete the resource with that tag name on the extended node. |
139201 | ERROR | | Cause: The resource specified as the extension source does not exist on the node. Action: Specify the tag name correctly. |
139250 | ERROR | Failed to valuenetOnOCI, error: \"$msg\". | Cause: valuenetOnOCI failed. Action: The full error message is in $stderr_log. Take action based on it. |
140212 | ERROR | Daemon is running but the pidfile itself not found. | Cause: quickCheck found the daemon running, but its pidfile does not exist. Action: Attempt a local recovery. |
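Several of the 1390xx errors above (139073, 139074, 139075) report that a field is missing from the JSON record returned by the Instance Metadata Service. The sketch below shows one way to extract those fields from an IMDS-style response; the field names (vnicId, subnetCidrBlock, virtualRouterIp) come from the error texts themselves, while the `macAddr` key, the helper function, and the sample data are illustrative assumptions and not part of LifeKeeper.

```python
import json

def vnic_fields(imds_json, mac):
    """Return (vnicId, subnetCidrBlock, virtualRouterIp) for the VNIC whose
    macAddr matches; raises KeyError if an expected field is absent, which
    corresponds to errors 139073-139075 in the catalog above."""
    for vnic in json.loads(imds_json):
        if vnic.get("macAddr") == mac:
            return (vnic["vnicId"], vnic["subnetCidrBlock"], vnic["virtualRouterIp"])
    raise LookupError(f"no VNIC with MAC {mac}")

# Sample IMDS-style record (values are placeholders, not real OCIDs).
sample = json.dumps([{
    "vnicId": "ocid1.vnic.oc1..example",
    "macAddr": "02:00:17:00:00:01",
    "subnetCidrBlock": "10.0.0.0/24",
    "virtualRouterIp": "10.0.0.1",
}])

print(vnic_fields(sample, "02:00:17:00:00:01"))
```

If any of the three keys is absent from the matching record, the lookup raises KeyError, which is the condition the corresponding catalog entries ask you to check for in the IMDS JSON.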