The following test scenarios have been included to guide you as you get started evaluating SIOS Protection Suite for Linux. Before beginning these tests, make sure the data replication resources are in the mirroring state.
Manual Switchover of the MySQL Hierarchy to Secondary Server
Procedure:
- From the LifeKeeper GUI, right click on the MySQL resource on the Secondary Server (LINUXSECONDARY) and choose IN SERVICE.
- Click “In Service” in the window that pops up
Expected Result:
- Beginning with the MySQL resource, all resources will be removed from service on the Active Server (LINUXPRIMARY).
- Beginning with the dependent resources (IP and Replicated Volume), all resources will be brought in service on LINUXSECONDARY.
- During this process, the direction of the mirror is reversed. Data is now transmitted from LINUXSECONDARY -> LINUXPRIMARY.
- At this point, all resources are now active on LINUXSECONDARY.
Tests/Verification:
- Using the LifeKeeper GUI, verify that the MySQL and dependent resources are active on LINUXSECONDARY.
- Using the LifeKeeper GUI, verify the mirror is now reversed and mirroring in the opposite direction. Right-click on the “datarep-mysql” resource and select Properties.
- Run “ifconfig -a” on LINUXSECONDARY to validate that the IP address 192.168.197.151 is active on LINUXSECONDARY.
- Run “df -h” to verify that the /var/lib/mysql replicated filesystem is mounted as an “md” device (example: /dev/md0) on LINUXSECONDARY.
- Verify the MySQL services are running on LINUXSECONDARY by running “ps -ef | grep -i mysql”.
- On LINUXSECONDARY, run the following command to verify client connectivity to the MySQL database:
º # mysql -S /var/lib/mysql/mysql.sock -u root -p
º (enter password “SteelEye”)
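The mount check above can also be scripted. The sketch below is a hypothetical helper (the function name check_md_mount is not part of the product; the address 192.168.197.151 and mount point /var/lib/mysql come from this guide):

```shell
#!/bin/sh
# Sketch: verify a mount point is backed by an md (replicated) device.
# Hypothetical helper; paths and IP below are the ones used in this guide.

check_md_mount() {
  # Succeed if the given mount point is backed by a /dev/md* device.
  # Reads mount-table lines ("device mountpoint type ...") on stdin.
  awk -v mp="$1" '$2 == mp && $1 ~ /^\/dev\/md/ { found = 1 } END { exit !found }'
}

# On LINUXSECONDARY you would run:
#   ifconfig -a | grep -q '192.168.197.151' && echo "virtual IP active"
#   check_md_mount /var/lib/mysql < /proc/mounts && echo "mounted on md device"
#   ps -ef | grep -i '[m]ysql'
```

Feeding `/proc/mounts` to the helper avoids parsing the human-oriented output of “df -h”.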
Manual Switchover of the MySQL Hierarchy back to Primary Server
Procedure:
- From the LifeKeeper GUI, right click on the MySQL resource on the Primary Server (LINUXPRIMARY) and choose IN SERVICE.
- Click “In Service” in the window that pops up
Expected Result:
- Beginning with the MySQL resource, all resources will be removed from service on the Active Server (LINUXSECONDARY).
- Beginning with the dependent resources (IP and Replicated Volume), all resources will be brought in service on LINUXPRIMARY.
- During this process, the direction of the mirror is reversed. Data is now transmitted from LINUXPRIMARY -> LINUXSECONDARY.
Tests/Verification:
- Using the LifeKeeper GUI, verify that the MySQL and dependent resources are active on LINUXPRIMARY.
- Using the LifeKeeper GUI, verify the mirror is now reversed and mirroring in the opposite direction. Right-click on the “datarep-mysql” resource and select Properties.
- Run “ifconfig -a” on LINUXPRIMARY to validate that the IP address 192.168.197.151 is active on LINUXPRIMARY.
- Run “df -h” to verify that the /var/lib/mysql replicated filesystem is mounted as an “md” device (example: /dev/md0) on LINUXPRIMARY.
- Verify the MySQL services are running on LINUXPRIMARY by running “ps -ef | grep -i mysql”.
- On LINUXPRIMARY, run the following command to verify client connectivity to the MySQL database:
º # mysql -S /var/lib/mysql/mysql.sock -u root -p
º (enter password “SteelEye”)
Simulate a network failure on the Primary Server by failing the IP resource
If you perform this test with only one communication path configured, your system will enter a split-brain scenario, as described in the LifeKeeper Administration Guide. Refer to that guide for more information, or contact SIOS presales technical support for assistance in resolving this condition.
Procedure:
- On LINUXPRIMARY, pull the network cable attached to the NIC that the virtual IP address is configured on
Expected Result:
- The IP Resource should fail first.
- The entire hierarchy should failover to LINUXSECONDARY
Tests/Verification:
- Check the LifeKeeper log to verify the IP resource failed: “/opt/LifeKeeper/bin/lk_log log”
- Using the LifeKeeper GUI, verify the MySQL and Apache resource hierarchies fail over successfully to LINUXSECONDARY
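When scanning the LifeKeeper log for the IP failure, a small filter can help. This is only a sketch: the function name and the grep pattern are assumptions, since the exact log wording varies by LifeKeeper version.

```shell
#!/bin/sh
# Sketch: pull IP-resource failure/recovery lines from the LifeKeeper log.
# The pattern below is an assumption; adjust it to the messages in your log.

filter_ip_events() {
  grep -i 'ip' | grep -iE 'fail|recover'
}

# On a cluster node you would run:
#   /opt/LifeKeeper/bin/lk_log log | filter_ip_events
```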
Hard failover of the resource from the Secondary Server back to the Primary Server
Procedure:
- Pull the power cord on LINUXSECONDARY, as this is the server with all resources currently In Service.
Expected Result:
- After failure has been detected, beginning with the dependent resources (IP and Volume), all resources will be brought in service on LINUXPRIMARY.
Tests/Verification:
- Using the LifeKeeper GUI, verify the mirror has reversed and is in a Resync Pending state, waiting for LINUXSECONDARY to come back online.
- Verify the Apache and MySQL Server services are running on LINUXPRIMARY.
- Verify that the client can still connect to the Webserver and database running on LINUXPRIMARY.
- Verify you can write data to the replicated volume, /var/lib/mysql on LINUXPRIMARY.
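The write check in the last step can be scripted as a write-and-read-back test. This is a sketch; the helper name and the scratch file lk_write_test.txt are assumptions (the file is removed after the check).

```shell
#!/bin/sh
# Sketch: confirm a replicated volume accepts writes by writing a stamp,
# reading it back, and cleaning up. Scratch file name is an assumption.

write_check() {
  dir="$1"
  stamp="lifekeeper write test $$"
  echo "$stamp" > "$dir/lk_write_test.txt" || return 1
  grep -qx "$stamp" "$dir/lk_write_test.txt"
  status=$?
  rm -f "$dir/lk_write_test.txt"
  return $status
}

# On LINUXPRIMARY you would run:
#   write_check /var/lib/mysql && echo "replicated volume is writable"
```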
Bring the Failed Server Back Online
Procedure:
- Plug the power cord back into LINUXSECONDARY and boot it up.
Expected Result:
- LINUXSECONDARY comes back up and becomes the Standby Server.
Tests/Verification:
- Verify the mirror performs a quick partial resync and moves to the Mirroring state.
- Verify the Apache and MySQL Hierarchy are in service on LINUXPRIMARY and standby on LINUXSECONDARY.
Verify Local Recovery of MySQL Server
Procedure:
- Kill the MySQL processes via the command line:
- # ps -ef | grep sql
- # killall mysqld mysqld_safe
- Run “ps -ef | grep sql” once again to verify that the processes no longer exist.
Expected Result: (Assumes Local Recovery for MySQL resource is set to YES)
- The MySQL Server service should stop.
- The MySQL quickcheck process will automatically restart the MySQL Server Service when it runs periodically.
- No failover of the MySQL hierarchy should occur.
Tests/Verification:
- Execute “ps -ef | grep sql” once again to verify that the mysql processes have been restored locally on LINUXPRIMARY.
- Verify connectivity to the MySQL database by running:
- # mysql -S /var/lib/mysql/mysql.sock -u root -p
- # (Enter password “SteelEye”)
- If you inspect the LifeKeeper logs, you will see information indicating that LifeKeeper detected the failure of the MySQL service and recovered it locally. Run “/opt/LifeKeeper/bin/lk_log log” for more information.
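Because the quickcheck-driven restart happens on LifeKeeper's next monitoring interval rather than immediately, it can be convenient to poll for the recovered process. A minimal sketch, assuming pgrep is available; the retry count and 5-second interval are assumptions, not LifeKeeper settings:

```shell
#!/bin/sh
# Sketch: wait for a process to reappear after LifeKeeper local recovery.
# Retry count and sleep interval are arbitrary choices for illustration.

wait_for_process() {
  name="$1"
  tries="${2:-24}"
  while [ "$tries" -gt 0 ]; do
    if pgrep -x "$name" > /dev/null 2>&1; then
      return 0    # process is back
    fi
    tries=$((tries - 1))
    [ "$tries" -gt 0 ] && sleep 5
  done
  return 1        # gave up waiting
}

# On LINUXPRIMARY you would run:
#   wait_for_process mysqld && echo "mysqld recovered locally"
```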