Tuesday, August 31, 2010

Poor man's MySQL disaster recovery in EC2 using EBS volumes

First of all, I want to emphasize that this is NOT a disaster recovery strategy I recommend. However, in a pinch, it might save your ass. Here's the scenario I have:

  • 2 m1.large instances running Ubuntu 10.04 64-bit and the Percona XtraDB MySQL builds (for the record, the exact version I'm using is "Server version: 5.1.43-60.jaunty.11-log (Percona SQL Server (GPL), XtraDB 9.1, Revision 60)")
  • I'll call the 2 servers db101 and db201
  • each server is running 2 MySQL instances -- I'll call them m1 and m2
  • instance m1 on db101 and instance m1 on db201 are set up in master-master replication (and similar for instance m2)
  • the DATADIR for m1 is /var/lib/mysql/m1 on each server; that file system is mounted from an EBS volume (and similar for m2)
  • the configuration files for m1 are in /etc/mysql1 on each server -- that directory was initially a copy of the Ubuntu /etc/mysql configuration directory, which I then customized (and similar for m2)
  • the init.d script for m1 is in /etc/init.d/mysql1 (similar for m2); a sketch of how the two instances are kept separate appears right after this list
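To make the layout above concrete, this is roughly how the two instances stay out of each other's way. The my.cnf file names, socket paths and the 3307 port for m2 are illustrative guesses for this sketch, not copied from my servers -- the real values live in the customized config directories:

# each instance gets its own config dir, datadir, socket and port
# (the socket paths and port 3307 for m2 are assumptions in this sketch)
mysqld_safe --defaults-file=/etc/mysql1/my.cnf --datadir=/var/lib/mysql/m1 \
  --socket=/var/run/mysqld/mysqld1.sock --port=3306 &
mysqld_safe --defaults-file=/etc/mysql2/my.cnf --datadir=/var/lib/mysql/m2 \
  --socket=/var/run/mysqld/mysqld2.sock --port=3307 &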
What I tested:
  • I took a snapshot of each of the 2 EBS volumes associated with each of the DB servers (4 snapshots in all)
  • I terminated the 2 m1.large instances
  • I launched 2 m1.xlarge instances and installed the same Percona distribution (this was done via a Chef recipe at instance launch time); I'll call the 2 new instances xdb101 and xdb201
  • I pushed the configuration files for m1 and m2, as well as the init.d scripts, to the new instances (this was done via Fabric)
  • I created new volumes from the EBS snapshots (note that these volumes can be created in any availability zone within the same region); this step and the next two are sketched as commands right after this list
  • On xdb101, I attached the 2 volumes created from the EBS snapshots on db101; I specified /dev/sdm and /dev/sdn as the device names (similar on xdb201)
  • On xdb101, I created /var/lib/mysql/m1 and mounted /dev/sdm there; I also created /var/lib/mysql/m2 and mounted /dev/sdn there (similar on xdb201)
  • At this point, the DATADIR directories for both m1 and m2 are populated with 'live files' from the moment when I took the EBS snapshot
  • I made sure syslog-ng accepts UDP traffic from localhost (by default it doesn't); this matters because on Ubuntu, MySQL log messages are sent to syslog by default. To do this, I ensured that "udp(ip(127.0.0.1) port(514));" appears in the "source s_all" entry in /etc/syslog-ng/syslog-ng.conf (a quick way to verify this is sketched below)
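For reference, the volume steps on xdb101 look roughly like this (shown here with the aws command line tool for illustration; the snapshot, volume and instance IDs and the availability zone are placeholders, and the same thing can be done from the AWS console or the EC2 API tools):

# create a volume from the snapshot, in the AZ where xdb101 is running
aws ec2 create-volume --snapshot-id snap-11111111 --availability-zone us-east-1a
# attach it to xdb101 as /dev/sdm (repeat for the second snapshot as /dev/sdn)
aws ec2 attach-volume --volume-id vol-22222222 --instance-id i-33333333 --device /dev/sdm
# on xdb101 itself, mount the volume where MySQL expects its DATADIR
mkdir -p /var/lib/mysql/m1
mount /dev/sdm /var/lib/mysql/m1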
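And a quick way to double-check the syslog-ng change (the grep string is exactly the line mentioned above; syslog-ng needs a restart to pick up the new source):

grep -F 'udp(ip(127.0.0.1) port(514));' /etc/syslog-ng/syslog-ng.conf
/etc/init.d/syslog-ng restart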
At this point, I started up the first MySQL instance on xdb101 via "/etc/init.d/mysql1 start". This script will most likely show [fail] on the console, because MySQL does not come up cleanly within the time the init script waits for it -- it first has to run InnoDB crash recovery. If you look in /var/log/syslog, you'll see entries similar to:

Aug 31 18:03:21 xdb101 mysqld: 100831 18:03:21 [Note] Plugin 'FEDERATED' is disabled.
Aug 31 18:03:21 xdb101 mysqld: InnoDB: The InnoDB memory heap is disabled
Aug 31 18:03:21 xdb101 mysqld: InnoDB: Mutexes and rw_locks use GCC atomic builtins
Aug 31 18:03:22 xdb101 mysqld: 100831 18:03:22  InnoDB: highest supported file format is Barracuda.
Aug 31 18:03:23 xdb101 mysqld: InnoDB: The log sequence number in ibdata files does not match
Aug 31 18:03:23 xdb101 mysqld: InnoDB: the log sequence number in the ib_logfiles!
Aug 31 18:03:23 xdb101 mysqld: 100831 18:03:23  InnoDB: Database was not shut down normally!
Aug 31 18:03:23 xdb101 mysqld: InnoDB: Starting crash recovery.

If you wait a bit longer (and if you're lucky), you'll see entries similar to:

Aug 31 18:04:20 xdb101 mysqld: InnoDB: Restoring possible half-written data pages from the doublewrite
Aug 31 18:04:20 xdb101 mysqld: InnoDB: buffer...
Aug 31 18:04:24 xdb101 mysqld: InnoDB: In a MySQL replication slave the last master binlog file
Aug 31 18:04:24 xdb101 mysqld: InnoDB: position 0 15200672, file name mysql-bin.000015
Aug 31 18:04:24 xdb101 mysqld: InnoDB: and relay log file
Aug 31 18:04:24 xdb101 mysqld: InnoDB: position 0 15200817, file name ./mysqld-relay-bin.000042
Aug 31 18:04:24 xdb101 mysqld: InnoDB: Last MySQL binlog file position 0 17490532, file name /var/lib/mysql/m1/mysql-bin.000002
Aug 31 18:04:24 xdb101 mysqld: 100831 18:04:24 InnoDB Plugin 1.0.6-9.1 started; log sequence number 1844705956
Aug 31 18:04:24 xdb101 mysqld: 100831 18:04:24 [Note] Recovering after a crash using /var/lib/mysql/m1/mysql-bin
Aug 31 18:04:24 xdb101 mysqld: 100831 18:04:24 [Note] Starting crash recovery...
Aug 31 18:04:24 xdb101 mysqld: 100831 18:04:24 [Note] Crash recovery finished.

At this point, you can do "/etc/init.d/mysql1 restart" just to make sure that both stopping and starting that instance work as expected. Repeat for instance m2, and also repeat on server xdb201.
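If you want a slightly stronger check than watching the init script output, something like this works (the socket path is whatever your m1 config defines; the one below is just an example):

/etc/init.d/mysql1 restart
# confirm the instance answers on its own socket
mysqladmin -S /var/run/mysqld/mysqld1.sock ping
mysql -S /var/run/mysqld/mysqld1.sock -e "SHOW DATABASES;"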

So... IF you are lucky and the InnoDB crash recovery process did its job, you should have 2 functional MySQL instances on each of xdb101 and xdb201. I tested this with several pairs of servers and it worked for me every time, but I hasten to say that YMMV, so DO NOT bet on this as your disaster recovery strategy!

At this point I still had to re-establish the master-master replication between m1 on xdb101 and m1 on xdb201 (and similar for m2). 

When I initially set up this replication between the original m1.large servers, I used something like this on both db101 and db201:

CHANGE MASTER TO MASTER_HOST='master1', MASTER_PORT=3306, MASTER_USER='masteruser', MASTER_PASSWORD='xxxxxx';

The trick for me is that master1 points to db201 in db101's /etc/hosts, and vice-versa.
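In other words (the IP addresses below are placeholders for the servers' private IPs):

# on db101, master1 resolves to db201's private IP:
echo "10.0.0.2   master1" >> /etc/hosts
# on db201, master1 resolves to db101's private IP:
echo "10.0.0.1   master1" >> /etc/hosts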

On the newly created xdb101 and xdb201, there are no entries for master1 in /etc/hosts, so replication is broken. Which is a good thing initially, because you want the MySQL instances on each server to come back up without throwing replication into the mix.

Once I added an entry for master1 in xdb101's /etc/hosts pointing to xdb201, and did the same on xdb201, I did a 'stop slave; start slave; show slave status\G' on the m1 instance on each server. In all cases I tested, one of the slaves showed everything OK, while the other one complained about not being able to read from the master's log file. This was fairly simple to fix. Let's assume xdb101 is the one complaining. I did the following:
  • on xdb201, I ran 'show master status\G' and noted the file name (for example "mysql-bin.000017") and the file position (for example 106)
  • on xdb101, I ran the following command: "stop slave; change master to master_log_file='mysql-bin.000017', master_log_pos=106; start slave;"
  • now a 'show slave status\G' on xdb101 should show everything back to normal (the full fix is sketched right after this list)
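Put together, and keeping in mind that each server runs two mysqld instances so you have to point the client at the right one (the socket paths below are examples; use whatever your m1 config defines), the fix looks like this:

# on xdb201: note the current binlog file and position of the m1 instance
mysql -S /var/run/mysqld/mysqld1.sock -e "show master status\G"
# on xdb101: point the m1 slave at that file/position and restart the slave threads
mysql -S /var/run/mysqld/mysqld1.sock -e "stop slave; change master to master_log_file='mysql-bin.000017', master_log_pos=106; start slave;"
# verify
mysql -S /var/run/mysqld/mysqld1.sock -e "show slave status\G"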
Some lessons:
  • take periodic snapshots of your EBS volumes (at least 1/day)
  • for a true disaster recovery strategy, use at least mysqldump to dump your DB to disk periodically, or something more advanced such as Percona XtraBackup; I recommend dumping the DB to an EBS volume and taking periodic snapshots of that volume (a minimal sketch follows this list)
  • the procedure I detailed above is handy when you want to grow your instance 'vertically' -- for example I went from m1.large to m1.xlarge
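As a minimal sketch of that mysqldump suggestion -- something along these lines run from cron on each server; the socket path and the /mnt/db-backups mount point are placeholders, and for anything serious you'd want Percona XtraBackup or at least proper error handling around it:

# nightly logical dump of the m1 instance onto a dedicated EBS volume
# mounted at /mnt/db-backups, which you then snapshot periodically
mysqldump -S /var/run/mysqld/mysqld1.sock --all-databases --single-transaction \
  --master-data=2 | gzip > /mnt/db-backups/m1-$(date +%F).sql.gz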
