For the health check, the Web service connects via MySQLdb to the MySQL instance running on a given port and issues a 'show databases' command. For more in-depth checking you can obviously run fancier SQL statements.
The code for my small Tornado server is here. The default port it listens on is 31337.
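For reference, here is a minimal sketch of what such a handler might look like with Tornado and MySQLdb (the handler class, credentials and defaults below are illustrative, not necessarily what my actual code uses):

import MySQLdb
import tornado.ioloop
import tornado.web

class MySQLChkHandler(tornado.web.RequestHandler):
    def get(self):
        # HAProxy passes the MySQL port as a query string argument
        port = int(self.get_argument("port", "3306"))
        try:
            # Connect to the local MySQL instance and run a trivial query;
            # any exception (connection refused, bad credentials, etc.)
            # marks the node as unhealthy
            conn = MySQLdb.connect(host="127.0.0.1", port=port,
                                   user="haproxy_check", passwd="CHANGEME",
                                   connect_timeout=2)
            cursor = conn.cursor()
            cursor.execute("show databases")
            cursor.fetchall()
            cursor.close()
            conn.close()
            self.write("MySQL is up\n")       # HTTP 200 -> check passes
        except Exception:
            raise tornado.web.HTTPError(500)  # HTTP error -> check fails

application = tornado.web.Application([
    (r"/mysqlchk/?", MySQLChkHandler),
])

if __name__ == "__main__":
    application.listen(31337)  # the default port mentioned above
    tornado.ioloop.IOLoop.instance().start()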
Now on the HAProxy side I have a "listen" section for each collection of MySQL nodes that I want to load balance. Example:
listen mysql-m0 0.0.0.0:33306
    mode tcp
    option httpchk GET /mysqlchk/?port=3306
    balance roundrobin
    server db101 10.10.10.1:3306 check port 31337 inter 5000 rise 3 fall 3
    server db201 10.10.10.2:3306 check port 31337 inter 5000 rise 3 fall 3 backup

In this case, HAProxy listens on port 33306 and load balances MySQL traffic between db101 and db201, with db101 being the primary node and db201 being the backup node (which means that traffic only goes to db101 unless the health check considers it down, in which case traffic is directed to db201). This scenario is especially useful when db101 and db201 are in a master-master replication setup and you want traffic to hit only one of them at any given time. Note also that I could have had HAProxy listen on port 3306, but I preferred to have it listen and be contacted by the application on port 33306, in case I also wanted to run a MySQL server on port 3306 on the same machine as HAProxy.
I specify how to call the HTTP check handler via "option httpchk GET /mysqlchk/?port=3306". I specify the port the handler listens on via the "port" option in the "server" line. In my case the port is 31337. So HAProxy will do a GET against http://10.10.10.1:31337/mysqlchk/?port=3306. If the result is an HTTP error code, the health check will be considered failed.
The other options, "inter 5000 rise 3 fall 3", mean that HAProxy issues the health check every 5,000 ms, that the check needs to succeed 3 times ("rise 3") for the node to be considered up, and that it needs to fail 3 times ("fall 3") for the node to be considered down. With these values, a healthy node that starts failing its checks will be marked down after roughly 15 seconds.
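From the application side, connecting through HAProxy is no different from connecting to MySQL directly, just on the proxy's port. A minimal sketch (the host name and credentials below are made up):

import MySQLdb

# Connect to HAProxy on port 33306 instead of a database node directly;
# HAProxy forwards the TCP connection to whichever node is currently up
conn = MySQLdb.connect(host="haproxy01.example.com", port=33306,
                       user="app", passwd="CHANGEME", db="mydb")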
I hasten to add that this master-master load balancing has its disadvantages. It did save my butt one Sunday morning when db101 went down hard (after all, it was an EC2 instance) and HAProxy directed traffic to db201 in a fashion totally transparent to the application.
But... I have also seen the situation where db201, as a slave of db101, lagged in its replication, so when db101 was considered down and traffic was sent to db201, the state of the data was stale from the application's point of view. I consider this disadvantage to outweigh the automatic failover advantage, so I actually ended up taking db201 out of HAProxy. If db101 ever goes down hard again, I'll just manually point HAProxy to db201, after making sure the state of the data on db201 is what I expect.
So, all this being said, I recommend the automated failover scenario only when load balancing against a read-only farm of MySQL servers, which are all probably slaves of some master. In that case, although reads can also get out of sync, at least you won't attempt to do creates/updates/deletes against stale data.
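Such a read-only section would look just like the one above, minus the "backup" keyword, so that all slaves receive traffic in round-robin fashion. A sketch (the server names and IPs below are made up):

listen mysql-readonly 0.0.0.0:34306
    mode tcp
    option httpchk GET /mysqlchk/?port=3306
    balance roundrobin
    server db102 10.10.10.3:3306 check port 31337 inter 5000 rise 3 fall 3
    server db103 10.10.10.4:3306 check port 31337 inter 5000 rise 3 fall 3
    server db104 10.10.10.5:3306 check port 31337 inter 5000 rise 3 fall 3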
The sad truth is that there is no good way of doing automated load balancing AND failover with MySQL without resorting to things such as DRBD, which are not cloud-friendly. I am aware of Yves Trudeau's blog posts on "High availability for MySQL on Amazon EC2", but the setup he describes strikes me as experimental, and I wouldn't trust it in a large-scale production setup.
In any case, I hope somebody will find the tornado handler I wrote useful for their own MySQL health checks, or actually any TCP-based health check they need to do within HAProxy.
9 comments:
Hi Grig!
You may like to test haproxy 1.4.9, which contains some new work by Hervé Commowick for MySQL health checks. It really authenticates as a declared user, so that's already much more reliable than the original health check method.
Willy
Thanks for the comment, Willy. I'll take a look.
I thought DRBD was only for failover.
Is it possible to integrate DRBD with a load balancing solution?
Sebi
hi Grig,
Why not use MySQL Proxy for MySQL load balancing rather than HAProxy?
MySQL Proxy is a native MySQL app; did you give it a try? Did you compare them?
Thanks
Serg
Serg -- I did try MySQL Proxy, and I did use it in production when there was no other solution available, for example when splitting reads and writes. But it is not production quality; there are still bugs that can bite you at any time. HAProxy is solid and works fine for reads.
I don't think this works if your backup master is delayed by replication.
master1 receives thousands of queries.
master2 gets behind on one query.
master1 goes down.
traffic is redirected to master2.
If an auto-increment value has been used on master1 but not yet replicated to master2, and traffic is hitting master2, the potential is that front-end traffic will take the same value and replication will break.
Phil B -- you're right, which is precisely why I don't really recommend this method any more (and I mentioned this in the post). HAProxy is good in front of read-only MySQL slaves, but I don't use it anymore in a master-master scenario.
Ah, if your master-master setup doesn't use the same auto-increment counters, i.e. one increments by odd numbers and the other by even numbers (via MySQL's auto_increment_increment and auto_increment_offset settings), then one going down won't matter. So this solution is valid again??!??
Anonymous, we have HAProxy in front of 2 production MySQL servers with master-master replication, with db1 on odd-number auto-increments and db2 on evens. Works fine!