Thursday, May 06, 2010

RabbitMQ clustering in Ubuntu

I've been banging my head against various RabbitMQ-related walls in the last couple of days, so I wanted to quickly jot down how I got clustering to work on Ubuntu boxes, before I forget.

Scenario A
  • 2 Ubuntu 9.04 64-bit servers (app01 and app02)
  • 1 Ubuntu 9.10 64-bit server (app03)
  • all servers are part of an internal DNS zone
My initial mistake here was to install the rabbitmq-server package via "apt-get install". On 9.04 I got version 1.5.4 of rabbitmq-server, while on 9.10 I got 1.6. As it turns out, the database schemas (RabbitMQ uses the Mnesia database) of these two versions are incompatible for clustering purposes.

When I tried to join the server running Ubuntu 9.10 to the cluster, it complained with:

error: {schema_integrity_check_failed,
           {aborted,{no_exists,rabbit_user,version}}}

So... I removed rabbitmq-server via "apt-get remove" on all three servers. I also removed /var/lib/rabbitmq (which contains the mnesia database directory) and /var/log/rabbitmq (which contains the logs).

I then downloaded the rabbitmq-server_1.7.2-1_all.deb package and installed it, but only after also installing erlang. So:

# apt-get install erlang
# dpkg -i rabbitmq-server_1.7.2-1_all.deb

The install process automatically starts up the rabbitmq server. You can see its status by running:

# rabbitmqctl status
Status of node 'rabbit@app02' ...
[{running_applications,[{rabbit,"RabbitMQ","1.7.2"},
                        {mnesia,"MNESIA  CXC 138 12","4.4.7"},
                        {os_mon,"CPO  CXC 138 46","2.1.8"},
                        {sasl,"SASL  CXC 138 11","2.1.5.4"},
                        {stdlib,"ERTS  CXC 138 10","1.15.5"},
                        {kernel,"ERTS  CXC 138 10","2.12.5"}]},
 {nodes,['rabbit@app02']},
 {running_nodes,['rabbit@app02']}]
...done.

Very important step: make sure the 3 servers have the same .erlang.cookie file. By default, every new installation creates a file containing a unique cookie in /var/lib/rabbitmq/.erlang.cookie. For clustering, the cookie needs to be the same, otherwise the servers are not allowed to access the shared cluster state on each other.

Here's what I did:
  • I stopped rabbitmq-server on app02 and app03 via '/etc/init.d/rabbitmq-server stop'
  • I copied /var/lib/rabbitmq/.erlang.cookie from app01 to app02 and app03
  • I removed /var/lib/rabbitmq/mnesia on app02 and app03 (you need to do this when you change the cookie)
  • finally I restarted rabbitmq-server on app02 and app03 via '/etc/init.d/rabbitmq-server start'
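The cookie-sync steps above can be sketched as a small script run from app01. This is just a sketch of what I did by hand; it assumes you can ssh as root from app01 to the other nodes, and it prints the commands by default (dry run) instead of executing them:

```shell
#!/bin/sh
# Sync the Erlang cookie from this node (app01) to the others and
# wipe their Mnesia directories so they accept the new cookie.
# Dry run by default: prints the commands. Set DRY_RUN= to execute.
DRY_RUN="${DRY_RUN:-echo}"
COOKIE=/var/lib/rabbitmq/.erlang.cookie

sync_cookie() {
    for host in "$@"; do
        $DRY_RUN ssh "$host" /etc/init.d/rabbitmq-server stop
        $DRY_RUN scp "$COOKIE" "$host:$COOKIE"
        # the cookie must be owned by rabbitmq and readable only by it
        $DRY_RUN ssh "$host" chown rabbitmq:rabbitmq "$COOKIE"
        # the mnesia dir must be wiped whenever the cookie changes
        $DRY_RUN ssh "$host" rm -rf /var/lib/rabbitmq/mnesia
        $DRY_RUN ssh "$host" /etc/init.d/rabbitmq-server start
    done
}

sync_cookie app02 app03
```

The hostnames app02 and app03 are from my setup; adjust to yours.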

Now comes the actual clustering setup. In my case, I wanted all nodes to be disk nodes rather than RAM nodes, for durability.

Here's what I did on app02, following the RabbitMQ clustering doc:

# rabbitmqctl stop_app
Stopping node 'rabbit@app02' ...
...done.

# rabbitmqctl reset
Resetting node 'rabbit@app02' ...
...done.

# rabbitmqctl cluster rabbit@app01 rabbit@app02
Clustering node 'rabbit@app02' with ['rabbit@app01',
                                         'rabbit@app02'] ...
...done.

# rabbitmqctl start_app
Starting node 'rabbit@app02' ...
...done.

# rabbitmqctl status
Status of node 'rabbit@app02' ...
[{running_applications,[{rabbit,"RabbitMQ","1.7.2"},
                        {mnesia,"MNESIA  CXC 138 12","4.4.7"},
                        {os_mon,"CPO  CXC 138 46","2.1.8"},
                        {sasl,"SASL  CXC 138 11","2.1.5.4"},
                        {stdlib,"ERTS  CXC 138 10","1.15.5"},
                        {kernel,"ERTS  CXC 138 10","2.12.5"}]},
 {nodes,['rabbit@app02','rabbit@app01']},
 {running_nodes,['rabbit@app01','rabbit@app02']}]
...done.

You need to make sure at this point that you can see both nodes app01 and app02 listed in the nodes list and also in the running_nodes list. If you forgot the cookie step, the steps above will still work, but you'll only see app02 listed under nodes and running_nodes. Only if you inspect the log file /var/log/rabbitmq/rabbit.log on app01 will you see errors such as

=ERROR REPORT==== 6-May-2010::11:12:03 ===
** Connection attempt from disallowed node rabbit@app02 **

Assuming you see both nodes app01 and app02 in the nodes list, you can proceed with the same steps on app03. At this point, you should see something similar to this if you run 'rabbitmqctl status' on each node:

# rabbitmqctl status
Status of node 'rabbit@app03' ...
[{running_applications,[{rabbit,"RabbitMQ","1.7.2"},
                        {mnesia,"MNESIA  CXC 138 12","4.4.10"},
                        {os_mon,"CPO  CXC 138 46","2.2.2"},
                        {sasl,"SASL  CXC 138 11","2.1.6"},
                        {stdlib,"ERTS  CXC 138 10","1.16.2"},
                        {kernel,"ERTS  CXC 138 10","2.13.2"}]},
 {nodes,['rabbit@app03','rabbit@app02','rabbit@app01']},
 {running_nodes,['rabbit@app01','rabbit@app02','rabbit@app03']}]
...done.
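The join sequence (stop_app, reset, cluster, start_app) is the same on every node you add, so it can be wrapped in a small script. A sketch, assuming rabbit@app01 is an existing disk node in the cluster and using the 1.7.x "rabbitmqctl cluster" syntax, where listing the local node name makes it a disk node; it prints the commands by default (dry run):

```shell
#!/bin/sh
# Join the local node to the cluster as a disk node (RabbitMQ 1.7.x).
# Dry run by default: prints the rabbitmqctl commands. Set DRY_RUN=
# to execute them for real.
DRY_RUN="${DRY_RUN:-echo}"
SEED=rabbit@app01    # an existing disk node in the cluster

join_cluster() {
    me=$(hostname -s)
    $DRY_RUN rabbitmqctl stop_app
    # reset wipes the local node's state -- only do this on a fresh node
    $DRY_RUN rabbitmqctl reset
    # including the local node name in the list makes it a disk node;
    # leaving it out would make it a RAM node
    $DRY_RUN rabbitmqctl cluster "$SEED" "rabbit@$me"
    $DRY_RUN rabbitmqctl start_app
}

join_cluster
```

After running it for real, check 'rabbitmqctl status' and make sure the new node shows up in both nodes and running_nodes.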

Now you can shut down and restart 2 of these 3 nodes, and the cluster state will be preserved. I am still experimenting with what happens if all 3 nodes go down. I tried it and I got a timeout when trying to start up rabbitmq-server again on one of the nodes. I could only get it to start by removing the database directory /var/lib/rabbitmq/mnesia -- but this kinda defeats the purpose of persisting the state of the cluster on disk.

Now for the second scenario, where the servers you want to cluster are not part of a DNS zone (for example if they're EC2 instances).

Scenario B
  • 2 Ubuntu 9.04 servers running as EC2 instances (ec2app01 and ec2app02)
  • no internal DNS
Let me cut to the chase here and say that the proper solution to this scenario is to set up an internal DNS zone. The fact is that rabbitmq-server nodes communicate amongst themselves using DNS. If you choose not to use DNS, and instead rely on entries in /etc/hosts or even raw IP addresses, you need to resort to ugly hacks: editing the rabbitmq-server, rabbitmq-multi and rabbitmqctl scripts in /usr/lib/rabbitmq/bin and specifying -name instead of -sname everywhere that option appears. Even then things might not work, and they will break when you upgrade rabbitmq-server. See this discussion and this blog post for more details on the issue. In a nutshell, the RabbitMQ clustering mechanism expects the FQDN of each node to be resolvable via DNS on all the other nodes in the cluster.
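Before attempting to cluster, it's worth verifying that every node can resolve every other node's hostname. A quick sanity check along these lines (the hostnames are from my setup; run it on each node):

```shell
#!/bin/sh
# Check that each cluster member's hostname resolves on this box.
# RabbitMQ's startup scripts use -sname, so it's the short hostnames
# that need to resolve (via DNS, not just /etc/hosts hacks).
check_resolution() {
    for host in "$@"; do
        if getent hosts "$host" >/dev/null 2>&1; then
            echo "ok: $host resolves"
        else
            echo "FAIL: $host does not resolve"
        fi
    done
}

check_resolution app01 app02 app03
```

If any node prints a FAIL line here, fix name resolution first; clustering will not work until it does.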

So...my solution was to deploy an internal DNS server running bind9 in EC2 -- which is something that sooner or later you need to do if you run more than a handful of EC2 instances.


Future work

This post is strictly about RabbitMQ clustering setup. There are many other facets of deploying a highly-available RabbitMQ system.

For example, I am also experimenting with putting an HAProxy load balancer in front of my RabbitMQ servers. I want all clients to communicate with a single entry point in my RabbitMQ cluster, and I want that entry point to load balance across all servers in the cluster. I'll report on that setup later.
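For the curious, the HAProxy side of that experiment is just plain TCP balancing. A minimal haproxy.cfg fragment, untested as of this writing; it assumes RabbitMQ's default AMQP port 5672 on each node, and the frontend port 5670 is an arbitrary choice:

# Minimal TCP front end for a RabbitMQ cluster (sketch).
# Clients connect to the HAProxy box on port 5670.
listen rabbitmq_cluster
    bind 0.0.0.0:5670
    mode tcp
    balance roundrobin
    server app01 app01:5672 check
    server app02 app02:5672 check
    server app03 app03:5672 check

Note that this only balances connections; it does nothing about the queue-locality problem discussed below.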

Another point that I need to figure out is how to deal with failures at the individual node level, given that RabbitMQ queues are not distributed across all nodes in a cluster, but instead they stay on the node where they were initially created. See this discussion for more details on this issue.

If you have deployed a RabbitMQ cluster and worked through some of these issues already, I'd love to hear about your experiences, so please leave a comment.

6 comments:

Anonymous said...

In case you haven't already got your hands on it, the RabbitMQ support folks have recently published a guide to one kind of HA setup at

http://www.rabbitmq.com/pacemaker.html

Grig Gheorghiu said...

I did see the solution with DRBD and pacemaker, but I would like to avoid heartbeat-type HA solutions because they are not portable to EC2.

Chris said...

Hi Grig,

I appreciate your post about RabbitMQ and clustering, thanks. At the end you stated:

"I am also experimenting with putting an HAProxy load balancer in front of my RabbitMQ servers. I want all clients to communicate with a single entry point in my RabbitMQ cluster, and I want that entry point to load balance across all servers in the cluster. I'll report on that setup later."

Did you end up going this route? I'm interested on any additional thoughts you have as this seems like a good solution for what I'm doing.

Also, you stated:
"Another point that I need to figure out is how to deal with failures at the individual node level, given that RabbitMQ queues are not distributed across all nodes in a cluster, but instead they stay on the node where they were initially created."

I'm also struggling with how to deal with this problem. Any additional thoughts or experience since you posted?

Thanks,
Chris

Grig Gheorghiu said...

Hi Chris

Unfortunately I haven't made any progress with RabbitMQ clustering. We actually ended up implementing our own queuing system, and we just do an old-fashioned rsync at the file level for disaster recovery purposes.

GodLikeMouse said...

Hey Chris, here's a real easy to follow step by step guide on clustering Rabbit-MQ. You can pretty much just go line by line and have clustering working in a few minutes.

http://www.godlikemouse.com/2010/12/14/how-to-cluster-rabbit-mq/

Hope this helps.

checkspace said...

You have saved me tons of time, Thanks heaps for this post mate. Cheers
