Friday, May 31, 2013

Some gotchas around keepalived and iproute2 (part 1)

I should have written this blog post a while ago, while these things were still fresh in my mind. Still, better late than never.

Scenario: 2 bare-metal servers with 6 network ports each, to serve as our HAProxy load balancers in an active/failover configuration based on keepalived (I described how we integrated this with Chef in my previous post).

The architecture we have for the load balancers is as follows:

  • 1 network interface (virtual and bonded, see below) is on a 'front-end' VLAN which receives the incoming traffic destined for HAProxy
  • 1 network interface is on a 'back-end' VLAN where the actual servers behind HAProxy live
  • 1 network interface is on an 'ops' VLAN which we want to use for accessing the HAProxy server for monitoring purposes

We (and by the way, when I say we, I mean mostly my colleagues Jeff Roberts and Zmer Andranigian) used Open vSwitch to create a virtual bridge interface for each of the 'front-end' and 'back-end' networks, and to bond 2 physical NICs onto each bridge.

To install Open vSwitch on Ubuntu, use:

# apt-get install openvswitch-switch openvswitch-controller

To create a bridge:

# ovs-vsctl add-br frontend_if

To create a bonded interface with 2 physical NICs (eth0 and eth1) on the frontend_if bridge created above:

# ovs-vsctl add-bond frontend_if frontend_bond eth0 eth1 lacp=active other_config:lacp-time=slow bond_mode=balance-tcp
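
To verify that the bond came up and that LACP negotiated with the switch, the standard Open vSwitch status commands can be used, for example:

# ovs-vsctl list-ports frontend_if
# ovs-appctl bond/show frontend_bond
# ovs-appctl lacp/show frontend_bond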

We did the same for the 'back-end' interface, creating a backend_if bridge and bonding eth2 and eth3 (a sketch of the equivalent commands follows the interface configuration below). We also configured the 'ops' interface as a regular network interface on eth4. To assign IP addresses to frontend_if, backend_if and eth4, we edited /etc/network/interfaces and added stanzas similar to:

auto eth0
iface eth0 inet static
address 0.0.0.0

auto eth1
iface eth1 inet static
address 0.0.0.0



auto frontend_if
iface frontend_if inet static
        address 172.3.15.36
        netmask 255.255.255.0
        network 172.3.15.0
        broadcast 172.3.15.255
        gateway 172.3.15.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 172.3.10.4 172.3.10.8
        dns-search prod-vip.mydomain.com


auto eth2
iface eth2 inet static
address 0.0.0.0

auto eth3
iface eth3 inet static
address 0.0.0.0


auto backend_if
iface backend_if inet static
        address 172.3.50.36
        netmask 255.255.255.0
        network 172.3.50.0
        broadcast 172.3.50.255
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 172.3.10.4 172.3.10.8
        dns-search prod.mydomain.com

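As mentioned above, the back-end bridge and bond were created the same way; here is a sketch of the equivalent commands, mirroring the front-end ones (the bond name backend_bond is just an illustrative choice):

# ovs-vsctl add-br backend_if
# ovs-vsctl add-bond backend_if backend_bond eth2 eth3 lacp=active other_config:lacp-time=slow bond_mode=balance-tcp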

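The eth4 'ops' stanza is not reproduced here; it was just a regular static interface, roughly along these lines (the address below is a placeholder on the ops VLAN, not our actual value):

auto eth4
iface eth4 inet static
        address 172.3.20.36
        netmask 255.255.255.0
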
At this point, we wanted to be able to ssh from a remote location into the HAProxy box using any of the 3 IP addresses associated with frontend_if, backend_if and eth4. The problem was that with regular Linux routing there is a single default gateway, which in our case was on the same VLAN as frontend_if (172.3.15.1), so reply traffic for connections arriving on the other interfaces would be sent back out the wrong way.

The solution was to use the iproute2 tools and their policy routing support: you can define additional routing tables, each with its own default gateway, and add rules that select a table based on the source or destination address of the traffic (this blog post on iproute2 commands proved to be very useful).

To configure a default gateway for each of the 2 interfaces defined above (frontend_if and backend_if), we added the following commands to /etc/rc.local so that they run each time the box is rebooted:


echo "1 admin" > /etc/iproute2/rt_tables
ip route add 172.3.50.0/24 dev backend_if src 172.3.50.36 table admin
ip route add default via 172.3.50.1 dev backend_if table admin
ip rule add from 172.3.50.36/32 table admin
ip rule add to 172.3.50.36/32 table admin

echo "2 admin2" >> /etc/iproute2/rt_tables
ip route add 172.3.15.0/24 dev frontend_if src 172.3.15.36 table admin2
ip route add default via 172.3.15.1 dev frontend_if table admin2
ip rule add from 172.3.15.36/32 table admin2
ip rule add to 172.3.15.36/32 table admin2
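
After a reboot, the policy rules and the per-table routes can be checked with the usual iproute2 commands, for example:

# ip rule show
# ip route show table admin
# ip route show table admin2
# ip route get 8.8.8.8 from 172.3.50.36

The last command shows which table, interface and gateway the kernel would use for traffic sourced from the back-end address (8.8.8.8 is just an arbitrary external destination here).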


This was working great, but there was another aspect to this setup: we needed to get keepalived working between the 2 HAProxy boxes. Running keepalived introduces a floating IP, a virtual address that the keepalived process assigns to whichever box is currently the master. In our case, this floating IP (172.3.15.38) was attached to the frontend_if interface, which meant we had to add another iproute2 stanza:


echo "3 admin3" >> /etc/iproute2/rt_tables
ip route add 172.3.15.0/24 dev frontend_if src 172.3.15.38 table admin3
ip route add default via 172.3.15.1 dev frontend_if table admin3
ip rule add from 172.3.15.38/32 table admin3
ip rule add to 172.3.15.38/32 table admin3

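For reference, the floating IP ends up in keepalived's configuration as a VRRP virtual address on frontend_if. A minimal sketch of such a vrrp_instance block follows; the instance name, state, virtual_router_id and priority are placeholders, not our actual settings:

vrrp_instance VI_FRONTEND {
    state MASTER
    interface frontend_if
    virtual_router_id 51
    priority 101
    advert_int 1
    virtual_ipaddress {
        172.3.15.38/24 dev frontend_if
    }
}
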
I'll stop here for now. Stay tuned for part 2, where you can read about our adventures trying to get keepalived to work as we wanted it to. Hint: it involved getting rid of iproute2 policies.
