Monday, March 15, 2010

Automated deployments with Fabric - tips and tricks

Fabric has become my automated deployment tool of choice. You can find plenty of good blog posts about it if you google around. I touched on it in two earlier posts, 'Automated deployments with Puppet and Fabric' and 'Automated deployment systems: push vs. pull'. Here are some more tips and tricks extracted from my recent work with Fabric.

If it's not in a Fabric fabfile, it's not deployable

This is a rule I'm trying to stick to. All deployments need to be automated. No ad-hoc, one-off deployments allowed. If you do allow them, they can quickly snowball into an unmaintainable, unreproducible mess.

Update (via njl): Only deploy files that are checked out of source control

So true. I should have included this when I wrote the post. Make sure your fabfiles, as well as all the files you deploy remotely, are indeed what you expect them to be. It's best to keep them in a source control repository, and to be disciplined about checking them in every time you update them.

Along the same lines: when deploying our Tornado-based Web services here at Evite, we make sure our continuous integration system (we use Hudson) builds the eggs and saves them in a central location, and we then install those eggs from that location with Fabric.
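A minimal sketch of that flow might look like the fabfile fragment below. The Hudson URL and egg name are made-up placeholders, not our actual setup, and I've kept the Fabric imports inside the task so the small helper function stays importable on its own:

```python
def egg_name(url):
    """Extract the egg file name from its URL,
    e.g. 'http://ci/eggs/svc-1.0.egg' -> 'svc-1.0.egg'."""
    return url.rsplit('/', 1)[1]

def install_egg(egg_url='http://hudson.example.com/eggs/myservice-1.0-py2.6.egg'):
    # Fabric imports kept local so egg_name() is usable without Fabric installed
    from fabric.api import run, sudo
    # fetch the CI-built egg onto the remote box (lands in the user's home dir),
    # then install it with easy_install
    run('wget -N %s' % egg_url)
    sudo('easy_install ~/%s' % egg_name(egg_url))
```

The point is that the egg always comes from the central location the CI system publishes to, never from a developer's working copy.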

Common Fabric operations

I find myself using only a small number of Fabric's functions. Mainly these three operations:

1) copy files to a remote server (with the 'put' function)
2) run a command as a regular user on a remote server (with the 'run' function)
3) run a command as root on a remote server (with the 'sudo' function)

I also occasionally use the 'sed' function, which lets me, for example, comment or uncomment lines in an Nginx configuration file so I can take servers in and out of the load balancer (for commenting lines out, you can also use the 'comment' function).
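For example, taking a backend in and out of an Nginx upstream block could look like the sketch below, using 'comment' and 'uncomment' from fabric.contrib.files. The config path, the service command and the "server appNN" line format are assumptions about your Nginx setup, and the Fabric imports are kept inside the functions so the regex can be checked locally:

```python
import re

# pattern for an active (uncommented) backend line in an
# Nginx 'upstream' block, e.g. "    server app01:8000;"
BACKEND_RE = r'^\s*server %s\b'

def out_of_lb(backend):
    from fabric.api import sudo
    from fabric.contrib.files import comment
    # comment() prefixes the matching lines with '#'
    comment('/usr/local/nginx/conf/nginx.conf', BACKEND_RE % backend,
            use_sudo=True)
    sudo('service nginx restart')

def into_lb(backend):
    from fabric.api import sudo
    from fabric.contrib.files import uncomment
    # uncomment() strips the leading '#' from the matching lines
    uncomment('/usr/local/nginx/conf/nginx.conf', r'server %s\b' % backend,
              use_sudo=True)
    sudo('service nginx restart')
```

Anchoring the regex on the backend name keeps you from accidentally commenting out every server line in the upstream block.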

Complete example

Here is a complete example of installing munin-node on a server:
# cat fab_munin.py

from fabric.api import *

# Globals

env.project = 'MUNIN'
env.user = 'root'

# Environments

from environments import *

# Tasks

def install():
    install_prereqs()
    copy_munin_files()
    start()

def install_prereqs():
    sudo('apt-get -y install munin-node chkconfig')
    sudo('chkconfig munin-node on')

def copy_munin_files():
    run('mkdir -p munin')
    put('munin/munin-node.conf', 'munin')
    put('munin/munin-node', 'munin')
    sudo('cp munin/munin-node.conf /etc/munin/')
    sudo('cp munin/munin-node /etc/munin/plugin-conf.d')

def start():
    sudo('service munin-node start')

def stop():
    sudo('service munin-node stop')

def restart():
    sudo('service munin-node restart')



The main function in this fabfile is 'install'. Inside it, I call three other functions, each of which can also be called on its own: 'install_prereqs' installs the munin-node package via apt-get, 'copy_munin_files' copies various configuration files to the remote server under ~/munin and then sudo-copies them to /etc/munin, and finally 'start' starts up the munin-node service. I find that this is a fairly common pattern for my deployments: install base packages, copy over configuration files, start or restart the service.

Note that I'm importing a module called 'environments'. That's where I define and name lists of hosts that I want to target during the deployment. For example, your environments.py file could contain the following:

from fabric.api import *

def stgapp():
    """App staging environment."""
    env.hosts = [
        'stgapp01',
        'stgapp02',
    ]

def stgdb():
    """DB staging environment."""
    env.hosts = [
        'stgdb01',
        'stgdb02',
    ]

def prdapp():
    """App production environment."""
    env.hosts = [
        'prdapp01',
        'prdapp02',
    ]

def prddb():
    """DB production environment."""
    env.hosts = [
        'prddb01',
        'prddb02',
    ]

To install munin-node on all the staging app servers, you would run:
fab -f fab_munin stgapp install
If you wanted to install munin-node on a server which is not part of any env.hosts lists defined in environments.py, you would just call:
fab -f fab_munin install
...and the fab utility will prompt you for a host name or IP address on which to run the 'install' function. Simple and powerful.

Dealing with configurations for different environment types

A common issue I've seen is that different environment types (test, staging, production) need different configurations, both for services such as Nginx or Apache and for my own applications. One solution I found is to prefix my environment names with their types (tst for testing, stg for staging, prd for production), and to add a suffix to the configuration files: for example, nginx.conf.tst for the testing environment, nginx.conf.stg for staging and nginx.conf.prd for production.

Then, in my fabfiles, I automatically send the correct configuration file over, based on the environment I specify on the command line. For example, I have this function in my fab_nginx.py fabfile:

def deploy_config():
    host_prefix = env.host[:3]
    env.config_file = 'nginx.conf.%s' % host_prefix
    put('nginx/%(config_file)s' % env, '~')
    sudo('mv ~/%(config_file)s /usr/local/nginx/conf/nginx.conf' % env)
    with settings(warn_only=True):
        sudo('service nginx restart')

When I run:
fab -f fab_nginx tstapp deploy_config
...this will deploy the configuration on the two hosts defined inside the tstapp list -- tstapp01 and tstapp02. The deploy_config function captures 'tst' as the first 3 characters of the current host name, then operates on the nginx.conf.tst file, sending it to the remote server as /usr/local/nginx/conf/nginx.conf.

I find that it is good practice not to have local files called nginx.conf, because that increases the risk of overwriting the wrong file in the wrong environment. Instead, I keep three files around -- nginx.conf.tst, nginx.conf.stg and nginx.conf.prd -- and copy them to the remote servers accordingly.
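The naming convention is simple enough to capture in a tiny helper, something like the sketch below (the prefix list is just the tst/stg/prd convention from this post; adjust it to your own naming):

```python
VALID_PREFIXES = ('tst', 'stg', 'prd')

def config_file_for(host, base='nginx.conf'):
    """Map a host such as 'stgapp01' to its per-environment
    config file name, e.g. 'nginx.conf.stg'."""
    prefix = host[:3]
    if prefix not in VALID_PREFIXES:
        raise ValueError("can't tell the environment from host %r" % host)
    return '%s.%s' % (base, prefix)
```

Failing loudly on an unrecognized host prefix is the point: better to abort the deploy than to push a staging config to a production box.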

Defining short functions inside a fabfile

My fabfiles are composed of short functions that can be called from a longer function or on their own. This gives me the flexibility to, for example, deploy only configuration files, install only base packages, or just restart a service.

Automated deployments ensure repeatability

I may repeat myself here, but the point is worth rehashing: in my view, the best feature of an automated deployment system is that it transforms your deployments from an ad-hoc, off-the-cuff procedure into a repeatable and maintainable process that both developer and operations teams can use (#devops anybody?). An added benefit is that you get good documentation for installing your application and its requirements for free. Just copy and paste your fabfiles (or Puppet manifests) into a wiki page and there you have it.

18 comments:

njl said...

Copying files to the server from my local machine has always felt dangerous to me. To your "If it's not in a Fabric fabfile, it's not deployable" you should add, "Only deploy files that are checked out of source control."

Grig Gheorghiu said...

Thanks, njl! I updated my post accordingly.

jijnes said...

We're leveraging Fabric in the same way. We have a fabfile deploy script that will pull a Java .war file and deploy it out to our environments. We also make sure the fabfile script is in place before an application is deployed to a common development environment. Loving Fabric!

njl said...

I actually have a "deployment" user with read-only access to my version control system. My fab file does a checkout of the head to the server file system, then links and copies as needed.

It's great looking at other folks' fab files. Mine look so pristine I keep feeling like I'm missing something ;)

Andrew Watts said...

Nice article, I was surprised to see no mention of Fabric config files or use of upload_template. Have you tried config files as an alternative to leveraging tasks to define hosts, and/or using config files plus templates instead of *.tst, *.stg, or *.prd files?

If you've tried either approach, what were the downsides you encountered?

Will said...

Nice article. The only thing I've found missing from Fabric that I'd love to see has been task dependencies ... like Paver or Rake. My solution has been to do what you've done (call each dependent function in the parent) but I use the "runs_once" decorator to ensure things don't get done repeatedly.

Grig Gheorghiu said...

@njl it would be nice if we had a repository of sample fabfiles. I'll get in touch with @bitprophet, the fabric maintainer, about it.

Grig Gheorghiu said...

Andrew -- I haven't tried either of the approaches you mention, but I'll look into it. Which one do you prefer?

Grig Gheorghiu said...

Will -- you're right, it would be nice to be able to specify dependencies. The Fabric maintainer, @bitprophet, told me he has a long list of features that he'll add to it, so maybe it's already on his list.

Will said...

With a little bit of putzing I found you can actually import the dependency tracker from Paver into fabfile.py files. I've written a short post about it on my blog here.

Grig Gheorghiu said...

Hey Will! That's great. I'll experiment with it a bit. It's true, it's kind of strange that not many people are using Paver. I've heard good things about it but, like many other people, never actually used it... I'll check it out.

Andrew Watts said...

Hi Grig,

I use both in a fabfile for installing django on fresh ubuntu installations. For example, here is a task to configure nginx with fastcgi.

http://github.com/andrewwatts/.../fabfile.py#L160

The context is pulled out of the environment file and then upload_template will render the template when uploading to the server.

The github repo is ubuntu2django, where you can see examples of the environment file (.fabricrc.sample) and the templates in the etc directory (for the task i mentioned the template is etc/nginx/sites-available/site).

I was wondering about maintaining individual environment files for test, staging and production and defining the config file (eg: nginx.conf) as a template, instead of your current approach of maintaining individual config files for each environment.

I was just curious if you had tried it and what issues you had encountered.

Grig Gheorghiu said...

Andrew -- thanks a lot for the pointers to your fabfiles. As I mentioned in a comment above to njl, I think it would be great to create a repository of sample fabfiles (and associated templates, config. files etc.). I'll take a more in-depth look at your files, it's interesting how you use templates.

Grig

Kami said...

Yep, Fabric is pretty nice and useful.

My fabfile is located on http://github.com/Kami/django-deployment-script/blob/master/fabfile.py

I know it's far from perfect, but it serves me pretty well.

I also use two more fabfiles on pretty much regular basis.

One of those has among other things ability to apply specified patches after the deployment and can automatically put revision: git_revision_id or revision: latest_git_tag_name text after the deployment in the specified HTML / template file.

Other one has the ability to deploy the Django application on nginx or Tornado.

I need to take some time in the future to clean up all the fabfiles I use, make a single, somewhat more generic one, and put it on github :)

pauldwaite said...

> Just copy and paste your fabfiles (or Puppet manifests) into a wiki page and there you have it.

Surely you should have a Fabric file for that?

Grig Gheorghiu said...

Kami -- thanks, the more fabfile examples we have, the better off we are.

Grig Gheorghiu said...

Paul -- hehe, you're right.

carinadigital said...

You should look into fabric roles instead of specifying hosts in your own functions.