If it's not in a Fabric fabfile, it's not deployable
This is a rule I'm trying to stick to. All deployments need to be automated. No ad-hoc, one-off deployments allowed. If you do allow them, they can quickly snowball into an unmaintainable, unreproducible mess.
Update (via njl): Only deploy files that are checked out of source control
So true. I should have included this when I wrote the post. Make sure your fabfiles, as well as all the files you deploy remotely, are indeed what you expect them to be. It's best to keep them in a source control repository, and to be disciplined about checking them in every time you update them.
Along the same lines: when deploying our Tornado-based Web services here at Evite, we make sure our continuous integration system (we use Hudson) builds the eggs and saves them in a central location, and we install those eggs with Fabric from the central location.
Common Fabric operations
I find myself using only a small number of Fabric's functions. Mainly, I use these three operations:
1) copy files to a remote server (with the 'put' function)
2) run a command as a regular user on a remote server (with the 'run' function)
3) run a command as root on a remote server (with the 'sudo' function)
I also sometimes use the 'sed' function, which lets me, for example, comment or uncomment lines in an Nginx configuration file so I can take servers in and out of the load balancer (for commenting lines out, you can also use the 'comment' function).
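Fabric's 'sed' and 'comment' functions rewrite matching lines on the remote host via a remote sed invocation. The effect of commenting a server out of an upstream block can be sketched locally in plain Python (illustrative only -- the config lines and pattern are made up, and this is not Fabric's implementation):

```python
import re

def comment_lines(text, pattern):
    """Prefix every line matching `pattern` with '#',
    similar in effect to Fabric's comment() function."""
    out = []
    for line in text.splitlines():
        if re.search(pattern, line):
            line = '#' + line
        out.append(line)
    return '\n'.join(out)

upstream = "server app01:8000;\nserver app02:8000;"
# Take app02 out of the load balancer by commenting out its line:
print(comment_lines(upstream, r'app02'))
```

Uncommenting is the inverse operation: strip the leading '#' from matching lines before restarting Nginx.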
Here is a complete example of installing munin-node on a server:
# cat fab_munin.py

from fabric.api import *
env.user = 'root'
from environments import *

def install():
    install_prereqs()
    copy_munin_files()
    start()

def install_prereqs():
    sudo("apt-get -y install munin-node chkconfig")
    sudo("chkconfig munin-node on")

def copy_munin_files():
    run('mkdir -p munin')
    put('munin/munin-node.conf', 'munin')
    put('munin/munin-node', 'munin')
    sudo('cp munin/munin-node.conf /etc/munin/')
    sudo('cp munin/munin-node /etc/munin/plugin-conf.d')

def start():
    sudo('service munin-node start')

def stop():
    sudo('service munin-node stop')

def restart():
    sudo('service munin-node restart')
The main function in this fabfile is 'install'. Inside it, I call 3 other functions which can also be called on their own: 'install_prereqs' installs the munin-node package via apt-get, 'copy_munin_files' copies various configuration files to the remote server under ~/munin, then sudo copies them to /etc/munin, and finally 'start' starts up the munin-node service. I find that this is a fairly common pattern for my deployments: install base packages, copy over configuration files, start or restart the service.
Note that I'm importing a module called 'environments'. That's where I define and name the lists of hosts that I want to target during the deployment. For example, your environments.py file could contain the following:

from fabric.api import *

def stgapp():
    """App staging environment."""
    env.hosts = [
        # staging app server host names go here
    ]

def stgdb():
    """DB staging environment."""
    env.hosts = [
        # staging DB server host names go here
    ]

def prdapp():
    """App production environment."""
    env.hosts = [
        # production app server host names go here
    ]

def prddb():
    """DB production environment."""
    env.hosts = [
        # production DB server host names go here
    ]

To install munin-node on all the staging app servers, you would run:

fab -f fab_munin stgapp install

If you wanted to install munin-node on a server which is not part of any env.hosts list defined in environments.py, you would just call:
fab -f fab_munin install

...and the fab utility will ask you for a host name or IP on which to run the 'install' function. Simple and powerful.
Dealing with configurations for different environment types
A common issue I've seen is that different environment types (test, staging, production) need different configurations, both for services such as Nginx or Apache and for my own applications. One solution I found is to prefix my environment names with their types (tst for testing, stg for staging and prd for production), and to add a suffix to the configuration files: for example nginx.conf.tst for the testing environment, nginx.conf.stg for staging and nginx.conf.prd for production.
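This naming convention can be expressed as a tiny helper (a sketch, not part of my fabfiles; it assumes the 3-letter environment prefixes tst/stg/prd described above):

```python
def config_file_for(host, service='nginx'):
    """Map a host name like 'tstapp01' to its per-environment
    config file, e.g. 'nginx.conf.tst'. Assumes the environment
    type is the first 3 characters of the host name."""
    env_prefix = host[:3]
    return '%s.conf.%s' % (service, env_prefix)

print(config_file_for('tstapp01'))  # nginx.conf.tst
print(config_file_for('prdapp02'))  # nginx.conf.prd
```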
Then, in my fabfiles, I automatically send the correct configuration file over, based on the environment I specify on the command line. For example, I have this function in my fab_nginx.py fabfile:

def deploy_config():
    host_prefix = env.host[:3]
    env.config_file = 'nginx.conf.%s' % host_prefix
    put('nginx/%(config_file)s' % env, '~')
    sudo('mv ~/%(config_file)s /usr/local/nginx/conf/nginx.conf' % env)
    sudo('service nginx restart')

When I run:

fab -f fab_nginx tstapp deploy_config

...this will deploy the configuration on the 2 hosts defined inside the tstapp list -- tstapp01 and tstapp02. The deploy_config function will capture 'tst' as the first 3 characters of the current host name, and it will then operate on the nginx.conf.tst file, sending it to the remote server as /usr/local/nginx/conf/nginx.conf.
I find that it is a good practice to not have local files called nginx.conf, because the risk of overwriting the wrong file in the wrong environment is increased. Instead, I keep 3 files around -- nginx.conf.tst, nginx.conf.stg and nginx.conf.prd -- and I copy them to remote servers accordingly.
Defining short functions inside a fabfile
My fabfiles are composed of short functions that can be called inside a longer function, or on their own. This gives me the flexibility to only deploy configuration files for example, or only install base packages, or only restart a service.
Automated deployments ensure repeatability
I may repeat myself here, but it is worth rehashing this point: in my view, the best feature of an automated deployment system is that it transforms your deployments from an ad-hoc, off-the-cuff procedure into a repeatable and maintainable process that both developer and operations teams can use (#devops anybody?). An added benefit is that you get good documentation for installing your application and its requirements for free. Just copy and paste your fabfiles (or Puppet manifests) into a wiki page and there you have it.
Copying files to the server from my local machine has always felt dangerous to me. To your "If it's not in a Fabric fabfile, it's not deployable" you should add, "Only deploy files that are checked out of source control."
Thanks, njl! I updated my post accordingly.
We're leveraging fabric in the same way. We have a fabfile deploy script that will pull a Java .war file and deploy it out to our environments. We also try to make sure that before an application is deployed to a common development environment, the fabfile script needs to be in place. Loving Fabric!
I actually have a "deployment" user with read-only access to my version control system. My fab file does a checkout of the head to the server file system, then links and copies as needed.
It's great looking at other folks' fab files. Mine look so pristine I keep feeling like I'm missing something ;)
Nice article, I was surprised to see no mention of Fabric config files or use of upload_template. Have you tried config files as an alternative to defining hosts via tasks, and/or config files plus templates instead of *.tst, *.stg, or *.prd files?
If you've tried either approach, what were the downsides you encountered?
Nice article. The only thing I've found missing from Fabric that I'd love to see has been task dependencies ... like Paver or Rake. My solution has been to do what you've done (call each dependent function in the parent) but I use the "runs_once" decorator to ensure things don't get done repeatedly.
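Fabric does ship a 'runs_once' decorator for this; a minimal pure-Python equivalent shows the idea (a sketch of the behavior, not Fabric's actual implementation):

```python
import functools

def runs_once(func):
    """Run func on the first call only; later calls
    return the cached result without re-running it."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not hasattr(wrapper, 'return_value'):
            wrapper.return_value = func(*args, **kwargs)
        return wrapper.return_value
    return wrapper

calls = []

@runs_once
def install_prereqs():
    calls.append(1)  # side effect happens only once
    return 'done'

install_prereqs()
install_prereqs()
print(len(calls))  # 1
```

A parent task can then call install_prereqs() freely from several dependent tasks without re-running apt-get each time.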
@njl it would be nice if we had a repository of sample fabfiles. I'll get in touch with @bitprophet, the fabric maintainer, about it.
Andrew -- I haven't tried either of the approaches you mention, but I'll look into it. Which one do you prefer?
Will -- you're right, it would be nice to be able to specify dependencies. The Fabric maintainer, @bitprophet, told me he has a long list of features that he'll add to it, so maybe it's already on his list.
With a little bit of putzing I found you can actually import the dependency tracker from Paver into fabfile.py files. I've written a short post about it on my blog here.
Hey Will! That's great. I'll experiment with it a bit. It's true, it's kind of strange that not too many people are using Paver. I've heard good things about it, but like many other people I never actually used it... I'll check it out.
I use both in a fabfile for installing django on fresh ubuntu installations. For example, here is a task to configure nginx with fastcgi.
The context is pulled out of the environment file and then upload_template will render the template when uploading to the server.
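For reference, the rendering step of upload_template amounts to interpolating a context dict into the template text before the result is uploaded (a local sketch; by default Fabric uses %-style interpolation, with Jinja2 as an option -- the template lines and context values here are made up):

```python
# Hypothetical nginx site template, as upload_template would read it
# from disk; %(name)s placeholders get filled from the context dict.
template = (
    "server {\n"
    "    listen %(port)s;\n"
    "    server_name %(server_name)s;\n"
    "}\n"
)
context = {'port': 80, 'server_name': 'example.com'}

rendered = template % context
print(rendered)
```

The same template can then serve test, staging and production by swapping in a per-environment context, instead of keeping three nearly identical config files.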
The GitHub repo is ubuntu2django, where you can see examples of the environment file (.fabricrc.sample) and the templates in the etc directory (for the task I mentioned, the template is etc/nginx/sites-available/site).
I was wondering about maintaining individual environment files for test, staging and production and defining the config file (eg: nginx.conf) as a template, instead of your current approach of maintaining individual config files for each environment.
I was just curious if you had tried it and what issues you had encountered.
Andrew -- thanks a lot for the pointers to your fabfiles. As I mentioned in a comment above to njl, I think it would be great to create a repository of sample fabfiles (and associated templates, config. files etc.). I'll take a more in-depth look at your files, it's interesting how you use templates.
> Just copy and paste your fabfiles (or Puppet manifests) into a wiki page and there you have it.
Surely you should have a Fabric file for that?
Kami -- thanks, the more fabfile examples we have, the better off we are.
Paul -- hehe, you're right.
You should look into fabric roles instead of specifying hosts in your own functions.
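Fabric's roles feature declares host groups in env.roledefs, a dict mapping role names to host lists, and tasks opt in with the @roles decorator. The host-resolution idea can be sketched without Fabric itself (role and host names here are hypothetical):

```python
# env.roledefs maps role names to host lists; a task decorated with
# @roles('stgapp') runs once per host in that list. The lookup itself
# is just a dict flatten:
roledefs = {
    'stgapp': ['stgapp01', 'stgapp02'],
    'prdapp': ['prdapp01', 'prdapp02'],
}

def hosts_for_roles(*role_names):
    """Collect the hosts for the given roles, in order."""
    hosts = []
    for name in role_names:
        hosts.extend(roledefs[name])
    return hosts

print(hosts_for_roles('stgapp'))  # ['stgapp01', 'stgapp02']
```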
I like the idea of using environments.py. However, I cannot figure out how to override the environments.py with fab -H hostname or just run on a single host without editing the environments.py file.
Do you have any ideas on how I could do this ?