But this is not a post about Puppet -- although I promise I'll blog about that too. This is a post on how to get to the point of using Puppet in an EC2 environment, by automatically configuring EC2 instances as Puppet clients once they're launched.
While the mechanism I'll describe can be achieved by other means, I chose to use the Ubuntu EC2 AMIs provided by alestic. As an aside, if you're thinking about using Ubuntu in EC2, do yourself a favor and read Eric Hammond's blog (which can be found at alestic.com). He has a huge number of amazingly detailed posts related to this topic, and they're all worth your while to read.
Unsurprisingly, I chose a mechanism provided by the alestic AMIs to bootstrap my EC2 instances -- specifically, passing user-data scripts that will be automatically run on the first boot of the instance. You can obviously also bake this into your own custom AMI, but the alestic AMIs already have this hook baked in, which I LIKE (picture Borat's voice). What's more, Eric kindly provides another way to easily run custom scripts within the main user-data script -- I'm referring to his runurl script, detailed in this blog post. Basically, you point runurl at the URL of another script that you wrote, and runurl will download and run it. You can also pass parameters to runurl, which will in turn be passed to your script.
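To make the idea concrete, here is a minimal sketch of what runurl does -- not Eric's actual script (his is more robust, and the real thing at run.alestic.com/runurl is worth reading), just the gist: fetch the script at a URL and run it with any remaining arguments.

```shell
# Rough sketch of runurl's behavior (assumption: the real script handles
# options and errors more carefully). Fetch a script from a URL and run it,
# passing along any extra arguments.
runurl_sketch() {
    url="$1"; shift
    tmp=$(mktemp) || return 1
    if curl -s -o "$tmp" "$url"; then
        sh "$tmp" "$@"
        rc=$?
    else
        rc=1
    fi
    rm -f "$tmp"
    return $rc
}
```

So something like `runurl ec2web.mycompany.com/install/puppet` would download my install/puppet script from my internal web server and run it (the host and path here are placeholders from my setup).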
Enough verbiage, let's see some examples.
Here is my user-data file, whose file name I am passing along as a parameter when launching my EC2 instances:
#!/bin/bash
cat <<EOL > /etc/hosts
127.0.0.1 localhost.localdomain localhost
10.0.0.1 puppetmaster
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
EOL
wget -qO/usr/bin/runurl run.alestic.com/runurl
chmod 755 /usr/bin/runurl
runurl http://ec2web.mycompany.com/upgrade/apt
runurl http://ec2web.mycompany.com/customize/ssh
runurl http://ec2web.mycompany.com/customize/vim
runurl http://ec2web.mycompany.com/install/puppet
The first thing I do in this script is to add an entry to the /etc/hosts file pointing at the IP address of my puppetmaster server. You can obviously do this with an internal DNS server too, but I've chosen not to maintain my own internal DNS servers in EC2 for now.
My script then retrieves the runurl utility from alestic.com, puts it in /usr/bin and chmod's it to 755. Then the script uses runurl and points it at various other scripts I wrote, all hosted on an internal web server.
For example, the contents of upgrade/apt are:
apt-get update
apt-get -y upgrade
apt-get -y autoremove
For ssh customizations, my script downloads a specific .ssh/authorized_keys file, so I can ssh to the new instance using certain ssh keys.
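I'm not showing that script here, but a sketch of what customize/ssh might look like (the URL is made up, modeled on the other config URLs below; I've written it as a function taking the target directory as a parameter purely so it's easy to try outside an instance):

```shell
# Hypothetical sketch of customize/ssh: fetch an authorized_keys file from
# the internal web server and install it with the right permissions.
install_keys() {
    home_dir="$1"   # e.g. /root
    keys_url="$2"   # e.g. http://ec2web.mycompany.com/configs/ssh/authorized_keys
    mkdir -p "$home_dir/.ssh"
    chmod 700 "$home_dir/.ssh"
    curl -s -o "$home_dir/.ssh/authorized_keys" "$keys_url" || return 1
    chmod 600 "$home_dir/.ssh/authorized_keys"
}
# On the instance it would be called as:
#   install_keys /root http://ec2web.mycompany.com/configs/ssh/authorized_keys
```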
To install and customize vim, I have customize/vim:
apt-get -y install vim
wget -qO/root/.vimrc http://ec2web.mycompany.com/configs/os/.vimrc
echo 'alias vi=vim' >> /root/.bashrc
...where .vimrc is a customized file that I keep under the document root of the same web server where I keep my scripts.
Finally, install/puppet looks like this:
apt-get -y install puppet
wget -qO/etc/puppet/puppetd.conf http://ec2web.mycompany.com/configs/puppet/puppetd.conf
/etc/init.d/puppet restart
Here I am installing puppet via apt-get, then I'm downloading a custom puppetd.conf configuration, which points at puppetmaster as its server name (instead of the default, which is puppet). Finally, I restart puppet so that the new configuration takes effect.
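For reference, a minimal sketch of what that puppetd.conf might contain -- this is the old INI-style layout that puppetd read in pre-0.25 Puppet, and I'm writing the section and setting names from memory, so check against your Puppet version:

```
[puppetd]
server = puppetmaster
```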
Note that I want to keep these scripts to the bare minimum that allows me to:
1) ssh into the instance in case anything goes wrong
2) install and configure puppet so the instance can talk to the puppetmaster
The actual package and application installations and customizations on my newly launched image will be done through puppet, by associating the instance hostname with a node defined on the puppetmaster; I am also adding more entries to /etc/hosts as needed, using puppet-specific mechanisms such as the 'host' type (as promised, a blog post on this is forthcoming...)
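As a teaser, a 'host' resource in a Puppet manifest looks roughly like this (the hostname and IP are made up for illustration):

```
host { 'dbserver':
  ensure => present,
  ip     => '10.1.2.3',
}
```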
Note that you need to make sure you have good security for the web server instance which serves your scripts to runurl; Eric Hammond talks about using S3 for that, but it's too complicated IMO (you need to sign URLs and set expiration dates on them, etc.). In my case, I preferred to use an internal Apache instance with basic HTTP authentication, and to only allow traffic on port 80 from certain security groups within EC2 (my Apache server doubles as the puppetmaster, BTW).
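For what it's worth, the Apache side of this is just standard basic auth -- a sketch of the relevant 2.2-style config (the directory path, realm name, and password file are made up), with the port-80 restriction itself handled by the EC2 security groups rather than by Apache:

```
<Directory /var/www/scripts>
    AuthType Basic
    AuthName "EC2 bootstrap scripts"
    AuthUserFile /etc/apache2/htpasswd-scripts
    Require valid-user
</Directory>
```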
Hey Grig, you wouldn't know it by the name or the docs but bcfg2 is a pretty nice alternative to Puppet. I saw a talk on it which enlightened me to why it's so nice: http://chipy.blip.tv/file/2558211/ Unlike other tools bcfg2 will "discover" configuration on remote servers and pull those definitions into the central config so that you can clone existing systems and also cope with manual changes (like apt-get, etc). As a bonus it's written in Python and seems to be pretty mature since it was written to manage the clusters at Argonne National Laboratory.
Hey Kumar -- thanks for the tip. I was aware of bcfg2, but a friend who tried it for a fairly large infrastructure rollout told me he really tried hard to use it, only to give up because he couldn't bend it to his will. Plus its config files are XML -- blech. But I'll take another look at it.
Interesting. Admittedly, I haven't tried it myself but the two way reconciliation is something I could use for an upcoming project. They mentioned something in the talk about doing away with the XML config files. Not sure if that has happened yet though. I think you primarily configure stuff through a Django front end but I'm not sure.
Hey Grig, you might want to check out Chef: http://wiki.opscode.com/display/chef/Home
We use it to manage our ec2 infrastructure, and it's super-solid.
I found it interesting that you would directly compare a command dispatching tool like Fabric to a configuration management tool like Puppet.
We did a whitepaper on how these different types of tools can be used as an open source toolchain to achieve fully automated provisioning (going from bare metal all the way to running multitier applications)
The whitepaper is written about a specific ControlTier and Puppet implementation but the toolchain and the general concepts hold for tools like Chef, Fabric, Cobbler, Capistrano, etc.
Here's the link:
Damon -- I guess my intention wasn't clear. I didn't mean to compare Fabric to Puppet as an apples-to-apples comparison. I was merely wishing that a Puppet-like configuration management tool existed in Python. I know about bcfg2, I haven't played with it though.
Thanks for the pointer to your white paper.
Ah got it. You know, that does touch on an excellent subject... when you build a toolchain of management tools, what kind of overhead are you adding because of differing languages or configuration methods? It begs the question: is it easier to bend (or some would say abuse) one tool to do most of everything that you want, or should you use a toolchain that lets each tool do what it's supposed to be good at?
Damon -- I know where you're going with this ;-) I briefly checked out the ControlTier wiki and it looks like a well thought-out toolchain. However, the fact that it's all XML puts me off. I think that's the downside of tools that aim to be language-neutral. They end up using XML as their configuration language, and then they try to bend XML into a 'real' programming language, thus ending up reinventing the wheel badly. I'd rather use a language I like (Python in my case) as the glue around the various tools in my toolchain.
and therein lies the rub... ruby folks will have one tool, python folks another, Java another, Microsoft users will go another direction. Fragmenting both public tool efforts and private tool efforts (especially in mixed environments) can't be a good thing.
Regarding XML: Grig, XML (despite its name) isn't really a language (like e.g. Java, Python or Ruby) but only a syntax. As such, it has enormous advantages over other syntaxes - in particular that you can easily and consistently parse it in Ruby, Java, Python or indeed any language or runtime environment you care to name. It also has a whole galaxy of associated stuff: validators, query languages, modelling tools, auto-completing syntax highlighting editors, etc. This is why XML is objectively a better choice as the syntax for a configuration language. I know many people don't like XML for whatever reason, but as a standard syntax you can't go past it, frankly.
It seems to me that configuration management should really be declarative in any case. You should not need a "real" programming language (if I understand what you mean correctly). In fact there's a lot to be said for making it conceptually simple (as well as syntactically standard).
Con -- what you're saying makes a lot of sense theoretically, but as we all know "In theory there is no difference between theory and practice, but in practice there is". My point is that XML starts out as a 'simple' standard syntactically-verifiable configuration syntax, but in most cases it mutates into a monster that tries to do logic and does it badly.
If Puppet had tried to use XML for its manifests, I'd never have used it, for example. It's refreshing to me that its author knew better than to do that.
You do not USE XML. Bcfg2 stores the results of Python scripts AS XML. The XML is then how a configuration is disseminated. Bcfg2 is firmly Python.
We have developed a server configuration management tool in Python called metaconfig; if you're interested, I can help you get started.
I think that it is an interesting discussion on what language to use to describe your environment, but ultimately whichever language you choose will need to have comprehensive API hooks for all of the AWS services, such as being able to attach EBS volumes, or register with an elastic load balancer, etc. So far, I'm having trouble finding these modules for doing more than simply starting instances. Do they exist? Or am I expected to call command line tools in order to achieve these tasks?
So far bash scripting has served my needs, ala http://www.practicalclouds.com but I'd like to do things better! :)
I tend to interact with AWS services from my management node, which is the node I use to initially launch a new instance and bootstrap it with Chef (or Puppet, but I use Chef these days). I have the AWS credentials stored on the management node only, for security purposes, and I am able to do everything I need there at instance launch time -- things similar to what you mentioned, i.e. registering with an ELB, associating EBS volumes etc.
So, I've got an actual question about using puppet on EC2. In your post you said:
The actual package and application installations and customizations on my newly launched image will be done through puppet, by associating the instance hostname with a node that is defined on the puppetmaster;
My question is, how do I do the association in the puppetmaster, since I won't know the hostname until the instance starts. I'd like it to come up an be configured automatically. Can I use the userdata to provide some alternate association mechanism? It seems like this problem must have come up before.
I've got my own home-grown scheme, but I'd really rather not try to re-invent puppet.
Hi Malcolm -- I'm not using Puppet anymore, but from what I remember, I was manually adding a node on the puppetmaster corresponding to a hostname I had determined in advance, then launching an EC2 instance and setting that hostname via the user-data bootstrap mechanism.
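For what it's worth, a sketch of how the two ends line up (the node name, class, and the file-path parameter are all made up for illustration; the parameter just makes the function easy to try outside an instance):

```shell
# Hypothetical sketch: the node is defined ahead of time on the puppetmaster,
# and the user-data script sets the matching hostname at first boot.
set_node_hostname() {
    name="$1"
    target="${2:-/etc/hostname}"
    echo "$name" > "$target"
    # on a real instance you'd also run: hostname "$name" (needs root)
}
# On the puppetmaster, site.pp would have a matching definition, e.g.:
#   node 'web01' { include webserver }
# and the user-data script would call:
#   set_node_hostname web01
```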
Hi there, I've been working on a script which will build a pretty good puppet master or client, which can be run when your instance boots up. It's Fedora (16) specific atm but could be adapted for others.
I'm hoping that this will make getting it all up and running that bit easier than at present. Let me know what you think.