I've been doing a lot of work with EC2 instances lately, and I wrote some simple wrappers on top of the
EC2 API tools provided by Amazon. These tools are Java-based, and I intend to rewrite my utility scripts in Python using the
boto library, but for now I'm taking the easy way out by using what Amazon already provides.
After downloading and unpacking the EC2 API tools, you need to set the following environment variables in your .bash_profile file:
export EC2_HOME=/path/to/where/you/unpacked/the/tools/api
export EC2_PRIVATE_KEY=/path/to/pem/file/containing/your/ec2/private/key
export EC2_CERT=/path/to/pem/file/containing/your/ec2/cert
You also need to add $EC2_HOME/bin to your PATH, so the command-line tools can be found by your scripts.
At this point, you should be ready to run, for example:
# ec2-describe-images -o amazon
which lists the AMIs available from Amazon.
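That listing is long, so it helps to filter it down -- say, to just the Fedora images. Here's a small sketch (not part of my scripts); it assumes the usual tab-separated output, with IMAGE lines carrying the AMI ID in the second field and the manifest path in the third, so check it against your own output first:

```shell
# Filter an ec2-describe-images listing down to the AMI IDs whose
# manifest path matches a pattern. Assumes tab-separated IMAGE lines
# with the AMI ID in field 2 and the manifest location in field 3.
filter_amis() {
    pattern=$1
    awk -F'\t' -v pat="$pattern" '$1 == "IMAGE" && $3 ~ pat { print $2 }'
}

# Usage: ec2-describe-images -o amazon | filter_amis fedora-8
```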
If you manage more than a handful of EC2 AMIs (Amazon Machine Images), it quickly becomes hard to keep track of them. When you look at them, for example using the Firefox
Elasticfox extension, it's very hard to tell which is which. One solution I found to this is to create a separate keypair for each AMI, and give the keypair a name that specifies the purpose of that AMI (for example mysite-db01). This way, you can eyeball the list of AMIs in Elasticfox and make sense of them.
So the very first step for me in launching and deploying a new AMI is to create a new keypair, using the ec2-add-keypair API call. Here's what I have, in a script called create_keypair.sh:
# cat create_keypair.sh
#!/bin/bash
KEYNAME=$1
if [ -z "$KEYNAME" ]
then
echo "You must specify a key name"
exit 1
fi
ec2-add-keypair $KEYNAME.keypair > ~/.ssh/$KEYNAME.pem
chmod 600 ~/.ssh/$KEYNAME.pem
Now I have a pem file called $KEYNAME.pem containing my private key, and Amazon has my public key called $KEYNAME.keypair.
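One caveat: depending on the version of the tools, ec2-add-keypair may print a KEYPAIR summary line (name and fingerprint) before the key material, which would then end up at the top of the pem file. If ssh later complains about the key format, a sketch for keeping only the PEM block:

```shell
# Keep only the private key material -- the lines between the PEM
# BEGIN/END markers -- dropping any KEYPAIR summary line that
# ec2-add-keypair may print first.
extract_pem() {
    sed -n '/-----BEGIN/,/-----END/p'
}

# Usage: ec2-add-keypair mysite-db01.keypair | extract_pem > ~/.ssh/mysite-db01.pem
```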
The next step for me is to launch an 'm1.small' instance (the smallest instance you can get from EC2) whose AMI ID I know in advance (it's a 32-bit Fedora Core 8 image from Amazon with an AMI ID of ami-5647a33f). I am also using the key I just created. My script calls the ec2-run-instances API.
# cat launch_ami_small.sh
#!/bin/bash
KEYNAME=$1
if [ -z "$KEYNAME" ]
then
echo "You must specify a key name"
exit 1
fi
# We launch a Fedora Core 8 32 bit AMI from Amazon
ec2-run-instances ami-5647a33f -k $KEYNAME.keypair --instance-type m1.small -z us-east-1a
Note that the script makes some assumptions -- such as the fact that I want my AMI to reside in the us-east-1a availability zone. You can obviously add command-line parameters for the availability zone, and also for the instance type (which I intend to do when I rewrite this in Python).
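As a sketch of what that parameterization might look like in bash (keeping the same defaults as the script above), here's a small function that only builds and prints the command, so you can eyeball it before running it -- the function name and the dry-run approach are mine, not part of the original scripts:

```shell
# Build the ec2-run-instances command line, with the instance type and
# availability zone as optional arguments defaulting to the values
# hardcoded in launch_ami_small.sh. Prints the command instead of
# running it, so it can be reviewed first.
build_launch_cmd() {
    KEYNAME=$1
    TYPE=${2:-m1.small}
    ZONE=${3:-us-east-1a}
    if [ -z "$KEYNAME" ]
    then
        echo "usage: build_launch_cmd <keyname> [instance-type] [zone]" >&2
        return 1
    fi
    echo "ec2-run-instances ami-5647a33f -k $KEYNAME.keypair --instance-type $TYPE -z $ZONE"
}

# Usage: build_launch_cmd mysite-db01 m1.large us-east-1b
```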
Next, I create an EBS volume which I will attach to the AMI I just launched. My create_volume.sh script takes an optional argument which specifies the size in GB of the volume (and otherwise sets it to 50 GB):
# cat create_volume.sh
#!/bin/bash
SIZE=$1
if [ -z "$SIZE" ]
then
SIZE=50
fi
ec2-create-volume -s $SIZE -z us-east-1a
The volume should be created in the same availability zone as the instance you intend to attach it to -- in my case, us-east-1a.
My next step is to attach the volume to the instance I just launched. For this, I need to specify the instance ID and the volume ID -- both values are returned in the output of the calls to ec2-run-instances and ec2-create-volume respectively.
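Instead of copying and pasting, those two values can also be pulled out of the saved output programmatically. A sketch, under the assumption that ec2-run-instances prints a tab-separated INSTANCE line with the instance ID in the second field, and ec2-create-volume prints a VOLUME line likewise (verify the field positions against your own output):

```shell
# Pull the instance ID out of ec2-run-instances output: the INSTANCE
# line, second tab-separated field.
instance_id() {
    awk -F'\t' '$1 == "INSTANCE" { print $2 }'
}

# Same idea for the volume ID in ec2-create-volume output.
volume_id() {
    awk -F'\t' '$1 == "VOLUME" { print $2 }'
}

# Usage:
#   INSTANCE=$(ec2-run-instances ... | instance_id)
#   VOLUME=$(ec2-create-volume -s 50 -z us-east-1a | volume_id)
```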
Here is my script:
# cat attach_volume_to_ami.sh
#!/bin/bash
VOLUME_ID=$1
INSTANCE_ID=$2
if [ -z "$VOLUME_ID" ] || [ -z "$INSTANCE_ID" ]
then
echo "You must specify a volume ID followed by an instance ID"
exit 1
fi
ec2-attach-volume $VOLUME_ID -i $INSTANCE_ID -d /dev/sdh
This attaches the volume I just created to the AMI I launched and makes it available as /dev/sdh.
The next script I use does a lot of stuff. It connects to the new AMI via ssh and performs a series of commands:
* format the EBS volume /dev/sdh as an ext3 file system
* mount /dev/sdh as /var2, and copy the contents of /var to /var2
* move /var to /var.orig, create new /var
* unmount /var2 and re-mount /dev/sdh as /var
* append an entry to /etc/fstab mounting /dev/sdh as /var, so the mount persists across reboots
Before connecting via ssh to the new AMI, I need to know its internal DNS name or IP address. I use
ec2-describe-instances to list all my running AMIs, then I copy and paste the internal DNS name of my newly launched instance (which I can isolate because I know the keypair name it runs with).
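That copy-and-paste step can also be scripted. Here's a sketch that picks the private DNS name out of the ec2-describe-instances output for the instance running with a given keypair. The field positions assumed here (private DNS name in field 5, keypair name in field 7 of the INSTANCE lines) vary across tool versions, so check them against your own output before relying on this:

```shell
# Print the private DNS name of the instance launched with a given
# keypair. ASSUMPTION: INSTANCE lines are tab-separated with the
# private DNS name in $5 and the keypair name in $7 -- verify this
# against your own ec2-describe-instances output.
private_dns_for_key() {
    key=$1
    awk -F'\t' -v key="$key" '$1 == "INSTANCE" && $7 == key { print $5 }'
}

# Usage: ec2-describe-instances | private_dns_for_key mysite-db01.keypair
```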
Here is the script which formats and mounts the new EBS volume:
# cat format_mount_ebs_as_var_on_ami.sh
#!/bin/bash
AMI=$1
KEY=$2
if [ -z "$AMI" ] || [ -z "$KEY" ]
then
echo "You must specify an AMI DNS name or IP followed by a keypair name"
exit 1
fi
CMD='mkdir /var2; mkfs.ext3 /dev/sdh; mount -t ext3 /dev/sdh /var2; \
mv /var/* /var2/; mv /var /var.orig; mkdir /var; umount /var2; \
echo "/dev/sdh /var ext3 defaults 0 0" >>/etc/fstab; mount /var'
ssh -i ~/.ssh/$KEY.pem root@$AMI "$CMD"
The effect is that /var is now mapped to a persistent EBS volume. So if I install MySQL for example, the /var/lib/mysql directory (where the data resides by default in Fedora/CentOS) will be automatically persistent. All this is done without interactively logging in to the new instance, so it can be easily scripted as part of a larger deployment procedure.
That's about it for the bare-bones stuff you have to do. I purposely kept my scripts simple, since I use them more to remember which EC2 API tools I need to run than anything else. I don't do much command-line option parsing or error checking, but they do their job.
If you run scripts similar to what I have, you should have at this point a running AMI with a 50 GB EBS volume mounted as /var. Total running time of all these scripts -- 5 minutes at most.
As soon as I have a nicer Python script which will do all this and more, I'll post it here.