$ cat ec2-launch-instance-api.yml
---
- name: Create a new api EC2 instance
  hosts: localhost
  gather_facts: False
  vars:
    keypair: api
    instance_type: t2.small
    security_group: api-core
    image: ami-5189a661
    region: us-west-2
    vpc_subnet: subnet-xxxxxxx
    name_tag: api01
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: "{{ vpc_subnet }}"
        assign_public_ip: yes
        instance_tags:
          Name: "{{ name_tag }}"
      register: ec2
    - name: Add Route53 DNS record for this instance (overwrite if needed)
      route53:
        command: create
        zone: mycompany.com
        record: "{{name_tag}}.mycompany.com"
        type: A
        ttl: 3600
        value: "{{item.private_ip}}"
        overwrite: yes
      with_items: ec2.instances
    - name: Add new instance to proper ansible group
      add_host: hostname={{name_tag}} groupname=api-servers ansible_ssh_host={{ item.private_ip }} ansible_ssh_user=ubuntu ansible_ssh_private_key_file=/Users/grig.gheorghiu/.ssh/api.pem
      with_items: ec2.instances
    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 search_regex=OpenSSH delay=210 timeout=420 state=started
      with_items: ec2.instances

- name: Configure api EC2 instance
  hosts: api-servers
  sudo: True
  gather_facts: True
  roles:
    - base
    - tuning
    - postfix
    - monitoring
    - nginx
    - api
The first thing I do in this playbook is launch a new EC2 instance, add or update its Route53 DNS A record, add it to an ansible group, and wait for it to be accessible via ssh. Then I configure the instance by applying a handful of roles to it. That's it.
Some things to note:
1) Ansible uses boto under the covers, so you need that installed on your local host, and you also need a ~/.boto configuration file with your AWS credentials:
[Credentials]
aws_access_key_id = xxxxx
aws_secret_access_key = yyyyyyyyyy
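If boto is not already installed, one way to get it (assuming pip is available on the local host) is:

$ pip install boto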
2) When launching an EC2 instance via the ansible ec2 module, the hosts variable should point to localhost and gather_facts should be set to False.
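Those two settings are the first lines of the launch play above. Depending on your setup you may also want connection: local, which tells ansible to run the EC2 API calls locally instead of trying to ssh into localhost; a minimal sketch of the play header:

- name: Create a new api EC2 instance
  hosts: localhost
  connection: local   # optional; avoids ssh-ing into localhost
  gather_facts: False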
3) The various parameters expected by the EC2 API (keypair name, instance type, VPC subnet, security group, instance name tag etc.) can be set in the vars section and then used in the tasks section in the ec2 stanza.
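Since these are ordinary ansible variables, they can also be overridden at run time with --extra-vars, without editing the playbook; for example (the overridden values here are just placeholders):

$ ansible-playbook -i hosts/myhosts.empty ec2-launch-instance-api.yml -e "instance_type=t2.medium name_tag=api02"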
4) I used the ansible route53 module for managing DNS. This module has a handy overwrite property which, when set to yes, updates a DNS record in place if it exists, or creates it if it doesn't.
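The same module can also clean up the record when an instance is retired; a minimal sketch, assuming the same zone and record naming as above (for a delete, the value and ttl need to match the existing record):

- name: Remove Route53 DNS record for this instance
  route53:
    command: delete
    zone: mycompany.com
    record: "{{name_tag}}.mycompany.com"
    type: A
    ttl: 3600
    value: "{{item.private_ip}}"
  with_items: ec2.instances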
5) The add_host task is very useful in that it adds the newly created instance to a host group, in my case api-servers. This group already has a group_vars/api-servers configuration file, where I set various ansible variables used in different roles (mostly secret-type variables such as API keys, user names, passwords etc.). The group_vars directory is NOT checked in.
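That group_vars/api-servers file is just a plain YAML dictionary of variables; a minimal sketch with made-up variable names (the real file holds the actual secrets, which is exactly why it is not checked in):

---
api_key: XXXXXXXXXXXX
db_user: api
db_password: YYYYYYYYYY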
6) In the final play of the playbook, the [api-servers] group (which consists of only the newly created EC2 instance) gets the respective roles applied to it. Why does this group only consist of the newly created EC2 instance? Because when I run the playbook with ansible-playbook, I point it at an empty hosts file to make sure this group starts out empty:
$ ansible-playbook -i hosts/myhosts.empty ec2-launch-instance-api.yml
If instead I wanted to also apply the specified roles to my existing EC2 instances in that group, I would specify a hosts file that already has those instances defined in the [api-servers] group.
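For illustration, hosts/myhosts.empty can be literally empty, or just declare the group header with no hosts under it:

[api-servers]

whereas a populated hosts file (the host names here are made up) would list the existing instances so they get the roles applied as well:

[api-servers]
api01
api02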