Ansible for Webapps
I’ve been using Ansible more and more. It is a really great way to manage server configuration. As I’ve transitioned away from Chef, I’ve been working on establishing some patterns that seem to work well when setting up servers for deploying web applications.
Directory Structure
When using Chef, I would always create a git repository just for managing servers, with a combination of recipes and node files to cover all the various aspects of a project’s infrastructure.
For Ansible, I’ve found it easiest to just store the configuration alongside the application code. The bulk of the configuration goes in a deploy directory, with an ansible.cfg at the root.
app_root
|- ansible.cfg
|- deploy
   |- galaxy-roles
   |  |- .gitkeep
   |- group_vars
   |  |- all.yml
   |- hosts
   |- requirements.yml
   |- roles
   |  |- app
   |     ...
   |- setup.yml
Configuration
To start, we’ll run through the base configuration files for the setup. This layout works well for managing both custom roles created for the project and roles downloaded from Ansible Galaxy. It also handles a mixture of servers across deployment environments like staging and production.
ansible.cfg
[defaults]
inventory=deploy/hosts
remote_user=ubuntu
roles_path=deploy/galaxy-roles:deploy/roles
[ssh_connection]
ssh_args="-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=5m"
pipelining=True
scp_if_ssh=True
This file sets up some basic configuration for my setup. The inventory of servers will default to the file deploy/hosts, and by default we will connect as the ubuntu user (common with AWS and Vagrant).
Then I specify the directories for roles to be installed in. First comes deploy/galaxy-roles, so that when I install roles from Ansible Galaxy, they install into this directory; then deploy/roles, which is where we will put the roles we write just for this deployment.
In the ssh_connection section, we set some defaults that we want. The one unusual thing here is scp_if_ssh=True, which is necessary because the SSH hardening role disables sftp, so we want to use scp for file transfers.
hosts file with various environments
[local]
vagrant_local_vm ansible_host=127.0.0.1 ansible_port=2222 ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
[staging]
staging ansible_ssh_private_key_file=~/.ssh/aws-key.pem
[prod_web]
prod_web_1 ansible_ssh_private_key_file=~/.ssh/aws-key.pem
[prod_db]
prod_db_1 ansible_ssh_private_key_file=~/.ssh/aws-key.pem
Here we set various environments and the servers within those groups. We do this so we can filter the inventory we are applying a playbook to when we invoke ansible-playbook. For each server, if there are specific connection parameters we need, we apply them on that node. I generally set these hostnames in my ~/.ssh/config so that I don’t need to set IP addresses or other configuration parameters when I want to ssh to these machines later.
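With the groups in place, we can limit a run to one environment when invoking ansible-playbook. For example, assuming the setup.yml playbook shown below (quote the pattern so your shell doesn’t expand it):
$ ansible-playbook deploy/setup.yml --limit staging
$ ansible-playbook deploy/setup.yml --limit 'prod_*'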
group_vars
Here I set specific values for the roles I’m pulling from Ansible Galaxy. I’ve found that many roles provide nice tables of all the default variables and options you have, and I keep this file neat by alphabetizing the variables as they are usually prefixed by the role they are part of.
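As a sketch, a trimmed-down deploy/group_vars/all.yml might look like the following. The variable names come from the roles listed in requirements.yml below, but treat them as illustrative and check each role’s README for the current defaults:
newrelic_license_key: "{{ lookup('env', 'NEW_RELIC_LICENSE_KEY') }}"
nginx_remove_default_vhost: true
ntp_timezone: Etc/UTC
ssh_allow_agent_forwarding: true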
Galaxy
As defined in ansible.cfg, the roles path begins with galaxy-roles, so when we run ansible-galaxy install -r deploy/requirements.yml, the roles install into this directory.
git
I add a gitignore at deploy/.gitignore to keep some files out of git within the ansible deploy directory, and a .gitkeep file in the galaxy-roles directory to ensure we’ve got the folder.
*.retry
galaxy-roles/**
!galaxy-roles/.gitkeep
requirements.yml
Here is a sample of some roles I’ve been using:
- src: franklinkim.newrelic
  version: 1.6.0
- src: geerlingguy.nginx
  version: 2.1.0
- src: dev-sec.ssh-hardening
  version: 4.1.2
- src: https://github.com/ANXS/postgresql
  version: 9446ab512ff2a7c7bae21bc7ebba515192809433
- src: jnv.unattended-upgrades
  version: v1.3.0
- src: geerlingguy.ntp
  version: 1.4.2
- src: smartlogic.github_keys
  version: 0.1
These can be installed when you check out the repo using:
$ ansible-galaxy install -r deploy/requirements.yml
Playbooks
Depending on the situation, I create playbook files for different tasks. The most common is a setup.yml.
Main Setup
---
- name: Basic Setup
  gather_facts: yes
  hosts: all
  roles:
    - { role: dev-sec.ssh-hardening, become: yes }
    - { role: deploy_user }
    - { role: smartlogic.github_keys }
    - { role: jnv.unattended-upgrades, become: yes }
    - { role: postgresql, become: yes }
    - { role: geerlingguy.nginx, become: yes }
    - { role: geerlingguy.ntp, become: yes }
    - { role: app }

- name: Production servers get newrelic
  gather_facts: yes
  hosts: prod_*
  roles:
    - { role: franklinkim.newrelic, become: yes, tags: ['newrelic'] }
The basic setup is applied to all hosts. It is a mix of roles from Ansible Galaxy and ones local to this repository. The second play applies to all servers that match the pattern prod_*, so servers in groups that begin with prod_ in the deploy/hosts file get those roles. In this case we do that to install the NewRelic server monitoring only on our production servers.
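Because the newrelic role is tagged, it can also be re-run on its own without the rest of the setup:
$ ansible-playbook deploy/setup.yml --tags newrelic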
Others
Other files might be deploy.yml or migrate.yml, with roles and variables set for those actions. More complex examples of these are probably a post of their own.
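As a rough sketch of the shape these take (the tags here are hypothetical, and a real deploy.yml would carry variables for the release being deployed):
---
- name: Deploy application
  gather_facts: yes
  hosts: all
  roles:
    - { role: app, tags: ['deploy'] }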
Reuse and Roles
With this setup, any behavior shared between the various deployments you manage should be handled through a shared role in git or Ansible Galaxy. That way you avoid copying and pasting between setups.
The few roles that are specific to your web application live in the deploy/roles folder, and should be very specific to the application you are deploying. Examples of this might be creating user accounts or the specific nginx configuration files for your service.
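For instance, the deploy_user role referenced in setup.yml might contain little more than the task below. This is a sketch; the user name and groups are assumptions to adjust for your application:
# deploy/roles/deploy_user/tasks/main.yml
- name: Create the deploy user
  user:
    name: deploy
    shell: /bin/bash
    groups: www-data
    append: yes
  become: true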
Sample custom role
A simple set of tasks that installs our nginx configuration based on whether an SSL certificate is present:
- name: Check for app certs
  stat:
    path: /etc/ssl/private/app.crt
  register: app_cert
  become: true

- name: Template app no-ssl version
  copy:
    src: app.nossl.conf
    dest: /etc/nginx/sites-available/app.conf
    owner: www-data
    group: www-data
  when: app_cert.stat.islnk is not defined
  notify: restart nginx
  become: true

- name: Template app ssl version
  copy:
    src: app.ssl.conf
    dest: /etc/nginx/sites-available/app.conf
    owner: www-data
    group: www-data
  when: app_cert.stat.islnk is defined
  notify: restart nginx
  become: true
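These tasks notify a restart nginx handler that isn’t shown above. A minimal version in the role’s handlers/main.yml might look like this, assuming nginx is managed as a system service:
# deploy/roles/app/handlers/main.yml
- name: restart nginx
  service:
    name: nginx
    state: restarted
  become: true
Note that stat only populates attributes like islnk when the path exists, so the when: conditions above are effectively an existence check on the certificate.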
Overlapping deployments
One thing I’ve considered with this setup is that if you have a more service oriented architecture, you might want to use a common Ansible repository that is shared for all your various services. I think that might make sense at some scale.
For most of what I do, simply creating reusable and shared roles keeps the amount of duplication between setups minimal. Yes, if you run two projects’ Ansible playbooks, there might be a lot of duplication in which actual tasks are run, but they should be idempotent and guarded by the right checks to prevent duplicate work.
Conclusion
So far this setup has worked well for me. I’m sure it will continue to evolve as I do more and more with Ansible, but the end result is that I have a simple configuration alongside my application code that makes setting up and maintaining servers easier than it has ever been.