Automating deployment of Drupal using Ansible

Learn how to deploy your Drupal site to production using Ansible

In the previous post, we created and booted a fully dockerized Drupal setup. We will be using Ansible to automate the whole deployment process from start to finish.

Why Ansible

Primarily because I’m a huge fan of Ansible. It is agentless, has a great ecosystem, and its YAML syntax is simple to read, understand and maintain (though, honestly, it is sometimes tiring to figure out what exactly is happening). This could be automated with any other provisioning tool, like Chef or Puppet, as well.

The other decision we will make is to keep the Ansible playbooks as part of our codebase, living alongside our Drupal code. Having your infrastructure and deployment definitions in your code repository is widely considered good practice. All in all, this will be a self contained repository with both the code and the instructions on how to deploy it. It is still not technically a 100% infrastructure-as-code setup, as we only have the provisioning scripts checked in, not the code to spin up the actual servers. The playbooks assume the servers already exist, have Docker and Docker Compose installed, and are reachable over SSH.

This setup makes the deployment process consistent and repeatable. Any developer on your team with the necessary permissions can run the script and get the same results every time. And when a build fails, it fails loud and clear, so you know where it went wrong.

Some limitations

I’d like to lay out the limitations of this setup before we dive in. For simplicity’s sake, this process does not guarantee a rollback. If, for instance, a deployment fails and you want to return to the previous state, you have to do it manually; the setup has no automated rollback, though it does store DB backups. That said, it wouldn’t be too difficult to add a rollback mechanism behind a rollback tag with a few parameters, like which commit to roll back to, which DB backup to restore, etc.

There is a small downtime while the older containers are brought down and the new containers are built and brought up. This is not a big concern for a small to medium site, but if you are traffic heavy, you need to take steps to prevent it. I’m working on alternative solutions for this one and am open to suggestions.

The nature of Drupal in general makes 12-factor practices like rollbacks and zero downtime deployments hairy.

What steps to run

An important precursor to automating is to document each step and have a script for it. Fortunately, we already have most of that for our stack. We can divide our tasks into 2 broad categories:

  • stuff we do once, when we set up the system. Ex: creating DB backup directories
  • stuff we do for every deploy. Ex: running DB updates via drush

Ansible has the concept of tags, which we will exploit for this purpose. We define 2 tags for the above categories: one called setup, another called deploy.

List of setup only tasks:

  1. Create a directory for DB files to persist
  2. Create a directory for storing DB backups
  3. Create a directory for storing file backups
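As a sketch (directory paths and variable names here are my assumptions, not exact code), the setup-only tasks map naturally onto Ansible's file module:

```yaml
# Setup-only: create the persistent data and backup directories once.
- name: Create directories for DB files and backups
  file:
    path: "{{ item }}"
    state: directory
    owner: deploy
  loop:
    - "{{ db_data_path }}"
    - "{{ db_backup_path }}"
    - "{{ files_backup_path }}"
  tags:
    - setup
```

Because these tasks carry only the setup tag, a deploy-tagged run skips them entirely.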

List of tasks for both setup and deployment:

  1. Create a backup of files and DB
  2. Clone the correct code, i.e. the specified branch or the bleeding edge
  3. Create the .env file (this is not checked in and needs to be created)
  4. Build and boot latest containers for all services
  5. Run composer install (Drupal specific)
  6. Run DB updates (Drupal specific)
  7. Import config from files (Drupal specific)
  8. Clear cache (inevitably Drupal specific)
Security considerations and playbooks

It is important to secure your servers before you deploy the application. This is handled by an Ansible playbook as well, one which can be used for any web application, not just this stack. Also, when running the playbooks, you will be handling a lot of sensitive information, like DB credentials, the SSH key pair and the server user credentials. If we follow the infrastructure-as-code paradigm, it is not practical to check these in as plain text. Ansible Vault lets us store them encrypted, taking a password to encrypt or decrypt them. This password can be supplied via a user prompt or from an environment variable. We will use the latter approach so that it is easier to fully automate in future.
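As a sketch (the file layout is an assumption), the secrets can live in a vaulted vars file, referenced indirectly from a plain vars file so that a grep for a variable name still finds it:

```yaml
# group_vars/all/vars.yml — plain, checked in; points at the vaulted values
db_user: "{{ vault_db_user }}"
db_password: "{{ vault_db_password }}"

# group_vars/all/vault.yml — encrypted on disk with:
#   ansible-vault encrypt group_vars/all/vault.yml
vault_db_user: drupal
vault_db_password: changeme
```

Only vault.yml is encrypted; everything else stays readable in the repository.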

We will use the Ansible template module to create the nginx configuration file and the .env file.

- name: Create .env file
  template:
    src: "templates/dotenv-{{ env }}.j2"
    dest: "{{ project_path }}/.env"
    owner: deploy
  tags:
    - deploy
    - setup
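For completeness, the corresponding templates/dotenv-dev.j2 could look something like this (the variable names are illustrative; the sensitive values come from the vaulted vars):

```
MYSQL_DATABASE={{ db_name }}
MYSQL_USER={{ db_user }}
MYSQL_PASSWORD={{ db_password }}
DRUPAL_ENV={{ env }}
```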

Non prod environments

This setup allows you to easily create production replicas, i.e. non production environments. This is where the script shines. You have to make the following environment specific changes:

  1. Where your codebase will live on the servers. The environments won’t share code, for obvious reasons.
  2. Environment specific credentials, like the DB credentials
  3. The environment name
  4. The environment specific domain

I’ve written the script to support a prod environment and a non prod dev/staging environment. You can extend it to as many environments as you want. This feature is handy when you want to:

  1. Showcase a new feature to a client
  2. Reproduce a production bug and fix it
  3. Test an unshipped feature

NOTE: You have to keep search engine crawlers from crawling your non production sites. I thought of using .htaccess password protection to achieve this, but ditched that approach in favour of editing the robots.txt rules.

Here’s how I do it in the playbook:

- name: Update robots.txt to disallow search engines for non prod site
  template:
    src: "templates/robots.txt.j2"
    dest: "{{ project_path }}/web/robots.txt"
    owner: deploy
  tags:
    - deploy
    - setup
  when: env != "prod"

And the contents of my non prod robots.txt:

User-agent: *
Disallow: /

Yeah, keep off my staging site, you greedy crawlers! (Well-behaved crawlers respect that.)

Running Ansible

If you are running the deployment setup for the first time, run the setup-tagged tasks first (assuming you’ve secured your servers and have Docker and friends installed).

First, set the vault password in your shell.

$ export ANSIBLE_VAULT_PASSWORD=supersecret

$ ansible-playbook -i "," playbook.yml  --vault-password-file=./vault-env --extra-vars "env=dev" --tags setup

Of course, you should have Ansible installed on your local machine for this to work. The -i flag specifies the inventory, i.e. the machine(s) where the script will run, and --vault-password-file points to where Ansible should find my vault decryption password (otherwise it will fail trying to decrypt the secrets). Notice that I inject the environment from the command line and run the setup-tagged tasks first. This will set up and deploy your site for the first time.
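The ./vault-env file is the bridge between the environment variable and the flag: Ansible treats an executable vault password file as a script and reads the password from its output. It can be as small as this (my own convention, not anything Ansible ships):

```shell
#!/bin/sh
# vault-env: Ansible runs this script and reads the vault password
# from stdout; here it simply comes from the environment variable
# we exported above.
echo "$ANSIBLE_VAULT_PASSWORD"
```

Remember to chmod +x it, and never check in the password itself.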

Once you make changes and want to deploy, you run:

$ ansible-playbook -i "," playbook.yml  --vault-password-file=./vault-env --extra-vars "env=dev" --tags deploy

This will run only the deployment related steps. You have successfully created a one step build and deploy process for your Drupal site. Now if only the whole thing ran when I do a git push: push to master and a production deployment happens; push to a dev branch and it deploys to staging, or something similar. We are talking about continuous delivery here, the holy grail of any agile team. That will be the subject of the next post!

docker Drupal Planet