There’s something built into us that drives us to automate everything. Every manual thing we don’t have to do frees us up to do something more exciting and useful.

We’ve been experimenting over the last year or so with lots of tools that inch us closer to the holy grail of continuous delivery and automated provisioning. Puppet, Chef, Vagrant, Capistrano, Grunt, Travis CI – all of them quite amazing in their own right.

Recently we’ve been using Ansible and loving it. It’s named after a science-fiction device that allows faster-than-light communication across infinite distances, and its BDFL is Michael DeHaan, who was behind Cobbler and Func. He has worked at Puppet Labs and Red Hat and has done an excellent job, in my opinion, of taking the best things from Puppet and Chef to create something much simpler and cleaner. He’s very responsive and the GitHub repo is extremely active.

We’ve used it on a big enterprise project that uses Nginx, Node.js, Redis, ElasticSearch and RabbitMQ on top of Red Hat. That’s quite a stack to manage across dev, staging, UAT and production environments. Especially in a controlled environment partitioned by firewalls and proxy servers.

A year ago, I wrote a set of Puppet scripts to provision this stack and it took me 3 weeks (getting it right for Ubuntu, CentOS and RHEL is a mission). It was complicated and I found the DSL uncomfortable so it didn’t really get updated as often as it should, rendering it almost useless quite quickly. It didn’t get a lot of love. And there wasn’t much reuse. I found the community fragmented. There’d be a ton of modules for each part of the stack and they’d all be a bit rubbish. So you’d have to fork, or worse, copy and modify to get anything useful.

So we tried Chef, using Librarian Chef to leverage work that others had done. That made it a load easier to get going. But I needed to tweak stuff. David’s excellent recent post about how to do this using wrapper cookbooks would have been a great solution for us. He used Berkshelf, which looks awesome. But I still find it unnecessarily complex.

The problem is twofold. Firstly, I want to know exactly how my production environment is configured. When something goes wrong I want to know where all the log files are, and that they’re all in the same place. And I want all the installs to follow the same conventions, not a random selection of conventions from across the community. So maybe my goal of reuse is not actually so useful, because I’d have to dig into each cookbook to understand its conventions and then maybe wrap it to bend it to be the same as the others. That’s a leaky abstraction which makes it too complicated already.

Which brings me to the second point. Which is that unless the scripts are bang up to date all the time, they become useless very quickly. That’s what happened to our Puppet scripts. If it’s harder to change the scripts than the environment itself, it’ll probably not happen. They become unloved. Everyone in the team has to be able to grok the scripts really quickly, so they can make changes without investing loads of time or fearing that they’ll break everything.

Ansible addresses these two issues for me. It’s simple and easy to understand. I can’t think of how it actually could be any simpler. It’s 100% declarative and in YAML too, which is minimal and to the point. Nothing more or less than is actually needed to do the job. Because it’s well thought out, it’s very expressive and easy to read. Even if you’ve never seen it before you can still grok what’s happening and adjust it straight away.
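To give a feel for that readability, here’s a hedged sketch of a playbook in the style of the time — the host group, package, and file names are invented for illustration, not taken from the project described in this post:

```yaml
# Hypothetical playbook sketch -- host group, package and file names
# are illustrative only.
---
- hosts: webservers
  sudo: yes
  tasks:
    - name: ensure nginx is installed
      yum: name=nginx state=present

    - name: push the nginx config
      template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
      notify: restart nginx

  handlers:
    - name: restart nginx
      service: name=nginx state=restarted
```

Even without knowing Ansible, you can read straight down the file and see what state each task declares and which handler a change triggers.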

And there are no leaky abstractions. The core modules are written in Python, but I’ve never had to dig into them. They’re small, idempotent, and do exactly what they say on the tin. That’s where the reuse comes in. The YAML layer on top is just direction: orchestration and configuration. So I was able to write the full stack in just a few days. And I know exactly what’s happening. My production servers are configured how I want them and I feel a lot better about them – knowing that there’s not a single thing that I haven’t put there myself. Again, nothing more and nothing less than what’s needed.

If you want to write your own modules (I haven’t needed to) you can write them in whatever language you want. There are only a couple of simple, Unixy rules – modules take their input on stdin and return JSON on stdout – so it’s easy to write something in, say, Node to do more complex stuff. Just add a shebang at the top of the file to tell the OS which interpreter to invoke and you’re done. But there are core modules for everything – even one called rabbitmq_user (to manage users in RabbitMQ, weirdly enough).
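To make that contract concrete, here’s a toy module sketch in Python (the argument format and messages are invented for illustration; a real module would follow whatever convention your Ansible version documents): it reads its input, parses simple key=value pairs, and prints a JSON result:

```python
#!/usr/bin/env python
# Toy module sketch: read input, parse "key=value" pairs, and print a
# JSON result of the shape Ansible expects ("changed", plus anything useful).
import json
import sys

def run(raw_args):
    # Hypothetical argument handling: split "name=badger"-style pairs.
    args = dict(pair.split("=", 1) for pair in raw_args.split())
    return {"changed": False, "msg": "hello %s" % args.get("name", "world")}

if __name__ == "__main__":
    print(json.dumps(run(sys.stdin.read())))
```

The whole module is the function plus two lines of plumbing, which is the point: anything that can read text and print JSON can be a module.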

But the best thing about it all, is that it works over SSH, so you don’t have to bootstrap the box. It’s a push from your local machine. I can create a new instance in AWS and provision it without even logging in (well I do, just to register the host key, but that’s it).

And another good thing is the task tagging that allows you to run just part of your playbook. Puppet and Chef don’t have that [edit: they do, see comments. I wish I had discovered that before :-)], and it makes it massively quicker to develop scripts when you can target just the task you’re working on rather than having to re-run the whole lot.
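For example (task and tag names invented for this sketch), tagging tasks lets you re-run only the slice you’re iterating on:

```yaml
# Hypothetical tagged tasks -- run just one slice of the playbook with:
#   ansible-playbook site.yml --tags "nginx"
- name: install nginx
  yum: name=nginx state=present
  tags:
    - nginx

- name: create rabbitmq user
  rabbitmq_user: user=guest password=secret state=present
  tags:
    - rabbitmq
```

While you’re debugging the Nginx config, the RabbitMQ tasks never run, which is where the development speed-up comes from.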

I’m not going to go on effusing about every great feature in Ansible, but I have to mention the templating. In template files, it’s full Jinja2. But you get most of it throughout the YAML too – you can put {{ }} expressions almost anywhere – making it very flexible and powerful. Nice job!
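A sketch of what that looks like (variable and file names are invented): the same {{ }} syntax works in a Jinja2 template file and inline in the YAML itself:

```yaml
# Hypothetical vars and tasks showing {{ }} interpolation in plain YAML.
vars:
  redis_port: 6379

tasks:
  - name: render redis config from a Jinja2 template
    # redis.conf.j2 might contain a line like:  port {{ redis_port }}
    template: src=redis.conf.j2 dest=/etc/redis/redis.conf

  - name: announce the port, using the same expression inline
    debug: msg="redis will listen on {{ redis_port }}"
```

One variable, defined once, flows into both the rendered config file and the playbook itself.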

Overall, I’ve found it a joy to work with, and look forward to a very bright future for it.
