Single-System Builds with Ansible
For most of my Linux-using life I would blow up my install fairly regularly. It was often quicker to rebuild from scratch than to back out whatever nonsense I had been experimenting with.
Then last year I had my adventures with the Pwn Plug and rage-coded the beginnings of a project to build my own using Ansible. This quickly turned into a proof-of-concept modular pentesting framework, and YAMS was born.
As these things often go, a few hundred hours into YAMS I realized that Ansible Galaxy existed, which did pretty much exactly what I was doing but obviously way better, so I mothballed that project and started moving all my roles there.
For The Uninitiated...
If you're familiar with Ansible, you can skip this part. If not, here's a really short primer. Ansible allows you to define infrastructure as code. I have yet to run into something not covered by one of its modules.
So pretty much any configuration you want can be defined in an Ansible Playbook. Great. Where things get magical, and where I saw YAMS fitting in, is the roles. A role is a collection of software and settings designed for reusability. Build it once, use it everywhere. How many ways are there to install Metasploit? Just one. Create a role for it and now installing Metasploit looks like this:
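The original snippet was lost here. As a sketch of what it likely showed: assuming a role named metasploit sits under your roles_path (the name is hypothetical; substitute whichever Metasploit role you use), pulling it into a playbook is a single line in the roles list:

```yaml
roles:
  - metasploit   # hypothetical role name
```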
Yup.
Anyhow, Galaxy is a collection of these roles and comes included with Ansible. If you've got Ansible, you can use Galaxy.
We're a Little Different
As near as I can tell, Ansible is designed to deploy and maintain sprawling infrastructure. We're going to be using it to build a single machine, so we need to do a couple of things to lay the groundwork for our hack workarounds.
First up, we're going to need to create the following files: ~/.ansible.cfg and ~/.ansible/hosts.
ansible.cfg
This is obviously a configuration file. The only thing we need to do here is tell Ansible where roles will be living and where it should look for the details of the hosts it will be managing:
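The config block was lost here. A minimal sketch, assuming a reasonably modern Ansible (older versions used hostfile instead of inventory), with paths matching the files this post sets up:

```ini
# ~/.ansible.cfg
[defaults]
roles_path = ~/.ansible/roles
inventory  = ~/.ansible/hosts
```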
hosts
This contains a list of hosts that Ansible will be managing. I imagine that in real environments this gets quite complex, but ours is stupidly simple.
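The inventory block was lost here. A minimal sketch, assuming you're targeting the machine you're sitting at (for a remote box, use its hostname or IP and drop the local connection setting):

```ini
# ~/.ansible/hosts — a single group containing only this machine
[ansible_hosts]
localhost ansible_connection=local
```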
Any time you want to provision a new box using one of your playbooks, you'll need to make sure that the value of hosts lines up with a heading in your hosts file. In this case, that would be ansible_hosts. We'll see this in practice later.
Installing Roles
So far so good. As I mentioned before, one of the really neat things about Galaxy is that you can use anyone's roles. This comes with the obvious caveat of make-sure-you're-not-installing-shells-by-mistake, but otherwise it's a marvellous thing.
Before a role can be deployed, it must be installed on the Ansible Control Machine. This is the one that will be running the deploy, not the one that will be on the receiving end.
For our purposes, there are two ways of making this happen.
Non-Galaxy
This method is pretty simple. You somehow need to get a valid Ansible role onto your system. You can clone someone else's Git repo, or maybe build your own. All that matters is that it can be found under roles_path.
If you keep your things all over the place, you can always just symlink it there like so:
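The command was lost here. A sketch with hypothetical paths (adjust ~/code/my-role to wherever your role actually lives):

```shell
# Make sure roles_path exists, then link the role into it
mkdir -p ~/.ansible/roles
ln -s ~/code/my-role ~/.ansible/roles/my-role
```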
Galaxy
This way is a lot easier. First, find the Galaxy role you want to use. We'll use my Empire one for the demo.
Galaxy roles use the <author_name>.<title> format, so my role is called leesoh.empire.
We install it like so:
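The command block was lost here; the standard ansible-galaxy invocation, which should match what originally appeared, is:

```shell
ansible-galaxy install leesoh.empire
```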
Magic will ensue and the role will end up on your system.
Build Your Playbook
Alright! We've implemented our hack workarounds and installed our roles; let's deploy! We do this using a playbook.
Our playbook is going to tell us three things:
- The hosts on which to act
- The remote user account to ssh as
- The roles to install
Let's assume you have a role of your own configured on your machine called my-little-pony-wallpaper. This is just stored locally, for some reason, and you've also installed my Empire role from Galaxy. To deploy both of these to your target system, this is what your playbook should look like:
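The playbook itself was lost here. A sketch covering the three things listed above; remote_user is a placeholder for whatever account you'll be ssh-ing in as:

```yaml
---
- hosts: ansible_hosts
  remote_user: yourname          # hypothetical; the account Ansible will ssh in as
  become: yes
  roles:
    - my-little-pony-wallpaper   # local role under roles_path
    - leesoh.empire              # installed from Galaxy
```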
Save that puppy as pwnage.yml.
Release the Beast
Finally we arrive! Deploying is easy as pie:
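The command was lost here; given the filename and flag mentioned in this post, it would have been:

```shell
ansible-playbook pwnage.yml --ask-become-pass
```

With inventory set in ~/.ansible.cfg, there's no need to pass -i on the command line.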
The --ask-become-pass flag just tells Ansible to prompt us for our sudo creds, since you're deploying those sweet ponies to ALL the users.
That's it! Your box is now provisioned.
Ongoing Care and Feeding
If you wanted to win extra devops points, you'd use your playbook every time you wanted to make a change to your system. If you needed new software, you'd build a role and then deploy it with Ansible. Most roles also include parameters for updating, so rerunning the playbook will update the installed software as well.
The real beauty of this approach is that if you have a work testing machine, then one you use at home for CTFs, and maybe one you stuff into a cloud instance to catch those shells, you still just need a single playbook.
Marvellous!!