Dockerizing CLI Tools
If infosec has one thing, it's a lot of command-line tools. Tools with needs. Tools that are a little finicky to set up sometimes. Tools that require a bunch of dependencies you're only ever going to need once. Tools with dependencies that hate the dependencies installed by other tools, triggering a sort of Kessler effect of colliding packages that ends in the ragewipe and reinstall of your entire damn testing environment.
But the tools are also non-negotiable. Maybe we could do our jobs with curl, vim, and netcat if we had to, but we would be much slower, charge much more, and most of us would quit in frustration. Finding a way to manage the demands of our tools is key. I know people who maintain an entire Linux VM because they finally got a wireless tool working on it and they're terrified they'll never be able to get it working again if they change anything.
This is the industry we chose, and we love her.
It's also why Docker exists. At least partially. I've been getting to know Docker better lately, and after migrating a few of my tools to Dockerfiles, I'm cautiously optimistic. This post covers a bit of that journey and should give you a leg up when doing your first Dockerization.
A Bit About Docker
Much ink has been spilled about Docker. I'm not going to rehash that beyond going over a few concepts you need to understand for the rest of the post to make any sense. I'm going to start with a Dockerized tool and work my way back to its genesis, because it actually makes a bit more sense that way.
Our tool runs in a container, which was launched from an image, a sort of on-disk template for containers (what a binary is to a running program), which in turn was built from a Dockerfile.
I'm going to focus on the Dockerfile, because if you get Dockerfiles, you're well on your way as far as I'm concerned.
Dockerfile
The Dockerfile confused me at first because I didn't realize that it was the bridge between two worlds. A prime example of that is the COPY . /foo command, which copies the files and folders in the current directory on the host to the /foo directory in your image, rather than copying to the /foo directory on the host.
You can think of a Dockerfile as a recipe for building your image. Let's look at a really simple Dockerfile I made for Aquatone.
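Reconstructed as a sketch, it looks something like this; the base image and install step here are placeholders rather than my exact file, but the final two statements match the ones discussed below:

```dockerfile
# Sketch only: the base image and install step are placeholders.
FROM debian:stable-slim

# Install Aquatone and its dependencies here (details omitted)
# RUN apt-get update && apt-get install -y ...

# Create a root-level directory for sharing files with the host
RUN mkdir /share

# Run Aquatone on container start, writing its output to /share
ENTRYPOINT ["aquatone", "-out", "/share"]
```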
The magic of this Dockerfile happens during the last two statements.
First, we create a root-level directory called share. We could call this anything we want and store it anywhere, but it's convenient at the root and the name is descriptive.

The second statement governs what happens when the container is run. In this case, aquatone -out /share.
What's so magical about that? Read on, dear reader.
Bash
I was having a hard time finding room for Docker in my toolchain until I realized I could just wrap everything in a shell script. This greatly simplified my workflow and actually helped clear up a bit of the confusion I was feeling about the whole host-container file sharing thing.
Let's walk through this line by line.
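Here's a sketch of what that wrapper looks like; the image name ("aquatone") and the exact docker run flags are assumptions, not the original script:

```bash
#!/bin/bash
# Wrapper that mounts a host directory into the container and passes
# everything else through to the tool.

# If the first argument is a directory, use it as the host side of the
# volume; otherwise default to the current working directory. Resolving
# to an absolute path keeps docker's -v flag happy.
if [ -d "$1" ]; then
    SHARE_DIR="$(cd "$1" && pwd)"
    shift
else
    SHARE_DIR="$PWD"
fi

# Mount the chosen directory at /share and pass every remaining
# argument through to the containerized tool.
docker run --rm -v "${SHARE_DIR}:/share" aquatone "$@"
```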
We need to tease apart the -v a bit more, because our Dockerized tool is completely useless without it.
Volumes
For our CLI tools to be useful, we need a way to pass data in (e.g. user lists, IP lists, etc.) and get data out (e.g. results). We do this by mounting a location on the host OS in our container (-v <HOST>:<CONTAINER>) and then using that folder to share the files we need back and forth.
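For example (the path here is illustrative), a run like the following makes a host directory visible inside the container:

```bash
# Everything the container writes to /share shows up in the host's
# ~/engagements/acme directory, and vice versa.
docker run --rm -v "$HOME/engagements/acme:/share" aquatone
```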
Aquatone takes an output directory as a parameter and, since we always want to save Aquatone's output, we can wire that right into the ENTRYPOINT statement ("-out", "/share"). That way, whenever the container runs, Aquatone will dump its output to /share. Always.
What changes between runs is where we mount the host side of that volume. The if statement in our shell script ensures that, if we omit a directory, Aquatone's output ends up in our current working directory, and if we specify a different directory, that directory is used instead. Once the output directory has been handled, all other parameters are passed to our newly Dockerized tool.
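Assuming the wrapper sketched above, that gives us two invocation styles (the directory is illustrative):

```bash
# No directory argument: results land in the current working directory
aquatone

# Directory as the first argument: results land there instead
aquatone ~/engagements/acme
```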
If you're instead passing data into the container, the process is the same, except you specify the source directory for that data as your first argument. You also have to prefix the path to that data in the parameters you send to the tool. For example, toolname -U users.lst... won't work; you need toolname -U /share/users.lst... instead.
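Concretely, sticking with the hypothetical toolname and an illustrative path:

```bash
# users.lst lives in ~/engagements/acme on the host; the wrapper
# mounts that directory at /share, so the tool sees the file as
# /share/users.lst.
toolname ~/engagements/acme -U /share/users.lst
```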
The final piece of this puzzle is to do a ln -sfv /path/to/aquatone.sh /usr/local/bin/aquatone, after which you can run the tool from anywhere.