The Terraform SSH Module
I've been revisiting Terraform lately and have worked through a few of the awkward bits I ran into my first time around. Near the top of that list was SSH key management.
I'd always encountered SSH keys provisioned alongside the compute resources you'd be accessing with them, so I just configured them like any other variable. This worked well enough until I added a second compute resource and started debugging error messages about keys that already existed.
I was treating SSH keys as a VM-specific resource, but Amazon and DigitalOcean both approach the matter differently.
It's the Service, Stupid
Amazon and DigitalOcean both associate an SSH keypair with the service first, and then the instance. If you provision `box1` with `keypair1` and then try to provision `box2` with `keypair1`, you'll get an error that the key already exists, which is totally unsurprising with the benefit of hindsight.
Both of these services treat keypairs as a general security feature of your account. New instances are simply associated with these keypairs during provisioning.
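For example, once `keypair1` exists at the account level, an instance just references it by name during provisioning. Here's a minimal sketch; the AMI and instance type are placeholders:

```hcl
# Hypothetical instance that reuses the existing account-level key.
resource "aws_instance" "box1" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t2.micro"

  # Associate, don't create: the keypair already lives on the account.
  key_name = "keypair1"
}
```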
The SSH Module
Enter the SSH module. I'm going to focus on AWS here since that's the page I've got up in the other window, but there are very few moving parts, so you'll be able to apply this elsewhere with minimal fuss.
Here's what a fully-configured `aws_key_pair` resource looks like (the key name and key material below are placeholders):
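```hcl
# Registers a public key with your AWS account; note that nothing
# here ties the key to any particular instance.
resource "aws_key_pair" "keypair1" {
  key_name   = "keypair1"
  public_key = "ssh-rsa AAAAB3NzaC1yc2E... user@host" # contents of id_rsa.pub
}
```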
That quickly becomes this (the module source and input names are illustrative):
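```hcl
# Illustrative module call wrapping the aws_key_pair resource above;
# the source path and input names depend on how you lay the module out.
module "ssh" {
  source     = "./modules/ssh"
  key_name   = "keypair1"
  public_key = "ssh-rsa AAAAB3NzaC1yc2E... user@host"
}
```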
Create an environment named "aws-ssh" or something similarly descriptive, then run the usual cycle:
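```
$ terraform init
$ terraform plan
$ terraform apply
```

You'll get some output like this (assuming the module exports the key's `fingerprint` attribute):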
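```
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

fingerprint = "d2:ba:3c:4e:5f:61:72:83:94:a5:b6:c7:d8:e9:fa:0b"
```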
This is helpful, since the fingerprint is how you'll tell your provider which key to use. If you need to access this output again, you can just run `terraform output` from your environment to get it.
One More Thing
There are two ways you can provide your key to your SSH module. The first is to simply paste the contents of `id_rsa.pub` into your config. The second is a bit more elegant (paths here assume a standard `~/.ssh` setup):
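```hcl
# Same illustrative module as above, but the public key is read
# off disk at plan time; pathexpand() resolves the leading "~".
module "ssh" {
  source     = "./modules/ssh"
  key_name   = "keypair1"
  public_key = file(pathexpand("~/.ssh/id_rsa.pub"))
}
```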
This loads the keyfile from disk first, then passes it to the module for provisioning.
Nice!