I have an occasional need for a serious amount of GPU. This was a problem before BTC went all nutty, and it's an even bigger problem now. Buying a dedicated cracking rig is expensive even if you'll be using it 24/7; when you've only got occasional needs, it's cost-prohibitive.
So I've been watching Amazon's steady march toward high-powered GPU compute instances with some interest. The P2 series was beefy; the P3 is insane. I initially wanted to run benchmarks against the p3.16xlarge instance (8 x Tesla V100), but Amazon refused to give me one, so I settled for playing with the p3.8xlarge (4 x Tesla V100) and relied on other people's benchmarks for the rest of my data.
I only ran a few of the benchmarks on my instance. As you can see, power scales pretty predictably, so you can just halve the values from the full p3.16xlarge to see what you'd get on a p3.8xlarge.
| Hash Type | 8 x 1080  | p2.16xlarge | p3.8xlarge | p3.16xlarge |
|-----------|-----------|-------------|------------|-------------|
| NTLM      | 330 GH/s  | 136.4 GH/s  | 316.7 GH/s | 633.7 GH/s  |
| NTLMv2    | 13.1 GH/s | 3.90 GH/s   | 15.6 GH/s  | 28.9 GH/s   |
| TGS-REP   | 2.35 GH/s | 0.74 GH/s   | 3.99 GH/s  | 8.07 GH/s   |
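If you want to reproduce these numbers yourself, hashcat's built-in benchmark mode will do it. A small sketch that emits the relevant invocations (my assumption: the NTLMv2 row is NetNTLMv2, hashcat mode 5600, and TGS-REP is Kerberos 5 TGS-REP etype 23, mode 13100):

```shell
# Emit the hashcat benchmark commands for the three hash types in the
# table above. Mode numbers are hashcat's: 1000 = NTLM, 5600 = NetNTLMv2,
# 13100 = Kerberos 5 TGS-REP etype 23. Pipe to sh to actually run them.
for entry in "NTLM 1000" "NTLMv2 5600" "TGS-REP 13100"; do
  set -- $entry                       # word-split into name ($1) and mode ($2)
  echo "hashcat -b -m $2    # $1"
done
```

Running the printed commands requires hashcat plus working GPU drivers, which is exactly what the instance setup later in this post takes care of.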
Break-Even Times
So how much cracking do you have to do to make an 8 x 1080 rig the more economical option?
- p3.8xlarge: 670 hours
- p3.16xlarge: 335 hours
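The arithmetic behind those figures is just rig cost divided by hourly rate. A quick sketch, using my own assumed numbers (roughly $8,200 for an 8 x 1080 rig, and on-demand us-east-1 rates of about $12.24/hr for the p3.8xlarge and $24.48/hr for the p3.16xlarge):

```shell
# Hypothetical figures: ~$8,200 for the 8 x 1080 rig; on-demand P3 rates
# of $12.24/hr (p3.8xlarge) and $24.48/hr (p3.16xlarge).
rig_cost=8200
awk -v c="$rig_cost" 'BEGIN {
  printf "p3.8xlarge  break-even: %.0f hours\n", c / 12.24
  printf "p3.16xlarge break-even: %.0f hours\n", c / 24.48
}'
```

With those assumptions the math lands right on the 670- and 335-hour figures above.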
As far as horsepower goes, the p3.8xlarge is on par with the 8 x 1080 rig, while the p3.16xlarge is twice as fast. I didn't expect that to work out so nicely, but there you go.
But Wait, There’s More!
I started to get a bit nervous about leaving a $25/hour instance running while I did my testing, so I created an Ansible playbook to build out Hashcat on an Ubuntu 16.04 box.
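A playbook like that might look something like the fragment below. This is my own minimal sketch, not the author's actual playbook: the host group, paths, and task list are all illustrative, and it assumes the NVIDIA drivers are already handled elsewhere.

```yaml
# Hypothetical excerpt: install build deps, then fetch and build hashcat
# from source. Host group and paths are illustrative.
- hosts: cracking
  become: true
  tasks:
    - name: Install build dependencies
      apt:
        name: [build-essential, git]
        update_cache: yes

    - name: Clone hashcat
      git:
        repo: https://github.com/hashcat/hashcat.git
        dest: /opt/hashcat

    - name: Build hashcat
      make:
        chdir: /opt/hashcat
```

Keeping the setup in a playbook means the expensive instance only needs to exist for as long as the actual cracking run.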