Reactive Hashing

I recently had a version of this idea proposed to me by the team over at FusionAuth: “Why not determine the type of hashing algorithm used on a given password based on the plaintext?” Quite frankly, it was one of those things that, the instant I heard it, seemed incredibly obvious. Here’s a bit of background…

A month or so ago I wrote a blog post about how I kinda super don’t like Bcrypt, for a number of reasons. That post turned out to be quite controversial; if you’re interested, you can check it out here: https://blog.benpri.me/blog/2019/01/13/why-you-shouldnt-be-using-bcrypt-and-scrypt/.

Essentially there are two major reasons why the usage of Bcrypt may detract from the overall security of your organization:

  1. Bcrypt may open you up to DoS attacks or otherwise waste resources.
  2. Over-reliance on Bcrypt causes many organizations to weaken their password policies, massively impacting their overall security.

Here’s the rub. The maximum (omitting “chaotic” probabilities) amount of work required by any adversary to crack a given hash can be computed with the following equation: E = C^L * T, where C is the complexity of the plaintext, L is the length of the plaintext, and T is the time to compute a single hash value. If you already understand what is meant by these terms, go ahead and skip to “T has limitations…” Otherwise, keep reading and we’ll cover exactly what each of them means.

C

Let’s start with C. When we talk about the “complexity” of a given plaintext password, what we mean is the number of possible values at any position in the plaintext. Many websites require you to include both an uppercase and a lowercase character, as well as a number, in your password for exactly this reason. The idea is that if you increase the types/ranges of characters in the password, an adversary has to guess through all of those characters to successfully guess your password. The more characters there are to guess, the less likely it is that an adversary will guess your password correctly.

L

When we talk about the length of the password (L), we’re talking about how many characters long it is. In the equation C^L * T, increasing L is what makes the overall value grow the fastest (if you could add some number N > 0 to C, or L, or T, adding it to L grows the equation the most, given minimum values for C and T). If you had to pick between a short but complex password and a long but simple password, you would almost certainly want the long and simple one, as it tends to be far harder to guess…

T

Finally, the most enigmatic of these values is T. In plain English: T represents how long it takes to compute the hash value for a given password. This matters when determining how long it would take an attacker to brute force a given password. If an attacker has to make 1000 guesses to get the right answer, a hashing algorithm that takes 1/1000th of a second to compute means he gets the right answer in about a second. But if the hashing algorithm takes a full second per run, he won’t get the right answer for over 15 minutes.
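To make the relationship concrete, here’s a tiny Python sketch of the E = C^L * T idea. The function name and the example speeds are mine, purely for illustration; T is expressed as seconds per guess.

    def work_factor(charset_size, length, seconds_per_hash):
        """E = C^L * T: worst-case seconds for an attacker to brute force."""
        return (charset_size ** length) * seconds_per_hash

    # The example from above: 1000 guesses at two different hash speeds.
    guesses = 1000
    print(guesses * (1 / 1000))       # fast hash: about 1 second total
    print(guesses * 1.0 / 60)         # 1-second hash: about 16.7 minutes

    # A 10-character lowercase-alphanumeric password against a fast hash:
    print(work_factor(36, 10, 1e-9))  # ~3.7 million seconds, roughly 42 days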

T has limitations…

There is, however, a very real difference between C/L and T. C and L determine the number of possible passwords the attacker might have to guess to get the right answer. T is more of a logistical hurdle placed upon every guess. Because of this distinction, it is very important to note that increasing the value of T has some very real limitations.

And this is my issue with Bcrypt. The whole point of Bcrypt is to increase the value of T in order to slow down an attacker trying to brute force a password hash. Up to a certain point, this is genuinely valid; most modern hashing algorithms are designed to be as quick as possible for efficiency’s sake. However (and this is something I don’t think many people understand) there is a very hard ceiling on what Bcrypt can and cannot do.

I want to demonstrate this by running some numbers… Let’s generate the E values for a passphrase (L-driven growth) and for a regular password protected by some abstract “slow hashing algorithm” (T-driven growth). More specifically, I want to know how big a T value we would need in order to match a reasonable passphrase in strength.

Here’s what we do. First we build our model passphrase. Let’s say it’s 30 characters long and alphanumeric (lowercase only). So we have an L of 30 and a C of 36. For our T value we’ll use a baseline of 1 and express the upcoming “slow hashing algorithm” as a multiple of that baseline… so our T is equal to 1 for simplicity.

So the E value for our passphrase is E = 36^30 * 1, or E ≈ 4.89 * 10^46. That’s a huge number. That’s the number of guesses an attacker would need to make in order to (for sure) crack our model password. It’s very unlikely that an attacker would ever guess our password… astronomically so. We achieved that by using a long password: increasing our L value to increase our overall E.

So the question becomes: how big of a T do we need for our “slow hashing algorithm” (abstract Bcrypt) to reach the same level of security? Let’s model a new password, one that isn’t a passphrase… a “normal” password, the kind that algorithms like Bcrypt are supposed to magically protect. I won’t even make it a weak password, I’ll be gracious. Let’s say alphanumeric (lowercase), with a length of 10. That’s a middle-of-the-road, real-world example of a password model.

This model gives us a C of 36 (as before) and an L of 10. Now, in this situation, we don’t yet know our T, because T is the unknown we’re solving for. So we’ll leave T as a variable, giving us E = 36^10 * T. Next, to scale T so that this password is as strong as the last one, we set the two equations equal to each other: 36^30 * 1 = 36^10 * T.

Now all we have to do is solve for T, which gives us T = 36^20. Let me try to explain how jaw-droppingly large of a problem this is… Remember how we set our “baseline” T at 1? Imagine we now substitute “the time taken to run SHA256” for that baseline of 1. In this case, the math clearly shows that in order to “secure” a normal password to the same level of strength as a simple passphrase using the “slow hashing algorithm” approach, you would need an astronomically slow hashing algorithm. For our example, you would have to run an algorithm that was equivalent to 13367494538843734067838845976576 (roughly 1.3 * 10^31) runs of SHA256 EVERY SINGLE TIME THE USER LOGS ON!!!

This is clearly insane.
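If you want to check that arithmetic yourself, here’s a quick Python sketch (the variable names are mine):

    # E for the 30-character, lowercase-alphanumeric passphrase at baseline T = 1.
    passphrase_E = 36 ** 30           # ~4.89e46 guesses

    # The 10-character password's E is 36^10 * T; set it equal to the
    # passphrase's E and solve for T.
    password_space = 36 ** 10
    required_T = passphrase_E // password_space

    print(required_T)                 # 36^20 = 13367494538843734067838845976576
    print(required_T == 36 ** 20)     # True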

Look, there’s definitely something to be said for setting some reasonable baseline T > 1 that doesn’t eat up your server’s resources but does make an attacker’s day significantly harder. And as long as you’re not weakening your password policy because “Bcrypt makes us safe,” I’m not gonna complain. This is where Reactive Hashing comes in…

Reactive Hashing Explained

The idea here is that you dynamically set the value of T based on an analysis of the plaintext password, somewhat like what we did above in our modeling exercise. Except, obviously, to avoid ending up in the same situation our exercise left us in, you would need to set reasonable limits on T: it can never grow too large. If your T value grows so large that your users are waiting longer than a second to log in to your application, you’re already in the danger zone.

But there’s no reason why you couldn’t dynamically change the value of T (or even C and L, for that matter) within constraints, to bolster the strength of stored passwords. Remember, passwords are not “secured” by the algorithm they’re hashed with… If someone’s password is “password,” you could hash it with Bcrypt and it would still be cracked within a second.
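To make the idea concrete, here’s a rough sketch of what reactive selection of T might look like. This is my own illustration, not code from FusionAuth or SecRate.io: the character-set estimate is deliberately crude, PBKDF2-SHA256 stands in for whatever slow hash you prefer, and the target E value and iteration caps are invented for the example.

    import hashlib
    import os
    import string

    def estimate_charset(password):
        """Crude estimate of C: sum the sizes of the character classes used."""
        c = 0
        if any(ch in string.ascii_lowercase for ch in password): c += 26
        if any(ch in string.ascii_uppercase for ch in password): c += 26
        if any(ch in string.digits for ch in password):          c += 10
        if any(ch in string.punctuation for ch in password):     c += 32
        return max(c, 1)

    def reactive_iterations(password, target_e=1e20, min_iters=10_000, max_iters=1_000_000):
        """Pick an iteration count (our T) so that C^L * T approaches target_e,
        clamped so that login never becomes painfully slow."""
        guess_space = estimate_charset(password) ** len(password)   # C^L
        needed = int(target_e / guess_space) if guess_space < target_e else min_iters
        return max(min_iters, min(needed, max_iters))

    def hash_password(password):
        salt = os.urandom(16)
        iters = reactive_iterations(password)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iters)
        return digest, salt, iters

Note that whatever you store to record the chosen T (an iteration count, a cost factor) is itself a hint about how strong the password was, which ties directly into the timing concern raised in the comments below.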

Our goal should simply be to manipulate the underlying characteristics of the password to raise our E value above a threshold past which we assess no attacker is ever likely to successfully brute force it. I’ve been experimenting with this on my new website https://SecRate.io, using a dynamic password policy that changes the complexity requirements based on the length of the user’s chosen password.
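As a rough illustration of what a dynamic policy like that could look like (this is not SecRate.io’s actual implementation; the length and character-class thresholds here are invented for the example):

    import string

    def password_acceptable(password):
        """Longer passwords earn relaxed complexity requirements; shorter
        ones must compensate with a richer character set."""
        classes = sum([
            any(ch in string.ascii_lowercase for ch in password),
            any(ch in string.ascii_uppercase for ch in password),
            any(ch in string.digits for ch in password),
            any(ch in string.punctuation for ch in password),
        ])
        if len(password) >= 20:   # long passphrase: any single class is fine
            return True
        if len(password) >= 12:   # medium length: at least two classes
            return classes >= 2
        if len(password) >= 8:    # short: demand at least three classes
            return classes >= 3
        return False              # anything shorter is rejected outright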

I’m also considering working on an open source library to assist developers in implementing these ideas into their next project. If that sounds interesting to you, give me a shout either on Twitter @zaeyx or via ben@literallywhatever.com.

P.S. Since I didn’t mention it anywhere else in this article: how we “dynamically” set the time value is literally by chaining together a hashing algorithm like SHA256. We just run SHA256 in a chain to take up more time… for example, if we wanted T = 3 we could write code like: h = sha256(sha256(sha256(password))).
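In real code the chain is driven by a loop rather than nested calls. A minimal Python sketch of the idea (illustration only; a real implementation would also salt the input and store the chain length alongside the hash):

    import hashlib

    def chained_sha256(password, t):
        """Run SHA256 t times in a chain, feeding each digest into the next."""
        digest = password
        for _ in range(t):
            digest = hashlib.sha256(digest).digest()
        return digest

    # T = 3, equivalent to sha256(sha256(sha256(password)))
    h = chained_sha256(b"correct horse battery staple", 3)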

Make sense?

About the author

Professional hacker & security engineer. Currently at Google, opinions all my own. On Twitter as @zaeyx. Skydiver, snowboarder, writer, weightlifter, runner, energetic to the point of being a bit crazy.

Comments

  1. Okay, I assume you are familiar with side channel attacks? Timing attacks specifically. Your reactive hashing has a big timing leak.

    A passive attacker sniffing the network can measure the timing between user sending a password, and the server responding back. If the server uses a slow hash, it will take more time, and the attacker may notice this and deduce that the user used a weak password. Since slow hashes typically take several hundred milliseconds, one round trip may be enough to make a good guess. Then, when the attacker steals the database, they will be able to concentrate on the weak passwords.

    If you used a slow hash for everyone, the attacker would have to waste resources trying every single user before they stumble on a password weak enough to be cracked. (Assuming the database salts the hashes of course, but that’s a given.)

    One more thing: you can’t change C or L. Those are under the control of the user. You can nudge them in the right direction with password rules and stern warnings, but some will still go out of their way to reduce the entropy of their password, so it is easier to remember.

    1. You are correct that a naive implementation would have a timing attack which may give a relative value for how strong the hash was. But this is very easy to fix by simply throttling responses! Perhaps I should mention this… thank you.

      I think that you can change C and L by use of password policies which analyze the plaintext to ensure they are strong.
