Date: Fri, 27 Dec 2013 03:27:59 +0400
From: Solar Designer <solar@...nwall.com>
To: crypt-dev@...ts.openwall.com
Subject: Re: Password Scrambling

Christian,

Thank you for your prompt response!  The brevity is OK.

On Tue, Dec 24, 2013 at 11:52:46AM +0100, Christian Forler wrote:
> In the meantime we have improved our design. The time-memory trade-off
> of our current unpublished version has T = O(G^4/S^3).
> 
> This means, having G^1.5 cores you need G^2.5 operations to
> compute a hash with G^(1/2) memory. For G=2^20 you need
> about one billion (2^30) cores and 2^50 operations to compute it with
> 2^10 memory.

This is very nice.  Is this something you're able to prove (that no
better TMTO exists)?
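As a quick back-of-the-envelope check of the quoted numbers (assuming the constants hidden by the O(.) notation are 1):

```python
# Sanity check of the quoted trade-off T = O(G^4 / S^3) for the
# example above: G = 2^20, memory S = G^(1/2) = 2^10.
# Assumption: O(.) constants taken to be 1.

G = 2 ** 20            # cost parameter
S = 2 ** 10            # memory actually spent, S = G^(1/2)

T = G ** 4 // S ** 3   # operations implied by the trade-off

assert S == int(G ** 0.5)
assert T == 2 ** 50    # matches the "2^50 operations" in the quote
print(f"T = 2^{T.bit_length() - 1}")   # -> T = 2^50
```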

> On 24.12.2013 01:49, Solar Designer wrote:
> > Not indexing by secret is very nice, but I have yet to see an O(N^2) KDF
> > that avoids indexing by secret.  With typical/expected uses of scrypt
> > (and bcrypt) the problem happens to be mitigated by the fact that the
> > salt is normally not known to an attacker unless the hash is also known.
> 
> I'm confused. %-/
> IMHO the hash and the salt are public knowledge, or at least should be
> treated as public knowledge.

In one of the relevant threat models (perhaps the most relevant one),
yes.  But not in all of them.

Let's talk about this one:

When the hash and the salt are known, offline attacks on them are
possible.  The remaining unknowns are the password and the internal
state at an earlier point in hash computation (before the final hash is
produced).  A timing attack may then be useful to determine this
internal state in order to speed up offline password cracking (allow for
early reject).  If we determine a few bits of internal state as of e.g.
1/10th of the computation of the hash of the correct password, we've
made offline password cracking almost 10 times faster (and we've
probably reduced the area-time cost of attack by more than 10x, since
the algorithm's memory usage is lower early on).
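A toy model (not scrypt itself) of why the speedup approaches 1/f when a few state bits as of fraction f of the computation leak: almost all wrong candidates get rejected after only f of the work.

```python
# Toy model of the early-reject speedup described above.  Assumptions:
# the attacker learned k bits of internal state as of a fraction f of
# the hash computation for the correct password, and a wrong password
# matches those k bits with probability 2^-k.

def expected_work(f, k, full_cost=1.0):
    """Average cost per candidate with early reject at fraction f."""
    p_survive = 2.0 ** -k                       # wrong guess passes the filter
    return f * full_cost + p_survive * (1 - f) * full_cost

f, k = 0.1, 8                                   # 1/10th of computation, 8 bits
speedup = 1.0 / expected_work(f, k)
print(f"speedup ~ {speedup:.1f}x")              # close to 1/f = 10x
```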

So I agree that there's some relevance of this attack.  A mitigating
factor (possibly insufficient) is that the attack will have to be
passive - on authentication attempts made with the correct password.
The attacker (usually) can't trigger such authentication attempts at
will (although there are exceptions to that).

Basically, hashing schemes with indexing by secret may be vulnerable to
local timing attacks (from another operating system account or VM on the
same machine) after the hashes have leaked.  Such timing attacks may be
used to assist in offline password cracking.

Thinking along these lines, we arrive at a new mitigation idea (I have
not seen it mentioned before[*]): start indexing by secret e.g. halfway
through computation of a hash.  Then the attack will allow for a 2x
speedup only (speaking in terms of time rather than area-time for now).
In fact, this is already the case in scrypt. :-)

[*] This mitigation idea is similar to the one I had mentioned here,
albeit in different context:
http://www.openwall.com/presentations/ZeroNights2012-New-In-Password-Hashing/mgp00016.html

In area-time terms, unfortunately, some internal state leaks from scrypt
right after SMix's first loop may result in O(N) attack cost, despite
the area-time cost of that loop (if fully implemented) being O(N^1.5).
That's because the attacker wouldn't need to store V_i if the second
loop is bypassed (if we get early-reject after the first few iterations
of the second loop).

So, yes, things are not great, and there's room for improvement here.
Simply introducing a third loop that would read V elements in a
non-secret order, to be run in between SMix's existing two loops, would
keep the attack cost at O(N^1.5).
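As a sketch (hypothetical code, not the actual scrypt data path - H() stands in for BlockMix, and "integerify" is reduced to a modulo on an integer state), the three-loop structure might look like:

```python
# Schematic sketch of SMix with the proposed extra pass.  Hypothetical:
# H() stands in for BlockMix, and the state is a plain integer.

def smix_with_public_pass(X, N, H):
    V = []
    for _ in range(N):        # loop 1: fill V[0..N-1] in sequential order
        V.append(X)
        X = H(X)
    for i in range(N):        # NEW loop: read V in a public, non-secret
        X = H(X ^ V[i])       # order, so all of V must be kept even if the
                              # secret-indexed loop is bypassed (early reject)
    for _ in range(N):        # loop 2: secret-dependent reads, as in scrypt
        j = X % N             # index derived from secret state ("integerify")
        X = H(X ^ V[j])
    return X

# Toy invocation with a stand-in hash over 32-bit integers:
toy_H = lambda x: (x * 2654435761 + 1) % 2 ** 32
state = smix_with_public_pass(12345, 8, toy_H)
```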

If we couldn't do better than O(N^1.5) without indexing by secret
anyway, this would possibly be the best solution.  But it sounds like
you managed to do better.

> > In either case (salt not known or salt+hash are both known), the
> > side-channel attack is almost irrelevant.  It can be used to infer which
> > user account a password is being tested against, but nothing about the
> > password and the hash.
> 
> We are currently working on a basic proof-of-concept implementation of
> a password sieve for scrypt. It is exploiting its cache-timing
> vulnerability. For you it might look irrelevant, but the history of
> side-channel attacks has shown that most of them can be useful in the
> long run. Just saying.

I admit I had made an overly bold statement above, thinking that
realistically someone with a copy of the hashes would work on them
solely offline (like it's been happening so far, despite e.g. bcrypt
theoretically allowing for similar timing attacks) and not bother
passively timing authentication attempts on the server (and possibly
having too few per account for the attack duration).  I would be
surprised to see someone mount that attack in the wild - not because it
is impossible (it can be demo'ed, and it's great that you're going to do
just that) - but because it is usually impractical, for a combination
of reasons and constraints.

If we can deal with the attack without compromising on other properties
of the hashing scheme, this is definitely worth considering.  It sounds
like you may have achieved just that. :-)

For example, I oppose secret-dependent branching for that reason
(including e.g. in a discussion around SunMD5 years ago) - we can
achieve our desirable properties without that, so let's not take the
risk of having this extra side-channel.

However, if we have to compromise on real-world relevant security
properties in order to mitigate an attack that is unlikely to ever be
used in the wild and with actual benefit to the attacker (or loss for
the defender), then I am not sure this is worth it.  I wouldn't
currently trade O(N^2) with indexing by secret for O(N^1.5) without, at
least not unconditionally (supporting this as an option may be OK).

> We claim that Catena is the first scheme that has built-in support for
> both server relief and client-independent updates.

This may well be true.

A possible improvement upon your server relief: allow for recovery from
a compromise of a sniffed hash by computing 2 (and so on) final hash
iterations on the server, and accordingly fewer on the client (the server
admin would bump this number when needed).  This is essentially S/Key.
Unfortunately, this reduces the memory-hard portion.
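A minimal sketch of that S/Key-style recovery, under stated assumptions: H() here is just SHA-256 standing in for the final (non-memory-hard) hash iteration, and the memory-hard work is assumed to have produced `secret` on the client beforehand.

```python
# Hedged sketch of the recovery idea: the client stops k iterations
# short of the stored value and the server finishes them; after a
# sniffed hash leaks, the admin bumps k, so the sniffed value no
# longer suffices.  H() is a placeholder for the final hash step.

import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(x: bytes, n: int) -> bytes:
    for _ in range(n):
        x = H(x)
    return x

TOTAL = 1000                      # total final-hash iterations
server_k = 2                      # bumped from 1 after a compromise

secret = b"memory-hard-output"    # stands for the memory-hard KDF output
stored = chain(secret, TOTAL)     # what the server keeps

client_sends = chain(secret, TOTAL - server_k)
assert chain(client_sends, server_k) == stored   # server-side check
```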

> > Does Catena drop a previously-available part of the salt+hash for the
> > memory cost upgrades?  Does it achieve the memory cost increase "for the
> > already-computed iterations", or maybe only for added iterations?
> 
> Only for the added ones. But computing the additional iteration takes
> about double the effort (memory and time) of computing all the
> other iterations together. The cost per
> iteration doubles. To compute the i-th round you need O(2^i) memory and
> O(2^i) time.

Great, but this is implementable on top of e.g. scrypt, with similar
properties, except that the original password and salt are required due
to how scrypt is defined at high level (the PBKDF2 steps).  Right?
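The doubling claim quoted above is easy to verify numerically: if round i costs ~2^i, the newest round alone costs roughly as much as all previous rounds combined.

```python
# Quick check of the quoted claim: with round i costing ~2^i (memory
# and time), the newest round costs as much as rounds 0..i-1 together
# plus one, so each client-independent update about doubles total cost.

for i in range(2, 11):
    newest = 2 ** i
    previous_total = sum(2 ** j for j in range(i))   # = 2^i - 1
    assert newest == previous_total + 1
print("round i alone ~ cost of rounds 0..i-1 combined")
```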

> There is a gap between specified and possible.

I agree.

> > Finally, the paper appears to use the word stretching to refer to both
> > iterations and throwing salt bits away.  IIRC, the Kelsey et al. paper
> > (reference [14] in the Catena paper) established the term stretching to
> > refer only to the iterations, and another term - strengthening - to
> > refer to dropping of salt bits.  
> 
> "Key stretching is about adding a specific number of bits to a keysearch
> or bruteforce attack of some kind." ([14], Chapter 2)
> 
> Iterations is only one way to implement key stretching.

OK, I may have recalled incorrectly (I did not re-check now).  I was
under impression that after that paper some folks started using the
terms stretching vs. strengthening to refer to these different approaches
(and stretching won).

> > Also, I previously saw the term pepper as referring to a possible
> > site-specific local parameter (site salt), whereas the Catena paper
> > uses it differently.  It could be preferable to stick to the way
> > these terms had been used historically.
> 
> "As advertized, our scheme is very simple. We deploy two salts, one
> public and one secret. The public salt is exactly the same as the
> current one." ([19], Chapter 3)
> 
> In our paper, we denote "pepper" as secret salt. If you really mind we
> can replace pepper by "secret salt".

No, "pepper" is fine.  Your use is not even all that different.

I use the term "local parameter" for the site-specific salt, so there's
no confusion here.

> We are eager to see all the other designs. We hope that there are tons
> of cool ideas.

Yeah.  I am even wondering if we maybe should propose a change to PHC
timeline, introducing an extra round of submissions so that there's a
chance for all to reuse ideas from others' submissions (with due
credit).  "Tweaks" are going to be allowed, but in some cases this could
be more than mere tweaks.

> I'm currently sick at home. :-(

Get well soon!

Thanks again,

Alexander
