Message-ID: <20140128004637.GB31542@openwall.com>
Date: Tue, 28 Jan 2014 04:46:37 +0400
From: Solar Designer <solar@...nwall.com>
To: john-dev@...ts.openwall.com
Subject: Re: Handling of hashes with different iteration counts

Frank, magnum -

On Tue, Jan 28, 2014 at 01:10:58AM +0100, Frank Dittrich wrote:
> Any suggestions what to change?

Iteration count is just a special case of tunable cost.  We already have
scrypt, which accepts three tunable cost parameters: N, r, p.  Maybe the
format should export these as up to two - t_cost and m_cost (as they're
called in PHC) - even if the underlying hash/KDF is more flexible.  For
the scrypt example, as long as our implementation doesn't use TMTO and
doesn't use scrypt's internal parallelism (since we've got plenty due to
multiple candidate passwords), the format may export N*r as m_cost and
N*r*p as t_cost.  And we'd need command-line options, similar to --salts,
that would let us choose subsets of hashes based on ranges of these
values (in fact, combined with --salts, this would give up to three
ranges that the chosen hashes must or must not fall within).
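
To make the scrypt example concrete, the two reporting functions could be
something like this (just a sketch; the salt struct and the names here
are made up, nothing like this is in the tree yet):

#include <stdint.h>

/* Sketch only: assumes a salt struct that carries scrypt's N, r, p. */
struct scrypt_salt {
        uint64_t N;
        uint32_t r, p;
        /* ... the actual salt bytes would follow ... */
};

/* Time cost: total work is proportional to N * r * p */
static unsigned long long scrypt_t_cost(void *salt)
{
        struct scrypt_salt *s = salt;
        return (unsigned long long)s->N * s->r * s->p;
}

/* Memory cost: proportional to N * r, given no TMTO and no use of
   scrypt's internal parallelism on our side */
static unsigned long long scrypt_m_cost(void *salt)
{
        struct scrypt_salt *s = salt;
        return (unsigned long long)s->N * s->r;
}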

Your array of functions idea could be good, but I think we'll need
exactly two, so maybe just make it two functions, t_cost() and m_cost().
Many future password hashing schemes are likely to have more than 2
tunable parameters (and more than scrypt's 3, too), but for the purpose
of choosing which hashes to focus attacks on, we may translate those
many parameters into t_cost and m_cost for _our_ attacks.
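
In formats.h terms, that'd mean something like the following (the names
and prototypes are only a sketch, of course):

/* Sketch of possible additions to struct fmt_methods in formats.h;
   everything else stays as it is now. */
struct fmt_methods {
        /* ... existing methods: init, valid, split, binary, salt, ... */

        /* Report a hash's tunable costs, in fixed units, given its salt */
        unsigned long long (*t_cost)(void *salt);
        unsigned long long (*m_cost)(void *salt);
};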

t_cost shouldn't be in real time units, but rather in some fixed units -
e.g., for bcrypt it will be exactly the cost parameter specified with
hashes.  I think it's OK to keep it as the base-2 logarithm of the actual
cost, since that's how bcrypt is defined.  Or do we prefer to make it
linear, so that regardless of hash type, doubling the acceptable t_cost
should roughly halve the expected c/s rate?  That would make sense to me,
too.  (Of course, the actual effect on the c/s rate will vary depending
on how t_cost is distributed across the available hashes.)
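
Either way, the bcrypt mapping is trivial - the cost stored in
"$2a$10$..." is the base-2 logarithm, so a linear t_cost is just a shift:

/* Mapping bcrypt's logarithmic cost to a linear t_cost: a cost of 10 in
   "$2a$10$..." means 2^10 iterations of the expensive setup loop. */
static unsigned long long bcrypt_t_cost_linear(unsigned int log2_cost)
{
        return 1ULL << log2_cost;       /* e.g., 10 -> 1024 */
}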

On the other hand, it'd be nice if our m_cost were in bytes or KiB and
did not include build-specific memory allocation overhead.  That way,
it'd be convenient to choose only hashes that fit in RAM, yet the same
command line would result in consistent behavior across systems.
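
For scrypt, the dominant allocation (the V array) is 128 * N * r bytes,
so reporting m_cost in KiB could look like this (sketch, name made up):

#include <stdint.h>

/* Report scrypt's memory cost in KiB, excluding build-specific
   allocation overhead; the V array is 128 * N * r bytes. */
static unsigned long long scrypt_m_cost_kib(uint64_t N, uint32_t r)
{
        return 128ULL * N * r / 1024;
}
/* e.g., N = 16384, r = 8  ->  16384 KiB (16 MiB) */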

We could also have a john.conf setting specifying the maximum memory
size that JtR is permitted to allocate, and its implementation could use
the formats' exported m_cost to detect would-be-excessive memory usage
without actually reaching the threshold.  (I briefly mentioned this idea
in a discussion with Alexander Cherepanov before.)
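
Just to illustrate what I have in mind - the parameter name and the
cfg_get_int() usage below are my assumptions, not something that exists:

/* Hypothetical: a new john.conf parameter, say MaxMemoryKiB in the
   [Options] section, checked against a format's exported m_cost (in
   KiB) before we commit to a given hash. */
#include "config.h"     /* cfg_get_int() */
#include "formats.h"    /* struct fmt_main */

static int fits_memory_limit(struct fmt_main *format, void *salt)
{
        int limit = cfg_get_int("Options", NULL, "MaxMemoryKiB");

        if (limit < 0)  /* parameter not set: no limit */
                return 1;

        return format->methods.m_cost(salt) <= (unsigned long long)limit;
}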

https://password-hashing.net/call.html

"The t_cost and m_cost arguments are intended to parameterize time and
memory usage, respectively"

BTW, there are interesting discussions going on here:

http://news.gmane.org/gmane.comp.security.phc

Alexander
