Date: Tue, 15 Sep 2015 20:33:23 +0300
From: Solar Designer <>
Subject: Re: Judy array

On Tue, Sep 15, 2015 at 08:28:18AM -0700, Fred Wang wrote:
> This is *a* test.  It is "contrived" only so far as I chose a repeatable, fixed set of files.  The data was real, however, and is very representative of the types of files I process.

I don't know what magnum meant, but I agreed it's "contrived" in the
sense that for practical purposes a person wouldn't be cracking those
same 2M easy passwords out of the 29M over and over, and they wouldn't
care much whether the first time they do it takes them 1 minute or 10
minutes (so they could as well do it without --fork, if they're on a
low-RAM machine, and only use --fork once they're past the easiest
passwords).

However, this test did also reveal other issues, which are also relevant
for longer runs.

> To load 26M (uncontrived, actual number) of bcrypt hashes into John takes, literally, many hours.  Not to crack, just to load.  Using my approach on bcrypt, I solved approximately 60,000 bcrypts in the time it took for John to load the file.

That's a different matter, albeit a related one in the sense that it's
also JtR's old defaults and lack of runtime heuristics.

It's probably the SALT_HASH_LOG issue that magnum and Frank mentioned.
You might want to try bumping SALT_HASH_LOG up, a lot.
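For reference, SALT_HASH_LOG is a compile-time constant in JtR's
src/params.h; the sketch below is illustrative (the default and the
best replacement value may differ in your tree), but it shows the kind
of change meant and why it matters for loading time:

```c
/*
 * Sketch of the relevant part of src/params.h (values are
 * illustrative, not necessarily the defaults in your tree).
 * The loader looks each incoming hash's salt up in a chained
 * hash table of SALT_HASH_SIZE buckets; with tens of millions
 * of mostly-unique bcrypt salts and a small table, the chains
 * grow long and loading degrades toward O(n^2) comparisons.
 */
#define SALT_HASH_LOG   20                    /* much larger than the old default */
#define SALT_HASH_SIZE  (1 << SALT_HASH_LOG)  /* number of buckets */
```

Each extra bit doubles the table, so the memory cost of a much larger
table is modest next to the 26M loaded hashes themselves.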

> Improving the handling of these common cases will absolutely improve the performance of John, for virtually every hash type.

I agree.
