Message-ID: <BLU0-SMTP155FF2D6556CA9AAD9B886CFDE00@phx.gbl>
Date: Fri, 22 Nov 2013 16:23:25 +0100
From: Frank Dittrich <frank_dittrich@...mail.com>
To: john-users@...ts.openwall.com
Subject: Re: Questions and suggestions to build a home cracking box. :)

On 11/22/2013 02:55 PM, Richard Miles wrote:

> On Wed, Nov 20, 2013 at 8:31 AM, Rich Rumble <richrumble@...il.com> wrote:
>> Yes HC is ahead of JtR in this regard, and they don't want to share
>> their efforts with us or anyone for that matter :(

As magnum pointed out, this speed difference is not an issue for slow
hashes, but only for fast hashes.
It's not that the JtR developers are unaware of where the bottleneck
is in this case: it's the data transfer bandwidth between host and
GPU. (For slow hashes, this is not an issue, because the time to
compute the hashes is much higher than the data transfer time anyway,
so data transfer speed doesn't really matter.)

Addressing this problem would require some changes in JtR's architecture.
Instead of generating all password candidates on the CPU and just
computing the hashes on the GPU, candidate generation has to be done
(at least in part) on the GPU, to reduce the amount of data to be
transferred.
Even with such an architectural change, how much it really helps
depends on the attack type.
Incremental mode, Markov mode, and wordlist mode with a small number
of words and a large number of rules could really benefit from such a
change.
Single mode and wordlist mode with no rules will hardly benefit from
it, if at all.
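
To put rough, purely illustrative numbers on that: 10,000 base words
combined with 5,000 rules gives 50 million candidates. If the rules
were applied on the GPU, only the 10,000 words (well under a megabyte)
would have to cross the bus for those 50 million candidates. With no
rules, every candidate is also the data that has to be transferred, so
moving generation to the GPU gains nothing.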

To mitigate this bottleneck, there is work in progress to implement a
mask mode, so that you just transfer the base words to the GPU and the
GPU generates password candidates based on these words and the mask.
Depending on your mask (and the number of password candidates
generated per word), data transfer will no longer be an issue, even
for fast hash algorithms.
It should be possible to combine mask mode with other cracking modes.
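
To make the expansion factor concrete, here is a minimal host-side
sketch in plain C. This is not JtR's actual code (the real work would
happen inside the GPU kernels); it only illustrates what a simple
"?d?d?d"-style suffix mask does with a single transferred base word:

#include <stdio.h>

/* Illustrative only: expand one base word with an n-digit suffix, the
 * way a ?d?d?d mask would, to show how many candidates that single
 * transferred word produces. */
static void expand_with_digit_mask(const char *base, int digits)
{
    char candidate[64];
    int count = 1;
    int i, n;

    for (i = 0; i < digits; i++)
        count *= 10;    /* 10 possibilities per ?d position */

    for (n = 0; n < count; n++) {
        /* in the real design this would happen on the GPU, right
         * before hashing; printing is just for demonstration */
        snprintf(candidate, sizeof(candidate), "%s%0*d", base, digits, n);
        puts(candidate);
    }
}

int main(void)
{
    /* one base word crosses the bus; 1000 candidates come out of it */
    expand_with_digit_mask("password", 3);
    return 0;
}

The point is the ratio: one short string across the bus turns into a
thousand (or, with a longer mask, millions of) candidates that are
generated and hashed entirely on the GPU.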

That said, I can only repeat what Brad already mentioned in an earlier
mail: You are probably overrating the importance of speed.

Unless you want to crack extremely hard passwords hashed with an
extremely fast hash algorithm, where you are more or less willing to
exhaust the complete key space, speed doesn't matter that much.
Trying the most likely password candidates first usually gives much
better results than just using brute force.

1. Usually, you crack most of the passwords with very little effort.
The more time you spend, the fewer easy-to-crack passwords are left,
the harder it gets, and the success rate will decrease dramatically.
So, even a 100 times faster speed might just translate into less than
one percent more cracked passwords, compared with the slower speed.

2. To be much faster than CPUs, GPUs have to compute a *much* higher
number of passwords in parallel.
This means it is much harder to run smart, focused attacks on a GPU.
Some attacks (single mode, wordlist mode with small word lists and no
rules or just a few rules) don't work well on GPU, because the total
number of password candidates for these attacks is much smaller than
the number that would have to be tried in parallel to keep the GPU busy.
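
As a hypothetical, order-of-magnitude illustration: a current GPU may
need something like a million candidates in flight per batch before it
gets close to its peak hash rate, while single mode against a handful
of accounts often produces only a tiny fraction of that per batch,
leaving most of the GPU idle.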


Frank
