Message-ID: <20120708093605.GB29763@openwall.com>
Date: Sun, 8 Jul 2012 13:36:05 +0400
From: Solar Designer <solar@...nwall.com>
To: john-dev@...ts.openwall.com
Subject: Re: optimized mscash2-opencl

On Sat, Jul 07, 2012 at 08:37:39PM +0200, Frank Dittrich wrote:
> On 07/07/2012 11:48 AM, Solar Designer wrote:
> > On Sat, Jul 07, 2012 at 01:31:06PM +0400, Solar Designer wrote:
> >> Actual run:
> >>
> >> $ ./john -i=alpha ~/john/contest-2011/hashes-all.txt-1.mscash2 -fo=mscash2-opencl -pla=1
> > [...]
> > 
> > I let this run until:
> > 
> > guesses: 67  time: 0:00:28:24 0.00%  c/s: 102413  trying: bara - choedia
> > Use the "--show" option to display all of the cracked passwords reliably
> > Session aborted
> > 
> > Somehow john.pot and john.log were not being updated during the run,
> > even though they should have been updated every 10 minutes. 
> 
> I didn't check the code. Could this be related to the fact that john was
> working in the same bara - choedia MAX_KEYS_PER_CRYPT block of passwords?

It could, but I think that would be a bug.

With the current design, this should in fact prevent john.rec from being
updated "for real" (with more than just the new running time and guess
count), but it shouldn't prevent proper updates to john.pot and john.log.

> Will john restart processing the bara - choedia block for all hashes, or
> does the john.rec file really store information about how many
> salts/hashes have already been processed for this particular block?

The former.

> I always thought this kind of information is not stored in a .rec file.
> In that case, john would have to do almost all the work already done so
> far again if you interrupt the session while it didn't finish the first
> MAX_KEYS_PER_CRYPT block. (The only speedup would be due to the reduced
> number of different salts)

Exactly.

This was not designed for large KPC (keys per crypt) values with slow
hashes.  We may need to solve this in some way.  Here are some ideas:

1. Do record info on salts tried / not yet tried for the current keys
block.  This is tricky to implement.

2. Have the KPC depend on salt count - that is, reduce it when the hash
is slow and we have a lot of salts loaded, so that we keep one block's
duration sane (e.g., no worse than the "Save" interval, which is 10
minutes by default).  Issue: with a very large number of salts, KPC may
become so low that the speed we're getting on GPU is worse than the
CPU's.  So we'd need to impose a lowest allowed KPC as well, and
disregard the original issue (of not advancing to the next block soon
enough) in those cases.

3. Invoke GPU kernels on multiple salts at once, thereby letting them
process fewer keys for the same amount of parallelism per invocation.
This is tricky to implement and it complicates program structure.

4. Similar to the above, but rather than have John explicitly make such
calls into a revised formats interface, have some formats within the
existing interface use some hacks.  A format implementation could cache
salts on GPU side (like what myrice is implementing for another reason)
and then process the keys (fewer of them) not only with the current
salt, but also with previously cached salts.  Further calls, for those
other salts, would then return/use already-computed results.  This is
also tricky to implement, and it is hackish.

I don't particularly like any of these options. :-(  That said, we do
need formats to support a variety of KPC settings (not only a
compile-time constant) for other reasons as well - such as the single
crack mode memory consumption issue, and running with small wordlists
(which may have fewer entries than e.g. MSCash2's 256K KPC).

Alexander
