Date: Fri, 26 Mar 2010 07:22:31 +0300
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: Re: bench.c fix for extreme c/s figures

On Thu, Mar 25, 2010 at 10:58:03PM +0100, Magnum, P.I. wrote:
> Running under MPI, I experienced c/s rollover even from just a couple of 
> seconds of NT benchmarking.

Did you exceed 4G hash computations in just a few seconds?  I think
you'd need a hundred nodes to achieve that currently - did you really
have that many?

> The problem was only when benchmarking, not 
> real cracking. I identified it as due to the sloppy conversion in this 
> line in benchmark_cps() in bench.c:
> 
>     tmp.lo = count; tmp.hi = 0;

It was assumed that "count" could be 32-bit anyway (on some builds it
is) and that it would be relatively small (which it is on single CPU
core runs so far).  However, I was using a 64-bit integer variable for
some further computation involving multiplication by clk_tck.  Then it
is once again assumed that the c/s rate for a cipher fits in 32 bits.

> Enclosed is the dilettante take I made to make it better (along with some 
> more scaling, to mega and giga c/s) which was included in fullmpi-5. 
> This should apply more or less cleanly to standard john too.

I am avoiding "long long" in official JtR code so far, so I'd need to
implement this differently.  Also, div64by32lo() returns a 32-bit value
anyway.

BTW, I am considering changing math.[ch] to implement 128-bit or maybe
even bigger integers.

> This fix 
> works just fine on my gear, but W.A's OSX system (using 64-bit longs and 
> presumably the same for ARCH_WORD) now reportedly produces figures of 
> "4294M c/s" for both real and virtual and for all and any formats, 
> whether fast or not and even when running on just one node.
> 
> I'm willing to admit I am the worst coder on earth but while I recognize 
> that figure, I don't really see what went wrong?

It looks like div64by32lo() detects overflow of its 32-bit return value,
which happens when tmp.hi is higher than time.  I am puzzled as to why
this would be happening on W.A.'s system in all cases, although I
haven't reviewed your patch other than bench.c.patch.

Alexander
