Message-ID: <4BADF62A.6060102@bredband.net>
Date: Sat, 27 Mar 2010 13:12:26 +0100
From: "Magnum, P.I." <rawsmooth@...dband.net>
To: john-users@...ts.openwall.com
Subject: Re: bench.c fix for extreme c/s figures

On 03/26/2010 05:22 AM, Solar Designer wrote:
> On Thu, Mar 25, 2010 at 10:58:03PM +0100, Magnum, P.I. wrote:
>> Running under MPI, I experienced c/s rollover even from just a couple of
>> seconds of NT benchmarking.
>
> Did you exceed 4G hash computations in just a few seconds?  I think
> you'd need to have a hundred nodes to achieve that currently - did
> you really have this many?

I overbooked just to test it, and used the virtual figures. IIRC it 
happened when running for 5 seconds on 80 processes, while 3 seconds stayed 
just below the overflow. I did the test because I thought it would be 
reasonable for the mpi patches to support lots of cores. For that matter, 
I suppose running for one minute on 8 cores would overflow just the same.
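
Back-of-the-envelope, with a made-up per-process rate of roughly 12M c/s 
for NT (not a measured figure, just to show the scale against the 32-bit 
limit of 2^32 ~ 4.29e9 crypts):

  ~12M c/s * 80 processes * 5 s  ~ 4.8e9  >  2^32   (rolls over)
  ~12M c/s * 80 processes * 3 s  ~ 2.9e9  <  2^32   (still fits)
  ~ 9M c/s *  8 cores     * 60 s ~ 4.3e9  ~  2^32   (right at the edge)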

> I am avoiding "long long" in official JtR code so far, so I'd need to
> implement this differently.  Also, div64by32lo() returns a 32-bit value
> anyway.

The Jumbo patches use long long anyway, so I have now fixed bench.c without 
the half-assed attempt to keep using int64. I just uploaded fullmpi-6 to 
the wiki. It works fine on 32- and 64-bit Linux; I hope it works for W.A too.
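
For reference, this is roughly the shape of such a fix as a minimal 
standalone sketch - not the actual fullmpi-6 / bench.c code, and the 
variable names and rate figure are made up for illustration. The point is 
just that accumulating the crypt count in an unsigned long long keeps the 
aggregate clear of the 32-bit limit, so the division down to c/s happens 
on the full 64-bit value:

  #include <stdio.h>

  int main(void)
  {
          /* Made-up figures, for illustration only */
          unsigned long long rate = 12000000ULL;  /* ~12M c/s per process */
          unsigned long long procs = 80, secs = 5;

          unsigned long long total = rate * procs * secs;

          /* A 32-bit accumulator silently wraps past 4294967295 ... */
          unsigned int wrapped = (unsigned int)total;

          /* ... while a 64-bit one keeps the real count and c/s */
          printf("32-bit count: %u (rolled over)\n", wrapped);
          printf("64-bit count: %llu, c/s: %llu\n", total, total / secs);
          return 0;
  }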

thanks
magnum
