Message-ID: <4255c2571003100931ud5f34dehf2ec55f2052a262b@mail.gmail.com>
Date: Wed, 10 Mar 2010 10:31:54 -0700
From: RB <aoz.syn@...il.com>
To: john-users@...ts.openwall.com
Subject: Re: Is JTR MPIrun can be optimized for more cores ?

2010/3/10 Solar Designer <solar@...nwall.com>:
>> but is also a clear indicator of the lack of communication between the
>> individual processes.
>
> Is it?  Assuming that only the "incremental" mode was in use, any
> communication between the processes wouldn't make much of a difference.
...
> negative, because removing the hashes has its "cost" too).  W.A.
> mentioned that the total number of hashes loaded was 2 million, so
> removing 8 thousand would really not make a difference.

Good point - it's still a variable, but not as significant as I'd
initially thought.
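(The arithmetic behind Solar Designer's point is quick to check: the numbers below just restate the 8 thousand and 2 million figures from the quoted message.)

```python
# Removing 8,000 cracked hashes from a 2,000,000-hash load shrinks
# the comparison set by only a fraction of a percent:
loaded = 2_000_000
removed = 8_000
print(f"{removed / loaded:.2%}")  # -> 0.40%
```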

>> Please be aware that the MPI patch by itself induces (as I recall)
>> 10-15% overhead in a single-core run.
>
> Huh?  For "incremental" mode, it should have no measurable overhead.

That's what my experimentation showed.  Whether it's due to MPI
initialization or something else, the difference between the patched
and unpatched builds was statistically significant on my Phenom X4.
I'll repeat the tests to get more precise numbers, but that overhead
is why I made the patch optional.
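(For what it's worth, a repeat of those tests could be checked for significance along these lines. This is only a sketch: the c/s figures below are made-up placeholders, not my actual measurements, and it uses Welch's t statistic computed by hand from the stdlib rather than any particular stats package.)

```python
# Sketch: compare repeated benchmark runs (e.g. `john --test` c/s rates)
# of unpatched vs. MPI-patched builds using Welch's t statistic.
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical c/s rates from five runs of each build:
unpatched = [1000, 1010, 1005, 998, 1002]
patched = [880, 895, 885, 890, 878]

t = welch_t(unpatched, patched)
print(f"Welch t = {t:.1f}")  # a large |t| suggests a real difference
```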
