Date: Tue, 11 Sep 2012 10:26:08 +0200
From: Thireus (Thireus.com) <contact@...reus.com>
To: john-users@...ts.openwall.com
Cc: magnum <john.magnum@...hmail.com>,
 newangels newangels <contact.newangels@...il.com>
Subject: Re: MPI with incremental mode working fine?

Thank you very much magnum for trying to help me. :)

As I said in my previous emails, I have this issue under john-1.7.9-jumbo-6 and john-1.7.9-jumbo-5; I have not tested other releases. I'm running Mac OS X.

I understand your point regarding mpiexec and restoring sessions with a different number of threads. That's something I also noticed a while ago and explained in my article "John the Ripper – Steak and French Fries With Salt and Pepper Sauce for Hungry Password Crackers": Warning: once the number of cores has been fixed for a session, don't change it unless you know what you are doing, because it can certainly break your work.
So I'm aware of it. And here I have never changed the number of threads.

We've got an interesting reply from Donovan here. It seems this issue could be related to gcc, for OS X users only :-/. That's the compiler I'm using to build john:

thireus$ gcc -v
Using built-in specs.
Target: i686-apple-darwin11
Configured with: /private/var/tmp/llvmgcc42/llvmgcc42-2336.11~28/src/configure --disable-checking --enable-werror --prefix=/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2 --mandir=/share/man --enable-languages=c,objc,c++,obj-c++ --program-prefix=llvm- --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --with-slibdir=/usr/lib --build=i686-apple-darwin11 --enable-llvm=/private/var/tmp/llvmgcc42/llvmgcc42-2336.11~28/dst-llvmCore/Developer/usr/local --program-prefix=i686-apple-darwin11- --host=x86_64-apple-darwin11 --target=i686-apple-darwin11 --with-gxx-include-dir=/usr/include/c++/4.2.1
Thread model: posix
gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)

Also, I cannot use OpenMP because it is not implemented for the custom algorithm I'm working with.

Thireus (contact@...reus.com), 
IT Security and Telecommunication Engineering Student at ENSEIRB-MATMECA & Master 2 CSI University of Bordeaux 1 (Bordeaux, France).
http://blog.thireus.com

On 11 Sep 2012, at 09:07, newangels newangels wrote:

> Hi,
> 
> I am also running John on Mac OS X (1.7.4) and already reported (last
> year, I think) that duplicates issue when using MPI, which is why I
> switched to OpenMP. You can test the E. Winkler build, which is compiled
> with the latest GCC (faster than the Apple GCC included in the Xcode
> package), or compile it yourself if you install the latest GCC
> version (not from Apple).
> 
> Regards,
> 
> Donovan

On 11 Sep 2012, at 08:02, magnum wrote:

> On 11 Sep, 2012, at 2:29 , Thireus (Thireus.com) <contact@...reus.com> wrote:
> 
>> Indeed, I know about the 10 minutes and unsaved sessions. But the problem is that duplicates show up at any time; even after 24 hours of cracking there are still duplicates. And what is strange is that there's never more than 1 duplicate... I suppose using a lower timeout to save sessions will not help :-/ because it will just hide the problem, which is that at least two threads out of 8 are hashing the same generated passwords.
> 
> Very odd. What version do you use? 1.7.9-jumbo-6 or something newer from git? If the latter, what branch? Can you reproduce the problem in a new session?
> 
>> Do you know exactly how the password space is divided among the threads? I would like to know whether this distribution is made when the session is restored (once and for all) or whether it is redone once one of the threads has completed its work (redistributed). I mean, is there one big buffer of passwords divided into 8 parts, with each process taking one part, or are there in fact 8 buffers (FIFOs) filled by john?
> 
> It's not really divided in the sense that you could, e.g., get rounding errors. It's more of a leap-frog thing. Incremental mode is fairly incomprehensible (to me), but the way we do it is known not to produce duplicates - though it has the drawback that the distribution is imperfect. Node 0 will produce somewhat better candidates than node 7. That problem gets worse the more nodes you run.
> 
> One way I could think of that would cause problems is if you start a session using mpiexec -np X and later resume it using mpiexec -np Y; i.e., you cannot change the number of nodes for a started session. Raising it would make john bail out, but lowering it would produce both misses and dupes, I think. But you would easily spot that with "ls -l john.*.rec" or by inspecting the log file.
> 
> magnum
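[Editor's note: magnum's leap-frog description above can be sketched as follows. This is a minimal illustration in Python, not JtR's actual implementation; the candidate stream, node count, and function names are made up for the example.]

```python
# Hypothetical sketch of leap-frog candidate distribution: with N nodes,
# node n (0-based) takes every N-th candidate from the shared stream.
# The slices are disjoint by construction, so no two nodes can ever try
# the same candidate -- but node 0 gets the earliest (best-ranked)
# candidates, node N-1 the latest, so the split is uneven in quality.

def leapfrog(node, total_nodes, stream):
    """Candidates assigned to `node` out of `total_nodes` nodes."""
    return [c for i, c in enumerate(stream) if i % total_nodes == node]

# Stand-in candidate stream (in JtR this would come from incremental mode).
candidates = [f"pw{i:03d}" for i in range(24)]

slices = [leapfrog(n, 8, candidates) for n in range(8)]
covered = [c for s in slices for c in s]

assert sorted(covered) == sorted(candidates)   # full coverage, no misses
assert len(covered) == len(set(covered))       # no duplicates across nodes
```

Under this scheme a duplicate between two correctly configured nodes is impossible, which is why magnum points at a mismatched node count on resume (or a build problem) rather than the distribution itself.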

