Message-ID: <4B9B511F.9030104@bredband.net>
Date: Sat, 13 Mar 2010 09:47:27 +0100
From: "Magnum, P.I." <rawsmooth@...dband.net>
To: john-users@...ts.openwall.com
Subject: Re: problem with initializing mpirun extended

I wrote:
> websiteaccess@...il.com wrote:
>> Hi
>>
>>  I use a freshly compiled JTR 1.7.5 mpi-10 extended, with jumbo patch 
>> 2. I built my own rules (thousands of rules).
> 
> 
> The mpi10 patch only supports incremental mode.

Um, sorry. By "mpi-10 extended", did you mean the fullMPI-3 patch? To 
avoid confusion, let us call the "old" incremental-only patch "mpi10" 
and my recent patches the "fullmpi" ones. My current patch should be 
called "fullmpi-3".

>> 0 0:00:00:00 - loading wordfile 
>> baseavecreglemultilanguesESSENTIEL_UTF8.txt into memory (4156374 bytes, 
>> max_size=5000000)
>> 0 0:00:00:00 - wordfile had 536867 lines and required 2147468 bytes for 
>> index.
>> 0 0:00:00:00 - 748892 preprocessed word mangling rules
>> 
>>  After 10 hours, JTR is still working.....
>> 
>>  0 0:10:35:50 - Rule #465576: '>2 <9 :: Az,010449,' accepted as 
>> '>2<9Az,010449,'
>>  0 0:10:35:50 Session aborted
>> 
>>  Before, when I used JTR without MPI-10, I did not have this problem 
>> (same dictionary, same rules). It took only a few seconds (generally 
>> instant) before cracking began.
>> 
>>  PS: each time I run JTR MPI-10 (-w:dict) I have to wait :(

I'm not sure I follow. After 10 hours, it has processed almost half a 
million rules. What do you mean by "before beginning cracking"?

There is one thing, though; maybe it should be stated clearly in the 
docs. The MPI patch seems to make the buffering of logging more 
noticeable. If you start a job in the background and then check the 
logs, they may very well appear to stop at the "748892 preprocessed 
word mangling rules" line you mention for a very long time. But if you 
issue a kill -HUP to that process, it will flush its log buffer and 
you'll see it is in fact working.

I usually do it like this:

$ skill -HUP -c john ; sleep 3 ; mpiexec -np <nodes> ./john -status

...where the <nodes> figure must match the job's. After that, the log 
file buffers will be flushed too (if -HUP does not work, try -USR1).
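
For what it's worth, the mechanism is basically just stdio buffering. 
Here is a minimal standalone C sketch of the idea -- not the actual 
john/MPI patch code, and the log file name, message and loop are made 
up for illustration: the SIGHUP handler only sets a flag, and the main 
loop calls fflush() on the buffered log stream when it sees the flag.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t flush_requested = 0;

/* Only set a flag here; fflush() is not async-signal-safe. */
static void hup_handler(int sig)
{
    (void)sig;
    flush_requested = 1;
}

int main(void)
{
    struct sigaction sa;
    FILE *log_fp;

    sa.sa_handler = hup_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGHUP, &sa, NULL);

    log_fp = fopen("demo.log", "a");   /* hypothetical log file */
    if (!log_fp)
        return 1;

    for (;;) {
        /* Buffered write; may sit in the stdio buffer for a long time. */
        fprintf(log_fp, "Rule #12345 accepted\n");
        if (flush_requested) {
            fflush(log_fp);            /* kill -HUP makes buffered lines visible */
            flush_requested = 0;
        }
        sleep(1);
    }
}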

I have considered flushing more often, but I'm not sure how or when to 
do it without risking performance hits. The current behaviour is 
inherited from mpi10.
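
If flushing on every log line turns out to be too expensive, one 
obvious compromise (just an idea, not what mpi10 or my patches 
actually do; the helper name, interval and file are made up) would be 
to flush at most once per fixed interval:

#include <stdio.h>
#include <time.h>

#define FLUSH_INTERVAL 10            /* seconds; an arbitrary example value */

static time_t last_flush;

/* Write one log line, flushing at most once per FLUSH_INTERVAL seconds. */
static void log_line(FILE *fp, const char *line)
{
    time_t now;

    fputs(line, fp);
    fputc('\n', fp);

    now = time(NULL);
    if (now - last_flush >= FLUSH_INTERVAL) {
        fflush(fp);
        last_flush = now;
    }
}

int main(void)
{
    FILE *fp = fopen("demo.log", "a");   /* hypothetical log file */
    int i;

    if (!fp)
        return 1;
    for (i = 0; i < 100; i++)
        log_line(fp, "rule accepted");
    fclose(fp);
    return 0;
}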
