Message-ID: <8a321760ed23d318513b7665ca7d1ecb@smtp.hushmail.com>
Date: Tue, 11 Sep 2012 00:15:54 +0200
From: magnum <john.magnum@...hmail.com>
To: john-users@...ts.openwall.com
Subject: Re: MPI with incremental mode working fine?

On 2012-09-11 00:06, Solar Designer wrote:
> BTW, can we possibly introduce a check for running under mpiexec into
> non-MPI builds of jumbo?  Is there some env var or the like that could
> be checked without a dependency on MPI?  If so, with this change jumbo
> would complain when built without MPI, but run through mpiexec.

Not a bad idea. I'll check it out. Though normally all nodes but one
will fail to lock the single .rec file and complain loudly. When you
actually run across multiple hosts, it depends on your setup.
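
Something along these lines might do it (just a sketch, untested; it
assumes mpiexec exports OMPI_COMM_WORLD_SIZE (Open MPI) or PMI_SIZE
(MPICH/Hydra), and that HAVE_MPI is the macro our MPI builds define,
neither of which I have verified for every launcher):

  #include <stdio.h>
  #include <stdlib.h>

  /* Warn if a non-MPI build appears to have been started via mpiexec. */
  static void warn_if_under_mpiexec(void)
  {
  #ifndef HAVE_MPI
          if (getenv("OMPI_COMM_WORLD_SIZE") ||   /* Open MPI */
              getenv("PMI_SIZE"))                 /* MPICH / Hydra */
                  fprintf(stderr,
                      "Warning: this build of john has no MPI support, "
                      "but it looks like it was started under mpiexec.\n");
  #endif
  }

The check costs nothing for normal runs, so it could simply be called
once early in main().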

>> The output of "mpiexec -np 8 ../run/john -status" might give a clue. For
>> my test session above, it looks like this:
>>
>> $ mpiexec -np 8 ../run/john -status:test
> 
> Another way to get status from a running MPI session is "killall -HUP john".
> Your approach with --status works for stopped sessions as well, though.

Yes, but that only works when running MPI on a single multi-core host,
not when you actually run across several hosts. The canonical way to
accomplish the same is "killall -USR1 mpiexec". (A -USR1 will be relayed
to the johns and treated by them just like -HUP, while a -HUP to mpiexec
would cause it to die.)
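
To spell it out (run on the host where mpiexec itself is running; this
assumes a typical mpiexec that relays USR1 to its children and
terminates on HUP):

  $ killall -USR1 mpiexec   # relayed to all johns, including remote ones
  $ killall -HUP john       # only reaches johns on the local host
  $ killall -HUP mpiexec    # don't: mpiexec exits and the session dies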

magnum
