Message-ID: <20120910220607.GB20015@openwall.com>
Date: Tue, 11 Sep 2012 02:06:07 +0400
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Cc: "Thireus (thireus.com)" <contact@...reus.com>
Subject: Re: MPI with incremental mode working fine?

On Mon, Sep 10, 2012 at 11:40:13PM +0200, magnum wrote:
> The most common mistake people make is not actually running an MPI
> build when they think they do, but in that case all cracked passwords
> should be duplicated by all nodes, and that does not seem to be the
> case here. Unless *some* of your nodes are running a non-MPI binary
> (provided you actually run MPI across several different machines).

BTW, can we possibly introduce a check for running under mpiexec into
non-MPI builds of jumbo?  Is there some env var or the like that could
be checked without a dependency on MPI?  If so, with this change jumbo
would complain when built without MPI but run through mpiexec.
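
Just to illustrate the idea (a minimal sketch, not something that exists
in jumbo): common launchers are known to export environment variables to
the processes they start, so a non-MPI binary could look for those without
linking against MPI.  The variable names below are assumptions about
Open MPI (OMPI_*) and MPICH/Hydra-style PMI (PMI_*); other launchers may
use different ones.

/*
 * Sketch: warn if a non-MPI binary appears to have been started via
 * mpiexec/mpirun, judging by launcher-specific environment variables.
 */
#include <stdio.h>
#include <stdlib.h>

static int launched_under_mpi(void)
{
	static const char * const vars[] = {
		"OMPI_COMM_WORLD_SIZE",	/* Open MPI */
		"PMI_SIZE",		/* MPICH / Hydra */
		"PMI_RANK",
		NULL
	};
	int i;

	for (i = 0; vars[i]; i++)
		if (getenv(vars[i]))
			return 1;
	return 0;
}

int main(void)
{
	if (launched_under_mpi())
		fprintf(stderr, "Warning: started via an MPI launcher, "
		    "but this binary was built without MPI support\n");
	return 0;
}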

> The output of "mpiexec -np 8 ../run/john -status" might give a clue. For
> my test session above, it looks like this:
> 
> $ mpiexec -np 8 ../run/john -status:test

Another way to get status from a running MPI session is "killall -HUP john".
Your approach with --status works for stopped sessions as well, though.
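
For the archives, the generic pattern behind SIGHUP-triggered status
output looks roughly like this (a minimal sketch of the technique, not
John's actual implementation): the handler only sets a flag, and the main
loop prints status when it notices the flag.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Set from the signal handler; only async-signal-safe work is done there. */
static volatile sig_atomic_t status_requested;

static void handle_hup(int sig)
{
	(void)sig;
	status_requested = 1;
}

int main(void)
{
	unsigned long candidates = 0;

	signal(SIGHUP, handle_hup);

	for (;;) {	/* stand-in for the cracking loop */
		candidates++;

		if (status_requested) {
			status_requested = 0;
			fprintf(stderr, "Status: %lu candidates tried\n",
			    candidates);
			/* this is also a natural point to flush .pot/.log */
		}

		usleep(10000);
	}
}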

> BTW you could change "Save = 600" in john.conf to something less, I use 60.

Right.  BTW, "killall -HUP john" updates the .pot and .log files, too.

Alexander
