Message-ID: <7bc9339f076352488d6f930fd10975c6@smtp.hushmail.com>
Date: Fri, 3 May 2013 00:20:20 +0200
From: magnum <john.magnum@...hmail.com>
To: john-dev@...ts.openwall.com
Subject: Re: MPI vs. --fork

On 2 May, 2013, at 0:52 , magnum <john.magnum@...hmail.com> wrote:
> On 2 May, 2013, at 0:34 , Solar Designer <solar@...nwall.com> wrote:
>>> mpirun -np 4 ./john -node=5-8/12 ...
>> 
>> The "mpirun -np 4 ./john -node=5-8/12 ..." syntax implies that if you
>> omit mpirun in an MPI-enabled build and run with a --node option like
>> this, it will work differently from a non-MPI build - namely, it will
>> run like --node=5/12 instead of like --node=5-8/12.  I think this is bad.
> 
> Does it imply that? I don't think so, especially since I'm planning to rely on core code as far as possible. "./john -fork=4" and "mpirun -np 4 ./john" should practically do the same thing, except the latter may run on up to four different hosts.
> 
>> When --node is used with a range yet --fork is not specified, core and
>> bleeding (when MPI is not enabled) will now do the work of those
>> multiple nodes (within the range) inside one process.  This is useful
>> when the nodes are of non-equal speed - e.g., an OpenMP-enabled build
>> running on a quad-core may use 4 node numbers without --fork, and
>> another one running on a dual-core as part of the same distributed
>> attack may use 2 node numbers also without --fork, or either/both may
>> use these node numbers with --fork (the invocation is the same except
>> for the optional use of --fork).
> 
> I need to think it over and test it in reality. But again, I'm thinking that "mpirun -np 4" should behave exactly the same as "-fork=4" with regard to things like this.

I think my plan holds. Here's where I'm at now:

Both these will start 1-4/4:
  ./john -fork=4 ...
  mpirun -np 4 ./john ...

All these will start 5-8/12:
  ./john -fork=4 -node=5/12 ...
  ./john -fork=4 -node=5-8/12 ...
  mpirun -np 4 ./john -node=5/12 ...
  mpirun -np 4 ./john -node=5-8/12 ...

All these will refuse to run:
  ./john -node=2
  ./john -fork=4 -node=2
  mpirun -np 4 ./john -node=2

This will start node 7/12, MPI build or not:
  ./john -node=7/12 ...

This will start node 7/12 on a remote node:
  mpirun -host hostname -np 1 ./john -node=7/12 ...

This is rejected - you can't use -fork and mpirun [with -np > 1] at the same time:
  mpirun -np 4 ./john -fork=...

This is somewhat more advanced; it will start 1-4/4 on one remote node:
  mpirun -host hostname -np 1 ./john -fork=4 ...

There's no special code for the last example; it's just how it ends up. And I think it's logical.
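
For illustration, here's roughly the range expansion I have in mind (a minimal sketch with a hypothetical helper, not the actual bleeding code; options.node_min/node_max hold the values parsed from -node):

static void expand_node_range(int count)
{
        /* 'count' is options.fork or the MPI process count.  With no
           -node at all we act like -node=1-count/count, and a bare
           -node=N/T expands to N..N+count-1, just like -fork already
           does.  An explicit range is left alone and only validated. */
        if (!options.node_min) {
                options.node_min = 1;
                options.node_max = count;
        } else if (options.node_max == options.node_min)
                options.node_max = options.node_min + count - 1;
}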

I think this is the way to go, don't you? This behaviour means the least possible MPI code and the greatest possible similarity with the options.fork code. Actually, I tried piggy-backing on the options.fork variable for MPI with good results, but in the end I will add MPI-specific clones of most options.fork clauses, like this:


        else if (options.fork &&
            options.node_max - options.node_min + 1 != options.fork)
                msg = "range must be consistent with --fork number";
+#ifdef HAVE_MPI
+       else if (mpi_p > 1 &&
+           options.node_max - options.node_min + 1 != mpi_p)
+               msg = "range must be consistent with MPI node count";
+#endif
        else if (!options.fork &&


Resuming MPI is a little tricky. I think I'll have to "emulate" fork: first all nodes will read the unnumbered rec file (with lock=0, right?), and then re-read the correct numbered one at the same point where a -fork session would have actually forked and done the same.
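
In sketch form (hypothetical code, only meant to show the order of operations; the lock flag is rec_restore_args()'s argument):

        /* Step 1: every rank reads the unnumbered .rec file, without
           locking it, to pick up the session options. */
        rec_restore_args(0);
#ifdef HAVE_MPI
        if (mpi_p > 1) {
                /* Step 2: at the point where -fork would have forked,
                   each rank claims its own node number and re-reads
                   its own numbered .rec file, this time with locking. */
                options.node_min += mpi_id;
                options.node_max = options.node_min;
                rec_restore_args(1);
        }
#endif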

Currently it's not finished and MPI session save/resume is busted. A warning is printed about that where applicable. I am not aware of any problems with non-MPI builds of bleeding though: all the half-baked code is inside #ifdef HAVE_MPI.
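
The warning is just something along these lines (a sketch; exact wording and placement may change):

#ifdef HAVE_MPI
        if (mpi_p > 1 && rec_restoring_now)
                fprintf(stderr,
                    "Warning: MPI session save/resume is not finished yet\n");
#endif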

magnum
