Message-ID: <CANWtx00_h8qB_Bm2cASnWV=KB89z5Vj+FAbcDbyB8+Ar8++sWQ@mail.gmail.com>
Date: Mon, 4 Aug 2014 10:44:59 -0400
From: Rich Rumble <richrumble@...il.com>
To: john-users@...ts.openwall.com
Subject: Re: Splitting workload on multiple hosts

On Mon, Aug 4, 2014 at 5:50 AM, magnum <john.magnum@...hmail.com> wrote:
> On 2014-08-04 05:59, Rich Rumble wrote:
>> On Wed, Dec 11, 2013 at 6:59 PM, magnum <john.magnum@...hmail.com> wrote:
>>> From version 1.8 you can say "I'm node 427 out of 10000" using the
>>> option "--node=427/10000". If some of your nodes are much stronger
>>> than others you can tell them to do more work, e.g. "--node=1-20/10000"
>>> will make this node do the first 20 splices.
>>
>> So for an MPI host, would you also use "--node=1-8/16" on one host, and
>> "--node=9-16/16" on the other? Assuming they are nearly identical and
>> have 8 cores to use.
>
> You would normally use MPI options and no --node option. E.g. "mpirun
> -host=alpha,bravo -np 16 ./john (...)" for splitting the job into 16
> processes over two hosts (so 8 on each).
>
> However, if you want an MPI job to be part of a larger job (as in the
> original example) you'd do something like "mpirun -host=alpha,bravo
> -np 16 ./john -node=1-16/10000 (...)".
>
> Basically the syntax for MPI with --node is the same as for --fork with
> --node. So these two examples are equivalent in terms of work space:
>
> ./john -fork=16 -node=1-16/10000 (...)
>
> mpirun -host=alpha,bravo -np 16 ./john -node=1-16/10000 (...)

Since --fork is no good on Windows, and I currently want to dumbforce
something, this matters to me. I don't have network connectivity to most
of the hosts I'm pooling, so I may end up booting live CDs and using
--fork after all, just under Linux. I am using --node like the example,
but I'm not sure it will work with external modes as well as --fork would.
-rich
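
For the Windows case above, a minimal sketch of splitting a run by hand
with --node across two disconnected hosts, assuming a hypothetical hash
file named hashes.txt, the sample DumbForce external mode shipped
(commented out) in john.conf, and that the mode honors --node
distribution in your build:

    host A:  ./john --external=DumbForce --node=1-8/16 hashes.txt
    host B:  ./john --external=DumbForce --node=9-16/16 hashes.txt

Each host takes half of the 16 logical slices, so no --fork or MPI (and
hence no network link between the hosts) is required; each box only needs
its own copy of the hash file and the same john.conf.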