Message-ID: <CAFs9wnXcNXkvPpuoVVWnZu9YSgA5nxDA33xmajuVFY3X3qDUCA@mail.gmail.com>
Date: Wed, 10 Mar 2021 20:25:07 +0100
From: Michał Majchrowicz <sectroyer@...il.com>
To: john-users@...ts.openwall.com
Subject: Re: Multi-gpu setup

> Not really, but as a hack you can list your faster GPU multiple times,
> like this:
>
> ./john -fork=3 -dev=1,1,2 -format=something-opencl hash

In my case it wouldn't be so easy, as the speed difference between those
two GPUs was somewhere around 13x to 15x, and I don't want to create that
many forks there :)

> This will run two concurrent instances of the OpenCL kernel on device 1,
> but only one instance on device 2, still splitting the work between the
> three instances equally - thus, giving device 1 twice more work.
>
> As a cleaner workaround, you can run separate instances with "--node":
>
> ./john -se=1 -dev=1 -node=1-2/3 -format=something-opencl hash
>
> and concurrently:
>
> ./john -se=2 -dev=2 -node=3/3 -format=something-opencl hash

This looks interesting. For now I have already switched to separate
dictionaries for each GPU on every node. Though with your syntax I
noticed you do NOT use --fork, so does that mean that with -se it will
not require -node=1-15/28 to run 15 forks for a single GPU? :)
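For the record, the -node arithmetic above generalizes to any speed ratio:
each instance gets a contiguous range of nodes out of a common total, and
the share of the total is the share of the work. A minimal sketch (a
hypothetical helper, not part of John the Ripper, just to illustrate the
arithmetic) that turns relative GPU speeds into -node arguments:

```python
def node_split(speeds):
    """Turn relative GPU speeds into JtR -node=START[-END]/TOTAL strings.

    Each GPU gets a contiguous block of node numbers proportional to
    its speed, so work is split in the same ratio as the speeds.
    """
    total = sum(speeds)
    args, start = [], 1
    for s in speeds:
        end = start + s - 1
        if start == end:
            args.append(f"-node={start}/{total}")
        else:
            args.append(f"-node={start}-{end}/{total}")
        start = end + 1
    return args

# The 2:1 split from the quoted example:
print(node_split([2, 1]))   # ['-node=1-2/3', '-node=3/3']
# A ~14:1 split for the fast/slow GPU pair discussed here:
print(node_split([14, 1]))  # ['-node=1-14/15', '-node=15/15']
```

Each returned string would go to its own separate john invocation
(with its own -se and -dev), as in the quoted commands.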
