Message-ID: <e3dd19fe5958caee522200face652b61@smtp.hushmail.com>
Date: Mon, 30 Dec 2013 02:36:53 +0100
From: magnum <john.magnum@...hmail.com>
To: john-dev@...ts.openwall.com
Subject: Re: CUDA multi-device support (was: John-dev over GPUs)

On 2013-12-29 16:52, Muhammad Junaid Muzammil wrote:
> Hello,
>
> I have added multi CUDA device option parsing

That sounds excellent! Is your code available somewhere yet, or could 
you post a patch? I prefer pull requests on GitHub, but a patch posted 
here is fine too, especially if it's not your final cut but more of an RFC.

> to option.c and cu_common.h.

We might eventually want to move some of it from options.c to 
common-cuda.c for various reasons, but that is not a problem. Please post 
your current code for review. Don't forget licensing:
http://openwall.info/wiki/john/licensing
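
Just so we're on the same page - and this is only a sketch off the top of 
my head, not a claim about what your code looks like; parse_device_list 
and the globals are made-up names - I imagine the parsing ending up 
something like this:

#include <stdlib.h>

#define MAX_CUDA_DEVICES 8	/* would live in cuda_common.h, see below */

int cuda_gpu_id[MAX_CUDA_DEVICES];	/* device indices from the command line */
int cuda_num_devices;			/* how many were given */

/* Parse a comma-separated list like "0,1" into cuda_gpu_id[]. */
static void parse_device_list(const char *arg)
{
	const char *p = arg;
	char *end;

	cuda_num_devices = 0;
	while (*p && cuda_num_devices < MAX_CUDA_DEVICES) {
		cuda_gpu_id[cuda_num_devices++] = (int)strtol(p, &end, 10);
		if (*end != ',')
			break;
		p = end + 1;
	}
}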

> Currently, I am parsing at most 2 CUDA devices.

For OpenCL we have a macro MAXGPUS in common-opencl.h; I think it would 
be nice if you did something similar - in case you didn't already - e.g. 
define MAX_CUDA_DEVICES in cuda_common.h. Perhaps we should default to 8 
like OpenCL does.
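
E.g. (again just a sketch, and I haven't double-checked the OpenCL side) 
something like this in cuda_common.h, mirroring the MAXGPUS idea:

/* Sketch only - mirrors MAXGPUS from common-opencl.h */
#define MAX_CUDA_DEVICES 8

extern int cuda_gpu_id[MAX_CUDA_DEVICES];
extern int cuda_num_devices;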

> I haven't written the test cases for this functionality yet. I couldn't
> find any documentation pertaining to unit tests. Is there any such
> documentation?

For the Test Suite? Unfortunately there's not much documentation, if any. 
And the TS mostly just tests formats right now, not things like modes, 
node/fork or multiple GPUs. It would be nice to add tests for those too, 
but that needs a lot of planning and decisions first.

However, there is a pass-through option in TS which just passes options 
on to john, so you can run a test like this:

cd test
./jtrts.pl pwsafe-cuda -passthru="-dev=0,1"

magnum
