Message-ID: <c1ac60010d8293dab4867945d70386ab@smtp.hushmail.com>
Date: Wed, 7 Sep 2016 21:45:45 +0200
From: magnum <john.magnum@...hmail.com>
To: john-users@...ts.openwall.com
Subject: Re: JtR, MPI and CUDA+CPU core usage?

On 2016-09-07 08:29, Darren Wise wrote:
> I've got a bit of a silly question here folks, nothing I have actually tried yet.
> I have an MPIEXEC install of JtR with 10 nodes (48 CPU cores). I have literally just plonked in my first CUDA card and not even powered it on yet to install the nVidia drivers.
> I am a little concerned. I will reinstall JtR on my MPI server, which runs an Ubuntu 14.04 LTS server install, and this time include the flags for CUDA support...
> I know this sounds really, really stupid, but so far I have launched with -n 48 (48 threads to run on 48 CPU cores). Do I now have to spawn -n (number of CPU cores plus total number of GPU cores)?

Using CUDA, you will not be able to use CPUs and GPUs in a single job. 
You should run one MPI process per GPU *card* (not per GPU core). You 
can then start a separate job using the remaining CPU cores on those 
machines plus all cores on machines that lack a GPU.
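A rough sketch of that split, assuming one GPU node and a hostfile for the CPU-only cores; the hostname "gpu01", the core counts, the hash file name, and the format chosen are all made up for illustration. The --session option keeps the two concurrent jobs from clashing over the same session files:

```shell
# Job 1: one MPI process per GPU card, on the node holding the card.
mpiexec -n 1 -host gpu01 \
    ./john --session=gpu --format=md5crypt-opencl hashes.txt

# Job 2: a separate CPU-only run on the remaining cores,
# listed in a hostfile that excludes the core driving the GPU.
mpiexec -n 47 -hostfile cpu_hosts \
    ./john --session=cpu --format=md5crypt hashes.txt
```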

I recommend using OpenCL (even on nvidia hardware), not CUDA. Our 
OpenCL formats are way ahead of the CUDA ones, both in number and in 
quality.
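To check what an OpenCL-enabled build can see (assuming JtR was built with OpenCL support), you can enumerate the detected platforms and devices:

```shell
# List OpenCL platforms and devices this build detects:
./john --list=opencl-devices
```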

magnum
