Date: Thu, 08 Sep 2016 10:54:11 +0100
From: Darren Wise <darren@...ecorp.co.uk>
To: john-users-group <john-users@...ts.openwall.com>
Subject: Re: JtR, MPI and CUDA+CPU core usage?


    
Thank you very much for getting back to my question, magnum. I will use OpenCL then, rather than CUDA directly :)
Can I just confirm with you, because it was a little unclear to me:
Using CUDA I cannot use CPU cores and GPU cores together unless I spawn multiple jobs.
Using OpenCL I can use both CPU cores and GPU cores, but each GPU card needs one extra process.
I.e. with 48 CPU cores and 1 GPU card I would write -n 49 instead :)
Thank you very much magnum :D Lovely to meet you as well :D


> Kind regards,
> Darren Wise Esq, 
> B.Sc, HND, GNVQ, City & Guilds.


-------- Original message --------
From: magnum <john.magnum@...hmail.com> 
Date: 07/09/2016  20:45  (GMT+00:00) 
To: john-users@...ts.openwall.com 
Subject: Re: [john-users] JtR, MPI and CUDA+CPU core usage? 

On 2016-09-07 08:29, Darren Wise wrote:
> I've got a bit of a silly question here, folks; it's nothing I have actually tried yet..
> I have an mpiexec install of JtR across 10 nodes (48 CPU cores). I have literally just plonked in my first CUDA card and not even powered it on yet to install the nVidia drivers.
> I am a little concerned: I will reinstall JtR on my MPI server, which uses an Ubuntu 14.04 LTS server install, and this time include the flags for CUDA support...
> I know this sounds really, really stupid, but where I have so far launched with -n 48 (48 threads to run on 48 CPU cores), do I now have to spawn -n (number of CPU cores plus total number of GPU cores)?

Using CUDA, you will not be able to use CPUs and GPUs in a single job. 
You should run one MPI process per GPU *card*. You can start another job 
using the remaining CPU cores on those machines plus all cores on 
machines that lack a GPU.
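
As a rough sketch of that split (the hostnames, hash file and format name 
below are made-up placeholders, and the host-selection flags differ 
between MPI implementations):

   # One MPI process driving the GPU card, using an OpenCL format:
   mpiexec -n 1 -host gpunode ./john --format=sha512crypt-opencl \
       --device=1 hashes.txt

   # A second, separate job on the remaining CPU cores, with a CPU format:
   mpiexec -n 47 -hostfile cpu-hosts.txt ./john --format=sha512crypt \
       hashes.txt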

I recommend using OpenCL (even for nvidia), not CUDA. Our OpenCL formats 
are way ahead of the CUDA ones, in number and in quality.
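
If it helps as a sanity check of an OpenCL-enabled build, jumbo has --list 
options for this (output will of course depend on your drivers and hardware):

   # Show the OpenCL platforms/devices the build can see:
   ./john --list=opencl-devices

   # List all supported formats; the OpenCL ones end in "-opencl":
   ./john --list=formats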

magnum
