Message-ID: <6b43690b94ef28518b2f861f660381ab@smtp.hushmail.com>
Date: Sun, 12 Jan 2014 16:08:42 +0100
From: magnum <john.magnum@...hmail.com>
To: john-dev@...ts.openwall.com
Subject: Re: CUDA multi-device support

On 2014-01-12 14:16, Jeremi Gosney wrote:
> On 1/12/2014 1:25 AM, Muhammad Junaid Muzammil wrote:
>> Currently we have set the MAX_GPU limit to 8 in both the OpenCL and
>> CUDA variants. What was the reason behind it? Currently, both AMD
>> Crossfire and NVIDIA SLI support a maximum of 4 GPU devices.
>
> This is not very sound logic, as one does not use Crossfire or SLI for
> GPGPU. In fact, this technology usually must be disabled for compute
> work. fglrx supports a maximum of 8 devices, and AFAIK NVIDIA supports
> 16 devices, if not more. So 16 would likely be a more sane value.
>

Right. And those are just local devices. With VCL/SnuCL/DistCL you can 
have a lot more, which is why oclHashcat supports 128 devices.

I intend to add a file common-gpu.[hc] for stuff shared between CUDA 
and OpenCL, e.g. temperature monitoring. When I do that I will merge 
MAX_CUDA_DEVICES and MAX_OPENCL_DEVICES into a single MAX_GPU_DEVICES 
so they'll always be the same. And I'll probably set it to 128.
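
To illustrate, a minimal sketch of what that header might look like 
(the guard name and the temperature array are just placeholders of 
mine; only MAX_GPU_DEVICES and the value 128 come from the above):

/* common-gpu.h - shared GPU definitions for the CUDA and OpenCL code */
#ifndef _COMMON_GPU_H
#define _COMMON_GPU_H

/*
 * One limit for both back-ends, replacing MAX_CUDA_DEVICES and
 * MAX_OPENCL_DEVICES so they can never drift apart again.
 */
#define MAX_GPU_DEVICES 128

/* Example of shared per-device state, e.g. for temperature monitoring
   (placeholder name). */
extern int gpu_temps[MAX_GPU_DEVICES];

#endif /* _COMMON_GPU_H */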

magnum
