Message-ID: <ddff7868fe32a3b19a7bad1aa322bebe@smtp.hushmail.com>
Date: Thu, 11 Jul 2013 11:27:11 +0200
From: magnum <john.magnum@...hmail.com>
To: john-dev@...ts.openwall.com
Subject: Re: Jobs on GPUs

On 11 Jul, 2013, at 9:51 , marcus.desto <marcus.desto@...pl> wrote:
> Hello everybody!
> 
> Using OpenCL on GPUs, how many parallel threads can be run on a single GPU-device?
> 
> - Doesn't it depend on the device? - If so, how to find out?
> 
> Regards,
> Marcus


Assuming we're talking GPGPU, a good implementation with a tuned work-group size and so on will already use most of the GPU, so running two instances should give a net loss of performance. In real life many kernels don't saturate the device, so you might actually see a net gain from running two or a few more instances. I think our various pbkdf2-hmac-sha1 kernels are good enough that you will see a net loss if you try it.
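For the "how to find out" part, a minimal host-side sketch (not from our tree, just plain OpenCL 1.x C) can query the per-device limits that bound how much work you can keep in flight, such as CL_DEVICE_MAX_COMPUTE_UNITS and CL_DEVICE_MAX_WORK_GROUP_SIZE:

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
	cl_platform_id platform;
	cl_device_id device;
	cl_uint cus;
	size_t max_wg;

	/* First platform, first GPU device on it (error handling trimmed). */
	if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
	    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device,
	                   NULL) != CL_SUCCESS) {
		fprintf(stderr, "no GPU device found\n");
		return 1;
	}

	/* Compute units (CUs/SMs) and the device-wide work-group size limit. */
	clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS,
	                sizeof(cus), &cus, NULL);
	clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_GROUP_SIZE,
	                sizeof(max_wg), &max_wg, NULL);

	printf("compute units:       %u\n", cus);
	printf("max work-group size: %zu\n", max_wg);
	return 0;
}

Note that the real limit for a given kernel is whatever clGetKernelWorkGroupInfo() with CL_KERNEL_WORK_GROUP_SIZE reports, since register and local memory use can lower it, which is exactly why we end up tuning work-group sizes per kernel and per device.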

magnum
