Date: Mon, 22 Jun 2015 22:43:11 +0300
From: Solar Designer <solar@...nwall.com>
To: john-dev@...ts.openwall.com
Subject: Re: bcrypt-opencl local vs. private memory

On Mon, Jun 22, 2015 at 09:20:51PM +0200, magnum wrote:
> Using private:
> Device 0: GeForce GTX TITAN X
> Local worksize (LWS) 8, Global worksize (GWS) 2048
> Benchmarking: bcrypt-opencl ("$2a$05", 32 iterations) [Blowfish OpenCL]... DONE
> Speed for cost 1 (iteration count) of 32
> Raw:    790 c/s real, 787 c/s virtual
> 
> Using local:
> Device 0: GeForce GTX TITAN X
> Local worksize (LWS) 8, Global worksize (GWS) 4096
> Benchmarking: bcrypt-opencl ("$2a$05", 32 iterations) [Blowfish OpenCL]... DONE
> Speed for cost 1 (iteration count) of 32
> Raw:    5354 c/s real, 5319 c/s virtual
> 
> BTW I tested oclHashcat too and it does 11570 c/s; we don't even do half of that :-/

Apparently, hashcat (the CUDA one?) is even faster on TITAN X, giving
14440 c/s at stock clocks and 16890 c/s at +225 MHz o/c:

https://gist.github.com/epixoip?direction=desc&sort=updated
http://permalink.gmane.org/gmane.comp.security.phc/2988

Maybe it's one of those cases where we need a CUDA format.

As to local vs. private, I think we should also try using both at once,
especially on Kepler where they run at similar speed.  Perhaps have
separate functions in the kernel, one using local and the other using
private memory, and direct even vs. odd candidates to them.
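
Roughly along these lines (just a sketch, not our actual kernel code:
the names, the MIX macro, and the loop bodies are made-up stand-ins for
the real Blowfish key setup; only the even/odd dispatch is the point):

#define BF_SBOX_WORDS 1024	/* 4 x 256 32-bit S-box entries */

/* Stand-in for one S-box-heavy step; not real Blowfish */
#define MIX(S, x) ((x) * 0x9e3779b9U ^ (S)[(x) & (BF_SBOX_WORDS - 1)])

/* Variant keeping the S-boxes in private memory */
uint bf_body_private(uint seed)
{
	uint S[BF_SBOX_WORDS];
	uint x = seed | 1;

	for (uint i = 0; i < BF_SBOX_WORDS; i++)
		S[i] = seed + i;
	for (uint i = 0; i < 512; i++)
		x = MIX(S, x);
	return x;
}

/* Variant keeping the S-boxes in __local memory */
uint bf_body_local(uint seed, __local uint *S)
{
	uint x = seed | 1;

	for (uint i = 0; i < BF_SBOX_WORDS; i++)
		S[i] = seed + i;
	for (uint i = 0; i < 512; i++)
		x = MIX(S, x);
	return x;
}

__kernel void bf_both(__global const uint *in, __global uint *out,
		      __local uint *local_S)
{
	uint gid = get_global_id(0);
	uint lid = get_local_id(0);

	/*
	 * Odd candidates take the private memory path, even ones the
	 * local memory path, so both memory types are busy at once
	 * within each work-group.  The host sizes the local_S argument
	 * for LWS/2 slices of BF_SBOX_WORDS words, since only half the
	 * work-items use local memory.
	 */
	if (gid & 1)
		out[gid] = bf_body_private(in[gid]);
	else
		out[gid] = bf_body_local(in[gid],
		    local_S + (lid >> 1) * BF_SBOX_WORDS);
}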

We previously considered this for local vs. global, but the optimal GWS
for these would be too different, so we'd need to run separate kernels.
We may in fact want to introduce a bcrypt-global-opencl, to be used
simultaneously with our main bcrypt-opencl (invoking two instances of
john manually, on different candidate password streams?) for a greater
cumulative speed.
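
Something like this, splitting the candidate stream with --node (with
bcrypt-global-opencl being the not-yet-existing format name from above,
and "hashes" a placeholder file name):

	./john --format=bcrypt-opencl --node=1/2 --session=priv hashes &
	./john --format=bcrypt-global-opencl --node=2/2 --session=glob hashes &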

Alexander
