Message-ID: <20150807073651.GA27469@openwall.com>
Date: Fri, 7 Aug 2015 10:36:51 +0300
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: Re: Benchmark result

Viktor,

You managed to start a new thread again. :-(  How are you doing that?
The Subject is preserved, but In-Reply-To is lost.  Is your mail client
broken, or do you, like, manually send messages to the list anew?
Please see how it appears in the archives:

http://www.openwall.com/lists/john-users/2015/08/07/1

Note that there's no "thread-prev" link here, which indicates that your
message starts a new thread, per its headers.

On Fri, Aug 07, 2015 at 01:53:14AM +0200, Viktor Gazdag wrote:
> New results with disabled (openmp, cuda) and enabled opencl from fresh
> github john.

Great.

> ./john --device=0,1,2,3,4,5,6,7 --format=LM-opencl
> --mask=[\x20\x21\x22\x23\x24\x25\x26\x27\x28]?a?a?a?a?a?a

You forgot to use --fork=8 here, so you ran this on only one GPU.  Also,
as magnum reminded us, we can use "--device=gpu --fork=8" (if you know
you have 8 GPUs and want to use them all).
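
For example, something like this should put all 8 GPUs to work on that
same mask (hashes.txt is just a placeholder for your actual hash file):

./john --fork=8 --device=gpu --format=LM-opencl \
    --mask='[\x20\x21\x22\x23\x24\x25\x26\x27\x28]?a?a?a?a?a?a' hashes.txt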

Anyway, you also got poor speed for that one GPU for that hash type.
It looks like it's better to avoid NVIDIA GPUs (as well as AMD VLIW
GPUs) for DES-based hashes with our current code.  We're getting much
better speeds for LM and descrypt hashes on AMD GCN GPUs.  You can
probably find better use for your NVIDIAs, such as with
sha512crypt-opencl, for which we don't seem to be getting better speeds
on AMD GPUs (unfortunately).
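
To get an idea of what one of these cards does on that format, a quick
benchmark along these lines should do (the device number is just an
example; pick any one of your GPUs):

./john --test --format=sha512crypt-opencl --device=0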

> ./john --test --format=descrypt; ./john --test --format=descrypt
> Benchmarking: descrypt, traditional crypt(3) [DES 128/128 AVX-16]... DONE
> Many salts:     3996K c/s real, 3996K c/s virtual
> Only one salt:  4335K c/s real, 4335K c/s virtual
> 
> Benchmarking: descrypt, traditional crypt(3) [DES 128/128 AVX-16]... DONE
> Many salts:     4617K c/s real, 4617K c/s virtual
> Only one salt:  4375K c/s real, 4375K c/s virtual

This confirms the clock frequency scaling theory: the first run starts
with the CPU clock not yet ramped up from idle, which is presumably why
its many-salts figure (measured first) came out so much lower than in
the second run.

> ./john --fork=8 --device=0,1,2,3,4,5,6,7 --format=sha512crypt-opencl

This is reasonable.

> ./john --fork=8 --device=0,1,2,3,4,5,6,7 --format=md5-opencl

What's md5-opencl?  It looks like you edited the command line before
posting it here.  We have md5crypt-opencl and raw-md5-opencl, but no
md5-opencl.

You ran both commands above in "batch mode", letting them go through a
wordlist and then proceed to incremental mode with its length switching.
For formats like md5crypt-opencl and sha*crypt-opencl, there's a
performance boost if you lock the candidate passwords to a specific
length, or if you let incremental mode run long enough (not just two
minutes) that its length switches become infrequent.  From 2-minute runs
without a length lock, not much can be said about the performance of
these formats.
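
For example, one way to lock the length is a fixed-length mask (here all
length-8 candidates; hashes.txt is again just a placeholder), or you can
adjust MinLen/MaxLen for an incremental mode section in john.conf:

./john --fork=8 --device=gpu --format=sha512crypt-opencl \
    --mask='?a?a?a?a?a?a?a?a' hashes.txt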

> ./john --fork=8 --device=0,1,2,3,4,5,6,7 --format=Raw-SHA256-opencl

This is a total waste of GPUs.  Don't do it.  For fast hashes (anything
starting with raw-*, and some others), we generally don't have efficient
code on GPUs, with some recent exceptions (Sayantan has implemented mask
mode on GPU for a handful of fast hash types, but not for all of them).

So if you want to crack fast hashes on GPU, please make sure your
specific hash type has mask mode on GPU and then do use mask mode (maybe
in combination with wordlist or incremental mode).
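
For example, if raw-md5-opencl happens to be one of the formats with
on-GPU mask support in your build (please check), a mask mode run could
look roughly like this (hashes.txt is a placeholder once again):

./john --fork=8 --device=gpu --format=raw-md5-opencl \
    --mask='?a?a?a?a?a?a?a' hashes.txt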

Thanks,

Alexander
