Message-ID: <CANJ2NMNpdcnn2Qz5hk33eLp_PBMC9eT4gs8Sw2J07fUg+gJFOQ@mail.gmail.com>
Date: Wed, 4 Apr 2012 00:34:06 +0800
From: myrice <qqlddg@...il.com>
To: john-dev@...ts.openwall.com
Subject: Re: fast hashes on GPU

On Wed, Apr 4, 2012 at 12:09 AM, Solar Designer <solar@...nwall.com> wrote:

>
> You may revise the condition to be keys_changed && saw_same_salt_again.
> (keys_changed is set in set_key() as we discussed before.)  This
> condition will be true only after bench_set_keys() has been called for a
> second time, so by that point you'll have the correct number of salts
> recorded (including duplicate salts, which are present in benchmarking
> only).
>
I already thought of this. fmt_self_test() also calls set_key(), which would
disturb the condition you described. I am thinking that, for testing purposes,
we could either set up a flag in bench_set_keys() or comment out
fmt_self_test() in benchmark_format().

> Instead, you may focus on offloading hash comparisons to GPU.
>

Yes, I already wrote cmp_all in CUDA. However, the c/s is almost the same.
From the CUDA profiler, I see that cuda_cmp_all (my new function, already
pushed to GitHub) only occupied 0.5% of the time.


> You could try THREADS 512 there.

Here is the new result with THREADS 512:

Benchmarking: Mac OS X 10.7+ salted SHA-512 CUDA []... DONE
Many salts: 34779K c/s real, 34779K c/s virtual
Only one salt: 19532K c/s real, 19532K c/s virtual

Thanks!
Dongdong Li
