Message-ID: <CA+E3k91uRuG3=jm+UaZzhdFq=m0+4moAofdR4NHPbK9+zbzAEw@mail.gmail.com>
Date: Thu, 19 Feb 2015 06:48:20 -0900
From: Royce Williams <royce@...ho.org>
To: john-dev <john-dev@...ts.openwall.com>
Subject: Re: descrypt speed

On Wed, Feb 18, 2015 at 10:30 PM, Royce Williams <royce@...ho.org> wrote:
> On Wed, Feb 18, 2015 at 10:08 PM, Sayantan Datta <std2048@...il.com> wrote:
>>
>> On Thu, Feb 19, 2015 at 11:59 AM, Sayantan Datta <std2048@...il.com> wrote:
>>>
>>> On Mon, Nov 3, 2014 at 3:32 AM, Royce Williams <royce@...ho.org> wrote:
>>>
>>> Hi Royce, magnum,
>>>
>>> If you are interested, you can test the new revision of descrypt-opencl on
>>> the 970, 980, and 290X. There are three kernels, and you can select them by
>>> changing the parameters HARDCODE_SALT and FULL_UNROLL in
>>> opencl_DES_hst_dev_shared.h. Setting (1,1) gives you the fastest kernel but
>>> takes very long to compile; subsequent runs should compile much quicker,
>>> though, because pre-compiled kernels (saved to disk from prior runs) are
>>> used. Setting (1,0) gives slower speed but faster compilation. Setting
>>> (0,0) is the slowest, but its compilation is quickest. Also, do not use
>>> --fork on the same system when HARDCODE_SALT is 1.
>>>
>>> Regards,
>>> Sayantan
>>
>> Actually, fork may be used with HARDCODE_SALT=1, but with at most 2
>> threads; anything more than that is wasteful, and you may need a ton of RAM.
>> Even with --fork=2, I think you should have at least 8GB of RAM. Another
>> problem we currently have when using fork is that kernels are compiled n
>> times for n threads, which is unnecessary. However, we can work around that
>> by using --fork=1 to compile all kernels and then restarting with --fork=2.
>>
>> Some performance numbers using --fork=2, HARDCODE_SALT=1, FULL_UNROLL=1,
>> 124 passwords and 122 salts, GPU: 7970 (925MHz core, 1375MHz memory):
>>
>> 2 0g 0:00:05:07  3/3 0g/s 749774p/s 91400Kc/s 92900KC/s GPU:61°C util:97%
>> fan:27% scprugas..myremy26
>> 1 0g 0:00:05:07  3/3 0g/s 749756p/s 91398Kc/s 92898KC/s GPU:61°C util:97%
>> fan:27% 339gmh..8jfu44
>>
>> Performance with --fork=1
>> 0g 0:00:04:25  3/3 0g/s 1324Kp/s 161247Kc/s 163891KC/s GPU:60°C util:87%
>> fan:27% srusuu..07pvjy
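
(Aside: if we assume the per-process c/s figures in the two --fork=2 status
lines simply add up, we can compare the combined rate against the --fork=1
run. A quick sketch of that arithmetic, using the Kc/s numbers quoted above:)

```python
# Per-process crack rates (Kc/s) from the two --fork=2 status lines above,
# and the single-process rate from the --fork=1 run.
fork2_rates = [91400, 91398]   # Kc/s, one status line per forked process
fork1_rate = 161247            # Kc/s

combined = sum(fork2_rates)          # total across both processes
gain = combined / fork1_rate - 1     # relative change vs. --fork=1

print(f"combined --fork=2 rate: {combined} Kc/s")
print(f"change vs --fork=1: {gain:+.1%}")
```

So the two forked processes together come out roughly 13% ahead of the
single process, if the rates really do add.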
>
> Thanks for the opportunity to test!
>
> Here are my results of "--test --format=descrypt-opencl" for a GTX 970
> SC (factory overclocked to 1316 MHz):
>
> First, a baseline - performance using magnumripper from a couple of months ago:
>
> Many salts:     46137K c/s real, 45680K c/s virtual
> Only one salt:  25700K c/s real, 25700K c/s virtual
>
>
> Using fb0b9383d6 magnumripper from today, for
> (HARDCODE_SALT,FULL_UNROLL) values:
>
> (0,0)
>
> Many salts:     77345K c/s real, 77345K c/s virtual
> Only one salt:  35298K c/s real, 35298K c/s virtual
>
> (1,0)
>
> Many salts:     77864K c/s real, 78643K c/s virtual
> Only one salt:  34952K c/s real, 34952K c/s virtual
>
> (1,1)
>
> Many salts:     169869K c/s real, 169869K c/s virtual
> Only one salt:  47710K c/s real, 48192K c/s virtual
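
For reference, a quick check of what the quoted (1,1) figures work out to
against the old-baseline numbers above (real-time c/s in both cases):

```python
# c/s figures quoted above (real time), in Kc/s.
baseline = {"many": 46137, "one": 25700}   # magnumripper from a couple of months ago
unrolled = {"many": 169869, "one": 47710}  # (HARDCODE_SALT, FULL_UNROLL) = (1, 1)

for salts in ("many", "one"):
    speedup = unrolled[salts] / baseline[salts]
    print(f"{salts} salt(s): {speedup:.2f}x")
```

That's roughly a 3.7x speedup for many salts and 1.9x for a single salt.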

And here are some (1,1) single-GPU results against a later commit (65fd39cee8):

Many salts:     171966K c/s real, 171966K c/s virtual
Only one salt:  40489K c/s real, 40894K c/s virtual

Many salts:     170393K c/s real, 170393K c/s virtual
Only one salt:  41008K c/s real, 41008K c/s virtual

Many salts:     170917K c/s real, 169225K c/s virtual
Only one salt:  40894K c/s real, 40894K c/s virtual

Many salts:     174935K c/s real, 178469K c/s virtual
Only one salt:  41008K c/s real, 40606K c/s virtual

Many salts:     171966K c/s real, 170263K c/s virtual
Only one salt:  40489K c/s real, 40894K c/s virtual

Many salts are up slightly (though perhaps within normal variation?), and
one-salt performance is down ~14% from the previous commit - but still much
better than before this work. :-)
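
For the curious, here is how I'd work out the one-salt change: take the
median of the five real-time rates above and compare it against the 47710K
c/s from the previous commit.

```python
import statistics

# One-salt c/s (real) from the five (1,1) runs above, vs. the previous commit.
new_runs = [40489, 41008, 40894, 41008, 40489]  # Kc/s
previous = 47710                                # Kc/s

median_new = statistics.median(new_runs)
drop = 1 - median_new / previous
print(f"one-salt regression: {drop:.1%}")
```

The median run lands at 40894K c/s, about a 14% drop.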

Royce
