Message-ID: <4EFDD38B.7030604@hushmail.com>
Date: Fri, 30 Dec 2011 16:06:51 +0100
From: magnum <john.magnum@...hmail.com>
To: john-users@...ts.openwall.com
Subject: SSE/intrinsics for sapB/sapG [was: Re: JtR CUDA ????]

On 10/12/2011 09:10 AM, magnum wrote:
> On 2011-10-11 21:16, Marsman007 wrote:
>>>> Or plans to have sapb/sapg in CUDA??
>>>
>>> BTW, both these formats still use generic (non-SSE) MD5 and SHA-1. I
>>> believe they can be made a lot faster without "resorting" to GPU.
>>>
>> Not sure what to do with your 'BTW' remark.
>> Are you saying that sapb is just a salted MD5?
>
> There is some proprietary "magic" as well as salting, but that part
> doesn't look slow. Other than that, sapG is 2xSHA-1 and sapB is 2xMD5. I
> presume it would be a walk in the park (for guys like JimF or Bartavelle,
> if they had time/incentive) to utilise the existing intrinsics or asm
> functions, for a decent boost. And this could/should still be combined
> with OMP.

I added SSE/MMX intrinsics support to sapB, including with OMP. It's at 
least 2.5x faster now, near 10000K c/s on my old dual-core laptop.
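
For anyone curious about the structure, here is a very rough sketch of 
the "2 x MD5 plus salt" shape described above, assuming OpenSSL's 
one-shot MD5() just for brevity. SAP's proprietary transformation 
between the two digests is omitted and the buffer handling is 
simplified; this is only meant to show where the SIMD MD5 code can slot 
in, not the exact sapB algorithm.

/* sapb_shape.c -- illustrative sketch only; build: gcc sapb_shape.c -lcrypto */
#include <stdio.h>
#include <string.h>
#include <openssl/md5.h>

/* Salted double-MD5 shape of sapB (CODVN B). The username acts as the
   salt; the proprietary mixing step between the two MD5 calls is left
   out. Inputs are assumed short, so no bounds checking here. */
static void sapb_shape(const char *password, const char *username,
                       unsigned char out[MD5_DIGEST_LENGTH])
{
	unsigned char first[MD5_DIGEST_LENGTH];
	unsigned char buf[256];
	size_t plen = strlen(password), ulen = strlen(username);

	/* First MD5: password . salt */
	memcpy(buf, password, plen);
	memcpy(buf + plen, username, ulen);
	MD5(buf, plen + ulen, first);

	/* ... proprietary mixing of 'first' with password/salt omitted ... */

	/* Second MD5 over the mixed result */
	MD5(first, sizeof(first), out);
}

int main(void)
{
	unsigned char digest[MD5_DIGEST_LENGTH];
	int i;

	sapb_shape("PASSWORD", "DDIC", digest);
	for (i = 0; i < MD5_DIGEST_LENGTH; i++)
		printf("%02X", digest[i]);
	printf("\n");
	return 0;
}

Roughly speaking, the actual format lays many candidates out in 
interleaved buffers so that one SSE MD5 call covers several of them at 
once, and OMP then splits those batches across cores.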

The patch should be in the next Jumbo but is currently only available 
from GitHub:
https://github.com/magnumripper/magnum-jumbo

Will do sapG in the next few days.

Note that I only have the built-in self-tests - these formats are not 
included in our Test Suite. Please test, and report any problems (or 
lack thereof).

magnum
