Message-ID: <3ac7828e072f79d9e5db1bf1ceb34f5a@smtp.hushmail.com>
Date: Tue, 22 Nov 2016 21:27:48 +0100
From: magnum <john.magnum@...hmail.com>
To: john-users@...ts.openwall.com
Subject: Re: Loading high number of hashes on GPU

On 2016-11-22 19:31, magnum wrote:
> On 2016-11-22 11:57, Luis Rocha wrote:
>> On Mon, Nov 21, 2016 at 4:25 PM, magnum <john.magnum@...hmail.com> wrote:
>>> Luis, we've come to realise that NoLoaderDupeCheck is actually supposed
>>> to work fine. My guess is we have a bug in Sayantan's dupe removal code
>>> that only surfaces under certain conditions - but I haven't been able
>>> to reproduce your problem here.
>>>
>>> Could you please try setting NoLoaderDupeCheck back to Y and find a test
>>> case (as small as possible) that gets stuck in the infinite "trying next
>>> table size" loop and then post that input file somewhere (or mail it to
>>> me)? I'm assuming you are using some kind of test dataset that you can
>>> share.
>>>
>> Hi magnum, it looks like in my case it works with 11.5M hashes, but with
>> 11.9M or higher it doesn't.
>> I uploaded the hashes here:
>> http://www.filedropper.com/elevendotninemillion
>
> Great! However, that file seems broken. I tried several times with
> different browsers but always ended up with a file that's truncated
> after just ~605,000 lines.
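
As an aside on that infinite "trying next table size" loop: if dupe
removal misses entries, a collision-free table can never be built,
because identical keys collide at every table size. Here's a toy C
illustration of the effect (not JtR's actual loader or bt code; the
keys and the size cap are made up):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Try to place all keys into a collision-free table of a given size.
 * Returns 1 on success, 0 if any two keys land in the same slot. */
static int try_build(const uint32_t *keys, size_t n, size_t size)
{
    unsigned char *used = calloc(size, 1);
    int ok = 1;

    if (!used)
        return 0;
    for (size_t i = 0; i < n; i++) {
        size_t h = keys[i] % size;      /* toy hash function */
        if (used[h]) { ok = 0; break; } /* collision: this size fails */
        used[h] = 1;
    }
    free(used);
    return ok;
}

int main(void)
{
    /* The duplicate 3 hashes identically at *every* table size, so
     * without prior dupe removal no size ever succeeds. */
    uint32_t keys[] = { 1, 2, 3, 3 };
    size_t n = sizeof(keys) / sizeof(keys[0]);

    for (size_t size = n; size <= 1024; size++) {
        printf("trying table size %zu\n", size);
        if (try_build(keys, n, size)) {
            printf("built collision-free table of size %zu\n", size);
            return 0;
        }
    }
    printf("gave up - input almost certainly contains duplicates\n");
    return 1;
}

Without the size cap, that loop would spin forever - which is exactly
the symptom you reported.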

OK, we solved the above problem using the contest server. Still, the file 
is not 11.9 million entries but 1.19 million, and I still can't reproduce 
the problem with it. It loads, I crack stuff, it finishes. That's using 
the latest & greatest bleeding-jumbo code as on GitHub, and with 
NoLoaderDupeCheck=Y.
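
For reference, the setting lives in john.conf; a minimal sketch of what
I mean (section placement as in typical jumbo configs - double-check
against your own copy):

[Options]
# Assumption: with Y, the loader skips its own duplicate-hash check and
# the format's dupe removal code is expected to handle any duplicates
NoLoaderDupeCheck = Y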

magnum
