Date: Mon, 21 Nov 2016 16:25:25 +0100
From: magnum <>
Subject: Re: Loading high number of hashes on GPU

> On 2016-11-14 16:19, magnum wrote:
>> On 2016-11-14 15:03, Luis Rocha wrote:
>>> Not sure if I'm doing something wrong, but I'm having a hard time
>>> loading a high number of hashes on GPU for raw-sha1.
>>> I have 20M SHA-1 hashes that I'm trying to load. The GPU has 4 GB RAM.
>>> $./john
>>> John the Ripper 1.8.0-jumbo-1-5344-gefae4e5+ OMP [linux-gnu 64-bit
>>> AVX2-ac]
>>> $./john 20M.hashes --wordlist=uniqwords  --pot=20M.pot
>>> --format=raw-sha1-opencl --session=gpu  --fork=2 --rules:Jumbo
>>> No dupe-checking performed when loading hashes.
>> I think the above message is a clue. Did you set NoLoaderDupeCheck in
>> john.conf? That won't work with Sayantan's bitmap tables.
On 2016-11-14 16:34, Luis Rocha wrote:
> Yep had NoLoaderDupeCheck=Y in john.conf. It's working now.
> Thank you magnum!

Luis, we've come to realise that NoLoaderDupeCheck is actually supposed 
to work fine. My guess is we have a bug in Sayantan's dupe removal code 
that only surfaces under certain conditions - but I haven't been able to 
reproduce your problem here.
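For reference, here is a minimal sketch of the toggle in question and a tiny
reproduction attempt. The file names and the hash value are illustrative
placeholders, not taken from Luis's dataset:

```shell
# In john.conf, under the [Options] section, re-enable the setting
# that was in effect when the problem occurred:
#   NoLoaderDupeCheck = Y
#
# Then try loading a small input file containing deliberate duplicates.
# The hash below is sha1("test"); any duplicated raw-sha1 hash will do.
cat > dupes.hashes <<'EOF'
a94a8fe5ccb19ba61c4c0873d391e987982fbbd3
a94a8fe5ccb19ba61c4c0873d391e987982fbbd3
EOF
./john dupes.hashes --format=raw-sha1-opencl --wordlist=uniqwords
```

If the bug is in the GPU-side dupe removal, a crafted file like this (or a
small slice of the original 20M set) may get stuck in the same "trying next
table size" loop.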

Could you please try setting NoLoaderDupeCheck back to Y and find a test 
case (as small as possible) that gets stuck in the infinite "trying next 
table size" loop and then post that input file somewhere (or mail it to 
me)? I'm assuming you are using some kind of test dataset that you can 
share.