Message-ID: <20161114215813.GA17246@openwall.com>
Date: Mon, 14 Nov 2016 22:58:13 +0100
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: Re: Loading high number of hashes on GPU

On Mon, Nov 14, 2016 at 10:31:12PM +0100, magnum wrote:
> On 2016-11-14 21:48, Solar Designer wrote:
> >Not necessarily the same.  A potentially relevant limit is what clinfo
> >lists as "Max memory allocation", but it varies across GPUs (can be a
> >different fraction of total GPU memory) and might not apply to a given
> >JtR format.  (And we could work on making it not apply where it
> >currently does.)
> 
> Perhaps stating the obvious but please note the difference: the "Max 
> memory allocation" is a limit for any *single* allocation out of 
> possibly many. It seems it's a de-facto standard to set it to a quarter 
> of total memory (I haven't seen any GPU deviating from that). But you 
> can do several allocs (e.g. a hash buffer, a key buffer and a result 
> buffer) for a total of up to 8 GB in Patrick's case and all of our 
> formats will/may do so. I would think that goes for Hashcat too. 
> However, if a *single buffer* (e.g. our key buffer) would need more than 
> the max. alloc figure, we'd need some potentially complex workaround and 
> we don't do any such thing in current code.

You're right, it's almost always 1/4.  Sometimes the reporting changes
when there's already some memory allocated - e.g., I see 770 MB "Max
memory allocation" and 1250 MB "Global memory size" on one GPU of a
7990 with almost 2 GB currently in use (so it appears to subtract the
used memory from the reported total, and adjust the max allocation in a
tricky way).
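
For reference, the two figures clinfo shows map to standard OpenCL
device queries.  Here's a minimal sketch outside of JtR (the function
names and the structure are mine, not JtR code; error checking is
omitted and it assumes at least one GPU device on the first platform):

/* Query the two limits discussed above with plain OpenCL.
 * clinfo's "Max memory allocation" is CL_DEVICE_MAX_MEM_ALLOC_SIZE,
 * "Global memory size" is CL_DEVICE_GLOBAL_MEM_SIZE. */
#include <stdio.h>
#include <CL/cl.h>

static void print_mem_limits(cl_device_id dev)
{
	cl_ulong max_alloc, global_mem;

	clGetDeviceInfo(dev, CL_DEVICE_MAX_MEM_ALLOC_SIZE,
	    sizeof(max_alloc), &max_alloc, NULL);
	clGetDeviceInfo(dev, CL_DEVICE_GLOBAL_MEM_SIZE,
	    sizeof(global_mem), &global_mem, NULL);

	printf("Max memory allocation: %llu MB\n",
	    (unsigned long long)(max_alloc >> 20));
	printf("Global memory size:    %llu MB\n",
	    (unsigned long long)(global_mem >> 20));
}

int main(void)
{
	cl_platform_id plat;
	cl_device_id dev;

	/* First GPU on the first platform, for brevity. */
	clGetPlatformIDs(1, &plat, NULL);
	clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
	print_mem_limits(dev);
	return 0;
}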

Yes, I meant workarounds like you describe, and also splitting the
individual buffers across multiple allocations if necessary (I guess
this is what you call potentially complex).
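
Roughly, that workaround could look like the sketch below (purely
illustrative, not current JtR code; alloc_split and MAX_CHUNKS are
hypothetical names, the context comes from the caller, and the kernels
would then have to address the right chunk):

/* Carve one logical buffer that exceeds CL_DEVICE_MAX_MEM_ALLOC_SIZE
 * into several cl_mem allocations, each within the limit. */
#include <CL/cl.h>

#define MAX_CHUNKS 16

static int alloc_split(cl_context ctx, size_t total, size_t max_alloc,
    cl_mem chunks[MAX_CHUNKS], unsigned int *nchunks)
{
	size_t left = total;
	unsigned int i = 0;
	cl_int err;

	while (left) {
		size_t sz = left > max_alloc ? max_alloc : left;

		if (i == MAX_CHUNKS)
			return -1; /* would need even more pieces */
		chunks[i] = clCreateBuffer(ctx, CL_MEM_READ_WRITE, sz,
		    NULL, &err);
		if (err != CL_SUCCESS)
			return -1;
		left -= sz;
		i++;
	}

	*nchunks = i;
	return 0;
}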

> >magnum - I also just noticed it does try to remove duplicates.  Hmm.
> >Anyway, that's not john-users material anymore (but is john-dev or
> >GitHub issue material).
> 
> It? Do you mean Sayantan's bt code or what?

Yes, I mean this in bt.c:

        num_loaded_hashes = remove_duplicates(num_ld_hashes, dupe_remove_ht_sz, verbosity);
        if (!num_loaded_hashes)
                bt_error("Failed to remove duplicates.");

Maybe there's a bug in the implementation(s) of remove_duplicates()
that we need to find and fix.  Possibly they were effectively untested
because our loader does the same by default.
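
For context, a dupe-removal pass of this kind typically boils down to
probing each hash against a hash table and compacting the unique ones.
A generic illustration follows - it is not the actual bt.c code, the
function name is mine, and it assumes 64-bit hash values, ht_sz greater
than count, and that no hash value is zero (zero marks an empty slot
here):

/* Remove duplicate hashes in place using a simple open-addressing
 * hash table of ht_sz slots.  Returns the number of unique hashes
 * kept, or 0 on allocation failure. */
#include <stdint.h>
#include <stdlib.h>

static unsigned int remove_duplicates_sketch(uint64_t *hashes,
    unsigned int count, unsigned int ht_sz)
{
	uint64_t *ht = calloc(ht_sz, sizeof(uint64_t));
	unsigned int i, kept = 0;

	if (!ht)
		return 0;

	for (i = 0; i < count; i++) {
		unsigned int idx = (unsigned int)(hashes[i] % ht_sz);
		int dupe = 0;

		/* Linear probing; relies on ht_sz > count so the table
		 * never fills up. */
		while (ht[idx]) {
			if (ht[idx] == hashes[i]) {
				dupe = 1;
				break;
			}
			idx = (idx + 1) % ht_sz;
		}
		if (!dupe) {
			ht[idx] = hashes[i];
			hashes[kept++] = hashes[i];
		}
	}

	free(ht);
	return kept;
}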

Alexander
