Date: Mon, 14 Nov 2016 22:31:12 +0100
From: magnum <>
Subject: Re: Loading high number of hashes on GPU

On 2016-11-14 21:48, Solar Designer wrote:
> On Mon, Nov 14, 2016 at 06:18:25PM +0100, Patrick Proniewski wrote:
>> Speaking about memory limit on GPU, is there any ?
> It varies by format, specific GPU, etc.
>> I've seen benches of hashcat on badass GPU using only 2GB of the 8GB VRAM available. I have not yet had the opportunity to use JtR on GPU and I wonder if the limit is the same.
> Not necessarily the same.  A potentially relevant limit is what clinfo
> lists as "Max memory allocation", but it varies across GPUs (can be a
> different fraction of total GPU memory) and might not apply to a given
> JtR format.  (And we could work on making it not apply where it
> currently does.)

Perhaps stating the obvious, but please note the difference: the "Max 
memory allocation" is a limit on any *single* allocation out of 
possibly many. It seems to be a de facto standard to set it to a 
quarter of total memory (I haven't seen any GPU deviating from that). 
But you can do several allocations (e.g. a hash buffer, a key buffer 
and a result buffer) for a total of up to 8 GB in Patrick's case, and 
all of our formats will/may do so. I would think that goes for Hashcat 
too. However, if a *single buffer* (e.g. our key buffer) needed more 
than the max alloc figure, we'd need some potentially complex 
workaround, and we don't do any such thing in current code.
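To illustrate the distinction, here is a minimal sketch with made-up 
numbers (on a real device you'd query CL_DEVICE_GLOBAL_MEM_SIZE and 
CL_DEVICE_MAX_MEM_ALLOC_SIZE via clGetDeviceInfo, or just read clinfo's 
output):

```python
# Sketch of the two limits: each *single* allocation must stay under
# max_alloc, while only the *sum* of all allocations is bounded by
# total device memory. Figures below are hypothetical.
GIB = 1024 ** 3
total_mem = 8 * GIB          # e.g. Patrick's 8 GB card
max_alloc = total_mem // 4   # the common quarter-of-total default

def allocs_fit(buffers):
    """True if every buffer is within the per-allocation cap and the
    total fits in device memory."""
    return (all(b <= max_alloc for b in buffers)
            and sum(buffers) <= total_mem)

# Three 2 GiB buffers (say hash, key and result) are fine: each one is
# within the 2 GiB per-allocation cap and the 6 GiB total fits in 8 GiB.
print(allocs_fit([2 * GIB, 2 * GIB, 2 * GIB]))  # True

# A single 3 GiB buffer fails, even though 3 GiB easily fits in total
# memory -- this is the case that would need a workaround.
print(allocs_fit([3 * GIB]))  # False
```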

> magnum - I also just noticed it does try to remove duplicates.  Hmm.
> Anyway, that's not john-users material anymore (but is john-dev or
> GitHub issue material).

It? Do you mean Sayantan's bt code or what?

