Message-ID: <20120604081107.GA12668@openwall.com>
Date: Mon, 4 Jun 2012 12:11:07 +0400
From: Solar Designer <solar@...nwall.com>
To: john-dev@...ts.openwall.com
Subject: Re: Password Generation on GPU

myrice -

On Mon, Jun 04, 2012 at 12:25:17AM +0800, myrice wrote:
> On Mon, Apr 30, 2012 at 12:32 PM, Solar Designer <solar@...nwall.com> wrote:
> > void set_mask(int count, int *positions, char **charsets)
[...]
> At begin I want to use each threads generates candidate passwords.

What do you mean by this? Multiple threads on CPU, each generating
candidate passwords? No, that's not what we'll have if we're talking of
a GPU-enabled format. The candidate password generation would be on
GPU, and we'd use at most one logical CPU per GPU.

That said, for CPU-only formats, the set_mask() idea would in fact be
usable to generate candidate passwords by multiple threads - right from
inside of crypt_all().

> I guess with set_mask(), we can reuse current password generation
> mode(e.g. incremental, wordlist or single).

Not quite. Initially, only the new mask mode (to be added) would take
advantage of set_mask(). Later, we can add some support for combining
this with other cracking modes (I posted some thoughts on that already,
but it's a relatively distant future right now).

> Here, I have some questions regarding the set_mask()
>
> 1) I think the set_mask() should be implemented on GPU, at least, we
> should generate password candidates and directly place them on GPU
> memory to avoid data transfer. Otherwise, I do not know benefits of
> this - the only we have done is generate more password candidates?

set_mask() itself is merely a wrapper that sets variables. It may also
do some preprocessing if such is needed, but that's all. In fact, it
might be called just once, outside of the cracking loop. (Or it might
also be called in the cracking loop by another cracking mode.)

The actual implementation will be inside crypt_all(), which after a
set_mask() will know to generate and hash many more candidate passwords
than it had keys set with set_key(). Yes, the candidates will be
generated on GPU. In fact, due to the simplicity of this special
cracking sub-mode (so to speak), you should not even need to store them
in the GPU card's global memory. You'll be generating and hashing them
right away. And you don't even need to store them anywhere. If there's
a successful guess, get_key() should be able to reconstruct the
candidate password with a given number (this may be implemented on CPU).
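
To make that concrete, here is a rough stand-alone sketch (not actual
JtR code; every name other than set_mask() is made up for illustration)
of the index-to-candidate mapping this implies: set_mask() merely
records its parameters, and candidate number idx is decoded mixed-radix
style into the masked positions of the base key that was set with
set_key(). A GPU kernel would do the equivalent per work item, with its
global id serving as idx, and get_key() could repeat the same decoding
on CPU to reconstruct a cracked candidate.

#include <stdio.h>
#include <string.h>

static int mask_count;        /* number of masked positions */
static int *mask_positions;   /* index of each masked character in the key */
static char **mask_charsets;  /* charset tried at each masked position */

/* The wrapper itself: just set variables, as discussed above. */
void set_mask(int count, int *positions, char **charsets)
{
	mask_count = count;
	mask_positions = positions;
	mask_charsets = charsets;
}

/* How many candidates one base key expands to under the current mask. */
static unsigned long mask_total(void)
{
	unsigned long total = 1;
	int i;
	for (i = 0; i < mask_count; i++)
		total *= strlen(mask_charsets[i]);
	return total;
}

/* Reconstruct candidate number idx from the base key, in place. */
static void mask_expand(char *key, unsigned long idx)
{
	int i;
	for (i = 0; i < mask_count; i++) {
		unsigned long len = strlen(mask_charsets[i]);
		key[mask_positions[i]] = mask_charsets[i][idx % len];
		idx /= len;
	}
}

int main(void)
{
	int positions[2] = { 6, 7 };
	char *charsets[2] = { "0123456789", "abc" };
	unsigned long idx, total;

	set_mask(2, positions, charsets);
	total = mask_total(); /* 10 * 3 = 30 candidates per base key */

	for (idx = 0; idx < total; idx++) {
		char key[] = "secret??"; /* base key, as from set_key() */
		mask_expand(key, idx);
		printf("%lu: %s\n", idx, key);
	}
	return 0;
}

With such a mapping, crypt_all() never needs to store the candidates:
each work item derives its own password from (base key, idx), hashes
it, and forgets it.
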
> 2) With the set_mask() on GPU, we should pass the argument to GPU and
> before the crack, we have to generate password candidates first,
> right?

Not quite. See answer above. set_mask()'s arguments will in fact need
to be passed to GPU at some point before or at the crypt_all() call.
Then crypt_all() will proceed to generate/hash/forget the candidates.
So you don't need to generate any of them "first".

> 3) We still have to call multiple time of set_key()

Maybe, but not necessarily. This depends on cracking mode and its
implementation. For mask mode, we may choose to call set_key() just
once per crypt_all() - having the pre-set mask do everything else.

> So in my understanding, we have
> - Less set_key() call
> - Less data size transfer from CPU to GPU
> This reduce the CPU execution time.
>
> But we take more time at GPU with
> + Generate password on GPU

Yes.

> The total password candidates number in one crypt_all() call should be
> the same.

Yes, it should be similar to what we'd use now. Not necessarily exactly
the same. Some tuning will be needed.

> We have to deal with memory on GPU.

Only to store the computed hashes. Or maybe not even for that purpose,
if we move hash comparisons onto GPU and inside crypt_all() as well,
which is a closely related change.

> I hope more details of this implementation.

Did I provide sufficient detail above? Any other questions?

Thanks,

Alexander