Date: Wed, 7 Mar 2007 12:47:27 +0100
From: Pablo Fernandez <pfernandez@....org>
To: john-users@...ts.openwall.com
Subject: Re: Splitting the Charset into chunks

Hi again,

One of the reasons I don't want to touch the code is maintenance. If I write a 
script able to run John on a grid, rather than changing some lines here and 
there in the .c files, it will probably see more use (and publication) in the 
future.

So, if the charset is not to be divided into millions of finite jobs, I'll go 
for a discrete number of infinite jobs instead. I've seen an external mode for 
parallelizing already included in the examples (External:Parallel), but it 
looks rather inefficient to me: it basically computes whether the current 
candidate word belongs to this node and skips it if not, so with 100 nodes 
each node checks one word and skips 99.
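
For reference, that bundled filter looks roughly like this (reconstructed from 
memory, so the comments and the node/total values are mine, not a verbatim 
copy of the john.conf section):

[List.External:Parallel]
int node, total;        // this node's number (1-based) and the node count
int number;             // sequence number of the current candidate

void init()
{
        node = 1; total = 100;   // e.g. node 1 of 100; edited on each node
        number = node - 1;       // offset so the nodes get disjoint words
}

void filter()
{
        if (number++ % total)    // not this node's turn?
                word = 0;        // reject the candidate
}

Every node still runs the full candidate generator and calls filter() on every 
word, which is exactly where the per-node loss below comes from.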

I've tested how inefficient it is, and it turned out to be quite acceptable. 
For 1000 nodes there's a 25% loss on every node, for 250 nodes about 8%. I 
tried more values and arrived at roughly this relation: about 0.030% loss per 
node on every node, dropping to 0.025% per node above 1000 nodes and 0.008% 
per node above 10000 nodes, probably thanks to the cache :)
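
To put that in absolute terms (my own rough check using the figures above):

    250 nodes * 0.030% per node ~=  7.5% of each node's time spent filtering
                                    (close to the 8% I measured)
   1000 nodes * 0.025% per node  =   25% (matches the 25% I measured)
  10000 nodes * 0.008% per node  =   80% (the figure I use below)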

Those look like good numbers, but they don't hold up on a very big cluster: 
with 10000 machines you lose 80% of each machine's power just looking for the 
next word to try. So I think I'll write the code for a small cluster (say 300 
nodes with about a 10% loss... after all, getting access to 10k machines is 
not that easy xD, but 10% is still a lot) unless anybody knows a better way to 
split the charset.

I've tried to look deeper into the charset.c code, but it seems too 
complicated to me, and again, I don't want to modify John's source files. Does 
anybody know a better way to skip a given number of words in a row from an 
external filter? As far as I understand it, I guess it's impossible.
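
The closest thing I can picture with an external filter alone is something 
like this sketch (my own untested guess; the node/total/blocksize names and 
values are made up), where each node takes whole blocks of consecutive 
candidates instead of every total-th word:

[List.External:ParallelBlocks]
int node, total, blocksize;   // made-up parameters, edited per node
int number;

void init()
{
        node = 1; total = 300;    // node 1 of 300, for example
        blocksize = 10000;        // hand out candidates in runs of 10000
        number = 0;
}

void filter()
{
        if ((number++ / blocksize) % total != node - 1)
                word = 0;         // this block belongs to another node
}

But filter() still gets called for every single candidate, so this doesn't 
actually skip anything cheaply; it only changes which words a node keeps.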

Solar, you mentioned a way to split a charset file other than having inc.c 
skip some order[] entries; can you tell me more about it?

I think that's all I wanted to say; sorry for the long email.

Thanks, and best regards,
Pablo



On Tuesday 27 February 2007 15:39, Solar Designer wrote:
> On Mon, Feb 26, 2007 at 09:32:08PM +0100, Pablo Fernandez wrote:
> > I'm trying to find a way to split any charset into a given (very high)
> > number of disjoint chunks and then create a new charset file with each
> > chunk.
>
> While there's a way to split a charset file like that, exactly the same
> effect is better achieved by having the code in inc.c skip some order[]
> entries.  The number of chunks won't be very high, though - there will
> be just a few hundred of reasonably large ones (with order[] indices of
> around 2000 for the default CHARSET_* parameters).
>
> There's no better way to split the charset files, assuming that the code
> in JtR remains unchanged; even if you would drop some characters from
> some character positions, those characters would be re-added with
> inc.c: expand(), so your different nodes would eventually be trying the
> same candidate passwords and they would do so in a less optimal order.
>
> > Maybe the code used to create the default ones may help? Is it available?
>
> Yes, the code is available - it's just the --make-charset option to JtR
> and the corresponding code is found in charset.c.
>
> > Also I found it very difficult to understand the insights of a charset
> > file, I have seen no documentation about it in the code. Is there any?
>
> It's just the comments in charset.h.
>
> --
> Alexander Peslyak <solar at openwall.com>
> GPG key ID: 5B341F15  fp: B3FB 63F4 D7A3 BCCC 6F6E  FC55 A2FC 027C 5B34 1F15
> http://www.openwall.com - bringing security into open computing environments
>
> Was I helpful?  Please give your feedback here:
> http://rate.affero.net/solar

