Message-ID: <4F8FF7E4.8040906@banquise.net>
Date: Thu, 19 Apr 2012 13:32:52 +0200
From: Simon Marechal <simon@...quise.net>
To: john-users@...ts.openwall.com
Subject: Re: automation equipped working place of hash cracker, proposal

On 19/04/2012 12:53, Aleksey Cherepanov wrote:
> I think the most effective compression for candidates is to distribute
> john.conf, john.rec and some kind of count of candidates to stop after. So
> we run john --stdout on the server, write down all the information we need
> to produce the appropriate .rec files, and then distribute the files to the
> nodes. Or even without --stdout: we just produce the needed .rec files. I
> do not know exactly what is stored in a .rec file, so I do not know how
> easy this would be. But it seems feasible, doesn't it?

.rec stores JtR's state when it stopped, so that the session can be
resumed. I believe you would only need this for incremental mode: single
and wordlist modes (with a reasonable number of rules) are quick enough
to be treated as a single "job", and Markov mode was designed to be
distributed. Wordlist mode can be distributed effectively by just
sending a few rules to each client.
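
For illustration, a minimal sketch of that rules-based split (the input
file, node count and output naming are my assumptions, not anything JtR
prescribes):

    # Sketch: partition a set of wordlist rules across clients so that
    # each node runs "john --wordlist=... --rules=Distributed" over its
    # own share. Input/output file names and node count are assumptions.

    def split_rules(rules, n_nodes):
        """Assign rules round-robin so every node gets a similar share."""
        shares = [[] for _ in range(n_nodes)]
        for i, rule in enumerate(rules):
            shares[i % n_nodes].append(rule)
        return shares

    with open("rules.txt") as f:            # one JtR rule per line
        rules = [line.rstrip("\n") for line in f if line.strip()]

    for node, share in enumerate(split_rules(rules, 4)):
        # Write each share as a named rules section; a client appends
        # this to its john.conf and selects it with --rules=Distributed.
        with open("node%d-rules.conf" % node, "w") as out:
            out.write("[List.Rules:Distributed]\n")
            out.write("\n".join(share) + "\n")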

Generating the right .rec file without resorting to the trick you
mention (breaking after a certain number of candidates and keeping the
corresponding .rec) is probably not trivial. However, going with that
trick implies high CPU usage on the server, and requires finding a way
to stop each client after it has processed its share of the work.
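
To make that cost concrete, here is a rough sketch of the trick (chunk
size, file names and the exact command line are assumptions):

    # Sketch: generate candidates once on the server with
    # "john --stdout", cut the stream into fixed-size chunks, and ship
    # each chunk to a node, which runs "john --stdin hashes < chunkN.txt".
    # All generation work happens here, which is exactly the server CPU
    # cost mentioned above.
    import subprocess

    CHUNK = 1_000_000   # candidates per node; tune to node speed

    proc = subprocess.Popen(["john", "--incremental", "--stdout"],
                            stdout=subprocess.PIPE, text=True)

    buf, node = [], 0
    for line in proc.stdout:
        buf.append(line)
        if len(buf) == CHUNK:
            with open("chunk%d.txt" % node, "w") as out:
                out.writelines(buf)
            buf, node = [], node + 1
    if buf:                              # flush the final partial chunk
        with open("chunk%d.txt" % node, "w") as out:
            out.writelines(buf)

One nice side effect of chunking like this is that the "stop the
clients" problem disappears: a client simply runs out of input when its
chunk ends, at the price of shipping the candidates over the network.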
