Message-ID: <20120419105313.GA18039@debian>
Date: Thu, 19 Apr 2012 14:53:13 +0400
From: Aleksey Cherepanov <aleksey.4erepanov@...il.com>
To: john-users@...ts.openwall.com
Subject: Re: automation equipped working place of hash cracker,
 proposal

On Thu, Apr 19, 2012 at 09:53:41AM +0200, Simon Marechal wrote:
> On 19/04/2012 02:03, Aleksey Cherepanov wrote:
> > On Wed, Apr 18, 2012 at 11:35:23PM +0200, Frank Dittrich wrote:
> > > On 04/18/2012 10:27 PM, Aleksey Cherepanov wrote:
> > > > On Mon, Apr 16, 2012 at 10:52:30AM +0200, Simon Marechal wrote:
> > > > > If I was to design this, I would do it that way :
> > > > > * the server converts high level demands into low level job units
> > > > > * the server has at least a network API, and possibly a web interface
> > > > > * the server handles dispatching
> > > > 
> > > > I think the easiest way to split a cracking task into parts for
> > > > distribution is to split the candidates list, to granulate it: we run
> > > > our underlying attack command with '--stdout', split the output into
> > > > packs and distribute those packs to nodes that will just use them as
> > > > wordlists. Pros: it is easy to implement, it is flexible and
> > > > upgradable, it supports modes that we don't want to run to the end,
> > > > like incremental mode, and all attacks could be parallelized this way
> > > > (if I am not wrong). Cons: it seems to be suboptimal, and it does not
> > > > scale well (candidate generation could become a bottleneck, though it
> > > > could be distributed too),
> > > 
> > > I'm afraid network bandwidth will soon become a bottleneck, especially
> > > for fast saltless hashes.
> > If we take bigger packs of candidates then they could be compressed well,
> > so we trade network bandwidth for CPU time.
> 
> With N clients, you will need to generate, compress and send your
> candidates N times faster than the average client can crack. During the
> contest you might have way more than 500 cores on your hands.
> 
> Even for a moderately slow use case this will not work: for a 10 minute
> single salt md5-crypt job, it will require 10213800 passwords on my
> computer (for a single core, not 8xOMP). Generating the passwords and
> compressing them with gzip takes 5.67s and produces a 25MB file. With
> lzma it takes 36.05s and produces a 5.7MB file. With 8 cores on the
> master, you will be able to serve 850 cores with gzip, 133 with lzma.
> You will also need to upload at 35MB/s with gzip and 1.26MB/s with lzma.
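
For reference, a quick back-of-envelope check of the figures above (all
inputs are the numbers you quoted; the script is only for illustration):

# Recompute the throughput figures (inputs are the numbers quoted above).
job_seconds = 600.0                    # one 10 minute job per client core
master_cores = 8

gzip_seconds, gzip_mb = 5.67, 25.0     # time and size to prepare one job's pack
lzma_seconds, lzma_mb = 36.05, 5.7

for name, secs, mb in (("gzip", gzip_seconds, gzip_mb),
                       ("lzma", lzma_seconds, lzma_mb)):
    cores_served = master_cores * job_seconds / secs   # packs prepared per job period
    upload = cores_served * mb / job_seconds           # MB/s the master must push
    print("%s: ~%.0f client cores, ~%.2f MB/s upload" % (name, cores_served, upload))

# Prints roughly 847 cores / 35.3 MB/s for gzip and 133 cores / 1.26 MB/s
# for lzma, matching the ~850 / 35MB/s and 133 / 1.26MB/s figures above.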

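For completeness, here is roughly what the pack splitting I proposed earlier
could look like (chunk size and file names are arbitrary, just a sketch):

# Sketch of the pack splitting idea: pipe `john --stdout` through a chunker
# and gzip every chunk so a node can use it directly as a wordlist.
import gzip
import itertools
import subprocess

CHUNK = 1000000  # candidates per pack, arbitrary

def make_packs(john_args, prefix="pack"):
    john = subprocess.Popen(["john", "--stdout"] + list(john_args),
                            stdout=subprocess.PIPE)
    for i in itertools.count():
        lines = list(itertools.islice(john.stdout, CHUNK))
        if not lines:
            break
        name = "%s-%06d.gz" % (prefix, i)
        with gzip.open(name, "wb") as pack:
            pack.writelines(lines)
        yield name

# e.g. for name in make_packs(["--incremental"]): push the pack to an idle node
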
I think the most effective compression for candidates is to send a john.conf,
a john.rec and some kind of value for the amount of candidates to stop after.
So we run john --stdout on the server, write down all the information we need
to produce appropriate .rec files, and then distribute the files to the nodes.
Or even without --stdout: we just produce the needed .rec files directly. I do
not know what exactly is stored in a .rec file, so I do not know how feasible
this is. But it seems doable, doesn't it?
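
To make the idea a bit more concrete, a job unit could look roughly like this
(the hard parts, producing a .rec for an arbitrary starting point and making
the node stop after the given amount, are exactly what I do not know yet, so
this is only a sketch):

# Sketch of a "send state, not candidates" job unit: the attack's john.conf,
# a .rec file with the node's starting state, and a candidate budget.
# How to create the .rec for an arbitrary offset is the open question above.
import io
import json
import tarfile

class JobUnit(object):
    def __init__(self, conf_path, rec_path, budget):
        self.conf_path = conf_path  # john.conf describing the attack
        self.rec_path = rec_path    # .rec session file, node's starting state
        self.budget = budget        # how many candidates to try before stopping

def pack_job(unit, out_path):
    """Bundle a job unit into a small tarball to ship to a node."""
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(unit.conf_path, arcname="john.conf")
        tar.add(unit.rec_path, arcname="job.rec")
        meta = json.dumps({"budget": unit.budget}).encode("utf-8")
        info = tarfile.TarInfo("meta.json")
        info.size = len(meta)
        tar.addfile(info, io.BytesIO(meta))
    return out_path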

Regards,
Aleksey Cherepanov
