Date: Sat, 09 Sep 2006 16:26:06 +1200
From: Russell Fulton - ISO <r.fulton@...kland.ac.nz>
To: john-users@...ts.openwall.com
Subject: need advice on setting up cracking system for large organisation

Hmmm... sorry about the length of this post, but I've found in the
past that it's often better to give too much detail than not enough...

Hi Folks,
	 Spurred by the steady increase in brute-force attacks against
anything that uses passwords (ssh, ftp, *sql, etc.) and is exposed to
the 'Net, I've decided to implement a service for our system admins to
check the strength of locally held passwords.

I intend to set up a (hopefully ;) secure drop box where admins can use
scp to drop off password files to be cracked.  I will then farm the
actual cracking out to a bunch of 'servers', using 'idle' cycles on
various big machines in the data centre.
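
Something along these lines is what I have in mind for the receiving
side -- just a skeleton, and the paths and the "upload is complete"
test are placeholders:

  #!/usr/bin/env python3
  # dropbox.py -- sweep the scp landing directory and move finished
  # uploads into a work queue for the cracking machines.
  import os, shutil, time

  INCOMING = '/srv/dropbox/incoming'
  QUEUE    = '/srv/dropbox/queue'

  while True:
      for name in os.listdir(INCOMING):
          src = os.path.join(INCOMING, name)
          size = os.path.getsize(src)
          time.sleep(10)                 # crude: is the size stable?
          if os.path.getsize(src) == size:
              shutil.move(src, os.path.join(QUEUE, name))
      time.sleep(60)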

I've spent a day or so reading up about John and doing some tests.
I've been through most of the docs on the website (a good collection,
btw) and I still have some questions about the best way to proceed.

One of the major issues is that we have *many* different hash formats
out there (probably one of everything that has been used in the last 10
years :).  My understanding is that John will crack only one hash type
at a time, so I need to sort incoming files by type.

Is there a quick way of doing this?

My initial idea is to run a single-mode crack on each file as soon as
it is received; this should find any obvious bloopers and tell me what
type of hashes the file contains.  Sensible?
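
For the sorting, I was thinking of a rough triage script along these
lines -- the prefixes and lengths are my guesses based on common
crypt(3) conventions, so treat the labels as heuristics:

  #!/usr/bin/env python3
  # triage.py -- rough first pass at guessing the hash type of each
  # entry in a passwd/shadow-style file.
  import sys

  def hash_type(h):
      if h.startswith('$1$'):
          return 'BSD MD5 (md5crypt)'
      if h.startswith('$2'):
          return 'OpenBSD Blowfish (bcrypt)'
      if len(h) == 13:
          return 'traditional DES crypt'
      if len(h) == 32 and all(c in '0123456789abcdefABCDEF' for c in h):
          return '32-hex digest (LM/NT dumps look like this)'
      return 'unknown'

  for line in open(sys.argv[1]):
      fields = line.rstrip('\n').split(':')
      if len(fields) < 2 or fields[1] in ('', 'x', '*'):
          continue                  # shadowed or locked entry
      print('%-12s %s' % (fields[0], hash_type(fields[1])))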

[Yes, I'll probably want to add various contributed hashes to john
before the project goes live.]

I then intend to farm the password files out to other (faster)
machines to crack based on word lists (yes, I've bought the CD :).
From my understanding of how John works, I'm guessing that the best
strategy is to first group the password files by hash type and process
each group in one go, sending all files to each machine and splitting
the wordlist between machines.  Is this correct?
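
By "splitting the wordlist" I mean something like dealing the lines
out round-robin, one slice per machine (the output names here are
made up):

  #!/usr/bin/env python3
  # split_wordlist.py -- deal a wordlist into N slices, one per
  # cracking machine: python3 split_wordlist.py all.lst 8
  import sys

  wordlist, nslices = sys.argv[1], int(sys.argv[2])
  outs = [open('slice.%02d' % i, 'w') for i in range(nslices)]
  for i, word in enumerate(open(wordlist)):
      outs[i % nslices].write(word)
  for f in outs:
      f.close()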

I have tried a run with john using the mangled list against 10 users
in a BSD MD5 format password file.  It took 15 hours on a fairly fast
Intel processor.  Any guesstimates of how this will scale with the
number of passwords?  Given that each account typically gets its own
salt, I'm guessing that for UNIX systems it will be fairly linear with
a slope of 1.  If this is the case, should I just send one file to
each machine with the full list?  That is certainly easier.
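
My back-of-the-envelope numbers, assuming every account ends up with
a distinct salt (which seems to be the norm):

  10 salted users took ~15 hours  =>  ~1.5 hours per salt
  100 salted users   =>  ~150 hours, i.e. linear with slope 1,
                         since every candidate must be hashed once
                         per distinct salt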

OTOH, I assume that Windows hashes, which I understand to be unsalted,
should scale with a slope of well under 1?

The initial aim of this exercise is to find the low-hanging fruit that
might be broken by a brute-force attack, so I have generated a shorter
mangled list with length limited to 6 characters.  Is this the best
strategy to achieve this aim, or should I try incremental mode with
some set time/length limit?
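
Generating the short list was just a length filter over the mangled
list, e.g.:

  #!/usr/bin/env python3
  # trim.py -- keep only candidates of six characters or fewer:
  #   python3 trim.py < mangled.lst > short.lst
  import sys

  for word in sys.stdin:
      if len(word.rstrip('\n')) <= 6:
          sys.stdout.write(word)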

Longer-term plans are to keep hitting the files until we are fairly
sure they conform to policy.  I also intend to diff the files as they
are resubmitted and test any new passwords to the same level as the
rest of the file.
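
For the diffing I have in mind something like this (it assumes
user:hash:... lines; only new or changed hashes get re-tested):

  #!/usr/bin/env python3
  # rediff.py -- print accounts whose hash is new or changed between
  # two submissions: python3 rediff.py old.pw new.pw
  import sys

  def load(path):
      entries = {}
      for line in open(path):
          fields = line.rstrip('\n').split(':')
          if len(fields) >= 2:
              entries[fields[0]] = fields[1]
      return entries

  old, new = load(sys.argv[1]), load(sys.argv[2])
  for user, h in sorted(new.items()):
      if old.get(user) != h:
          print('%s:%s' % (user, h))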

Once this setup is up and running, all systems that have
password-based services exposed to the Internet will be required to
submit files at least once a month, or whenever they change.  Ditto
for all the systems in the data centre -- we have to do something
useful with all those wasted CPU cycles :)

Cheers and thanks,

Russell


-- 
Russell Fulton, Information Security Officer, The University of Auckland
New Zealand
