Message-ID: <44452.128.173.192.90.1334260936.squirrel@webmail.tuffmail.net>
Date: Thu, 12 Apr 2012 16:02:16 -0400 (EDT)
From: "Brad Tilley" <brad@...ystems.com>
To: john-users@...ts.openwall.com
Subject: Re: JTR against 135 million MD5 hashes

> How much RAM do you have?

Speaking for myself... I have 4G.

> How much does the "john" process occupy?

It consumes all of the RAM and starts swapping (using pretend RAM ;)) when
I try to load all of those raw-md5 hashes at once. Here's the file size:

ls -alh ../../hashes/raw-md5.hashes.txt
-rw-r--r-- 1 rbt rbt 4.3G
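
(As a rough sanity check, assuming one 32-character hex MD5 plus a newline
per line, i.e. 33 bytes/line: 135-140 million lines * 33 bytes comes to
about 4.5 billion bytes, i.e. roughly 4.2-4.3G in the powers-of-1024 units
that ls -h uses.)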

> What version of JtR is that?

I just tested this with john-1.7.9-jumbo-5

<snip>

> Alternatively, try the
> --save-memory=2 option or/and split the file into a few chunks (e.g.,
> maybe halves of that large file will fit in your RAM one at a time).

--save-memory=2 did not reduce the RAM usage in my testing. The file seems
to be so large that john swaps regardless; not really a surprise. This
split command will break the large file into 14 smaller files of roughly
315M each (10 million hashes per file):

split -l 10000000 -d raw-md5.hashes.txt
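
split -d names the chunks x00, x01, and so on, so something like this
(an untested sketch; adjust the john binary path and options to your
setup) will walk john over all of them:

for f in x[0-9][0-9]; do
    ./john --format=raw-md5 "$f"
done

Cracked passwords from every chunk accumulate in the same john.pot, so
nothing is lost by splitting the input.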

john will crack these smaller files without swapping, using only about
1.3G of RAM. I tested 16Crack too (same issue: it would swap when loading
that many hashes). The reasonable thing to do with this many hashes is
either to get more RAM or to break the hashes up into chunks that fit
into your RAM.
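
If you want to confirm a chunk isn't pushing you into swap while john
runs, one simple option (any memory monitor will do) is:

vmstat 1

and watch that the si/so (swap-in/swap-out) columns stay at 0.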

Just my opinion. Hope it helps.

Brad


