Date: Wed, 1 Feb 2017 21:25:23 +0100
From: Patrick Proniewski <p+password@...atpro.net>
To: john-users@...ts.openwall.com
Subject: to Single or not to Single

Hello,

I'm looking for the best way to optimize a specific cracking session.
I've got a dump of more than 45 million hashes+salts (dynamic_25), and for each one I've got a very serious candidate: it works directly for more than 50% of them, and works with a simple modification the rest of the time.

After some thinking, I've decided to try the Single mode. I built a password file that looks like this:

candidate:$dynamic_25$hash$salt
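
For reference, this is roughly how I build that file. It's a simplified Python sketch: candidates.txt, hashes.txt and the assumption that both files are in matching line order are placeholders, not my real setup.

# Pair each candidate with its hash+salt in a passwd-like format, so that
# Single mode picks the candidate up from the login field.
# candidates.txt / hashes.txt are placeholder names, assumed to be in
# matching line order.
with open("candidates.txt") as cands, \
     open("hashes.txt") as hashes, \
     open("single_input.txt", "w") as out:
    for cand, h in zip(cands, hashes):
        # h is expected to look like "$dynamic_25$<hash>$<salt>"
        out.write(f"{cand.rstrip()}:{h.strip()}\n")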

My question is: does this look like a good approach to you?

RAM consumption turns out to be horrendous: even with a split file, I can very rapidly saturate 30 GB of RAM. I've split the dump into smaller pieces and refrained from using --fork.
A 4.7-million-line hash file (325 MB), without --fork, with SingleRetestGuessed = N and with --rules=none, uses about 20 GB of RAM.
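
In case it matters, here's roughly how I split the dump — a quick Python sketch; the chunk size and file names are just placeholders.

# Split the big Single-mode input into fixed-size chunks so that each john
# run stays within memory. Chunk size and file names are placeholders.
CHUNK = 1_000_000  # lines per output file

with open("single_input.txt") as src:
    out, count, idx = None, 0, 0
    for line in src:
        if out is None or count >= CHUNK:
            if out is not None:
                out.close()
            idx += 1
            out = open(f"single_input.{idx:03d}", "w")
            count = 0
        out.write(line)
        count += 1
    if out is not None:
        out.close()
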
Another strange thing: the cracking rate slowly decreases over the course of the session. I would have thought the speed would be constant across the file, but in fact it isn't. Shorter candidates are more likely to crack their corresponding hash, so maybe John orders the work so that hashes for shorter candidates are computed before longer ones… That would be a plausible explanation.
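
I might try to check that hypothesis against the pot file, since cracked entries are appended in the order they were found. Rough Python sketch, assuming the default john.pot path and that the ciphertext part contains no ':'.

# Read john.pot in crack order and print the mean plaintext length per block
# of entries; if the mean grows over time, shorter candidates were indeed
# cracked first. Path and block size are placeholders.
from statistics import mean

BLOCK = 100_000  # entries per averaging block
lengths = []
with open("john.pot") as pot:
    for line in pot:
        # lines look like "$dynamic_25$<hash>$<salt>:<plaintext>"
        _, _, plain = line.rstrip("\n").partition(":")
        lengths.append(len(plain))

for i in range(0, len(lengths), BLOCK):
    block = lengths[i:i + BLOCK]
    print(f"entries {i}-{i + len(block) - 1}: mean length {mean(block):.2f}")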

I'm currently running the simplest pass on my 4.7-million-line files, with no rules, in order to reduce as quickly as possible the number of hashes I'll have to work harder on. But I'm not sure it's the best move.
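
Between passes, this is roughly how I plan to drop already-cracked entries from the working set — a Python sketch with placeholder file names; john's own pot handling probably does this more cleanly.

# Remove entries whose ciphertext already appears in the pot file, so the
# next (harder) pass only sees uncracked hashes. File names are placeholders,
# and candidates are assumed to contain no ':'.
cracked = set()
with open("john.pot") as pot:
    for line in pot:
        cipher, _, _ = line.rstrip("\n").partition(":")
        cracked.add(cipher)

with open("single_input.txt") as src, open("single_input.left", "w") as out:
    for line in src:
        # input lines look like "candidate:$dynamic_25$<hash>$<salt>"
        _, _, cipher = line.rstrip("\n").partition(":")
        if cipher not in cracked:
            out.write(line)
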
Any comment appreciated.

patpro
