Message-ID: <BLU159-W1054968D84A24010E98898A4930@phx.gbl>
Date: Sat, 31 Dec 2011 19:36:00 +0000
From: Alex Sicamiotis <alekshs@...mail.com>
To: <john-users@...ts.openwall.com>
Subject: RE: Rules for realistic words

> If you like, also see other messages in that thread by clicking
> thread-prev and thread-next.
>
> > By splicing words in human-like syllables, I achieved a hefty
> > increase in effective cracking speed.
>
> Really? What exactly did you compare? Did you possibly feed knowledge
> of already cracked passwords into your patterns - that is, is your test
> in-sample or out-of-sample? If your test was an in-sample one, a
> semi-fair comparison would be against a .chr file similarly generated
> from your previously cracked passwords. And I say "semi-" because the
> incremental mode was optimized for the out-of-sample case; it would be
> easy to achieve better effective performance for in-sample tests, but
> those have no practical relevance.
>
> Alexander

I have a ~2500-hash DES password file from the mid '90s, with something
like 980 still uncracked. I started off with Cracker Jack (3k c/s) on a
486 and have used John the Ripper on every CPU since (K6, P3, Duron &
Athlon, Celerons). I usually crack it in winter, for the heat generated
by overclocked CPUs, lol...

As for cracking techniques, over the last 16 years I've tried plenty of
stuff... incremental 1-5 characters exhausted, digits exhausted, IIRC
alpha exhausted up to 6 characters, etc.; dictionaries (English +
Greeklish - which I had to make myself, as very few are available);
rules; and so on - always within the grasp of my limited computing
resources (just my PC).

From the 1500+ passwords cracked so far, I have generated a .chr file
with 71 characters. It cracked very effectively for the first 2-3 days
(~60 passwords), then dropped off significantly to one or two passwords
a day (at >8M c/s) as it moved away from the embedded patterns. That's
when I figured that john assimilates patterns from the already cracked
passwords in an intelligent way. Still, as you say, it beats the
out-of-sample performance of the stock incremental mode. By the 8th day
it was at about 75 passwords, with around 1200 left uncracked.

More recently I tried various self-made rules: Greek names, names plus
numbers, names plus symbols, a lot of consonant/vowel combinations that
fit the Greek language / Greeklish expressions (standard, plus variants
with a twist at the start or end), reduced charsets, and the KoreLogic
rules on custom-made dictionaries (too much redundancy with KoreLogic -
that's what triggered my request for uniq'ed wordlist creation). That
broke another ~150 in 7 days, which reduced the uncracked count
significantly - but at the price of too much human time spent on manual
tweaks.

Now I'm testing a 16-character .chr file; on its first day it threw up
another 4 passwords in 24 hours, which is more or less OK
performance-wise, but nothing impressive. It will finish in 5 days and
I won't tweak it further - I'll just wait for it to end and then try
something else.

My benchmark is always how I progress against the uncracked remainder...
If I see "guesses: 0" or "guesses: 1" for too long, or a day goes by
without anything cracked, I get impatient and change approach (time
permitting, of course... otherwise I simply leave it cracking at the
standard rates). Normally, changing approaches is wasteful, because you
keep covering the same candidates over and over.
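For the record, the .chr generation described above is roughly john's
standard procedure; here is a sketch, where custom.chr, the
[Incremental:Custom] section name and the length limits are placeholder
choices of mine (MaxLen = 8 because traditional DES crypt only uses the
first 8 characters):

  # build a charset file from the passwords already cracked into john.pot
  john --make-charset=custom.chr passwd

  # define a mode for it in john.conf
  [Incremental:Custom]
  File = $JOHN/custom.chr
  MinLen = 1
  MaxLen = 8
  CharCount = 71

  # run the new mode against the remaining hashes
  john --incremental=Custom passwd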
For example, now that I'm using variations of small .chr files, it seems
wasteful, because the same candidates would also be tried by a larger
.chr file. But my rationale is that if I eliminate, say, 10% of the
remaining passwords in a short period, then that period has saved me a
very large amount of time on the remaining 90% - so it's not really
wasted.
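As for the KoreLogic redundancy mentioned above, the de-duplication I
was asking about can be approximated with the unique tool that ships
with john, by expanding the dictionary through the rules to stdout
first. A sketch, assuming a jumbo build where --rules can name a config
section, and with greeklish.lst and the section name KoreLogic standing
in for whatever dictionary and imported rules section you actually use:

  # expand the dictionary through the rules, keeping each candidate once
  john --wordlist=greeklish.lst --rules=KoreLogic --stdout | unique korelogic.uniq

  # then run the de-duplicated candidate list against the hashes
  john --wordlist=korelogic.uniq passwd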