Message-ID: <CAJ9ii1G5X8Fth=dYANYD6KduomtWwk=G+pe1mMW=-3tma2T4fw@mail.gmail.com>
Date: Tue, 19 Nov 2024 15:46:10 -0500
From: Matt Weir <cweir@...edu>
To: john-users@...ts.openwall.com
Subject: Re: Markov phrases in john

>>  One thing that surprised me is that your top 25 for training
>> on RockYou Full ...

Yup, I included dupes. Two possibilities: A) our versions of RockYou are
slightly different (there are some variations of the list floating around),
or B) the order of the passwords during training matters. For reference, my
version of RockYou has been randomized to make it easier to slice up. I can
run it on the original list when I have some time to see if that's the
cause.

>> I thought you'd also try the tokenizer along with
>> OMEN ...

It should be pretty easy, but it fell outside my initial research. By "easy"
I mean an hour or two of work, mostly due to all the "known unknowns" that
pop up when trying something new. My thought process is:

1) Create the "tokenized" training set just like we do with Incremental
(minus putting it in potfile format)
2) Run an OMEN training on it. Aka: python3 pcfg_trainer.py -t
TOKENIZED_TRAINING_FILE.txt -r OMEN_TOK_TEST -c 0
-- The '-c 0' option sets the coverage to 0 so it will only generate
OMEN guesses. Side note: setting '-c 1.0' means no OMEN guesses will be
generated. The default is 0.6.
3) Generate guesses using the PCFG tool and pipe them into JtR with the
external mode. Aka: python3 pcfg_guesser.py -r OMEN_TOK_TEST | ./john --stdin
--external=untokenize_omen --stdout
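
As a side note, here is a toy sketch of what the coverage knob trades off. To be clear, this is NOT pcfg_cracker's actual algorithm (the real weighting is probability-mass based and more involved); it just illustrates that '-c' splits the guess stream between the trained grammar and OMEN:

```python
def interleave_by_coverage(pcfg_guesses, omen_guesses, coverage):
    # Conceptual illustration only -- not pcfg_cracker's real logic.
    # Roughly `coverage` of the emitted guesses come from the PCFG
    # grammar and the rest from OMEN, so -c 0 is pure OMEN and
    # -c 1.0 is pure PCFG.
    if coverage <= 0.0:
        yield from omen_guesses
        return
    if coverage >= 1.0:
        yield from pcfg_guesses
        return
    pcfg, omen = iter(pcfg_guesses), iter(omen_guesses)
    emitted = from_pcfg = 0
    while True:
        take_pcfg = from_pcfg <= coverage * emitted
        guess = next(pcfg if take_pcfg else omen, None)
        if guess is None:  # one stream ran dry: drain the other
            yield from (omen if take_pcfg else pcfg)
            return
        yield guess
        emitted += 1
        from_pcfg += take_pcfg
```

With the default coverage of 0.6 this emits roughly 6 grammar guesses for every 4 OMEN guesses.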

The "known unknown" where things could go sideways is passing the
tokenizer's control characters through pipes, and whether weirdness occurs
in my generating tool (pcfg) when it tries to output them. It might work,
but it might not.
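
To spell out the concern: the tokenizer substitutes frequent multi-character strings with single otherwise-unused byte values, and those bytes then have to survive every hop of the pipeline byte-for-byte. A toy round trip (the token table below is made up; the real one comes out of training):

```python
# Toy model of the tokenize/untokenize substitution. The real
# tokenizer builds its table during training and picks byte values
# that don't appear in the training data; if any stage of the pipe
# re-encodes, strips, or otherwise mangles non-printable bytes, the
# round trip breaks.
TOKENS = {
    b"\x01": b"love",
    b"\x02": b"123",
    b"\x03": b"er",
}

def tokenize(password: bytes) -> bytes:
    for ctrl, tok in TOKENS.items():
        password = password.replace(tok, ctrl)
    return password

def untokenize(guess: bytes) -> bytes:
    for ctrl, tok in TOKENS.items():
        guess = guess.replace(ctrl, tok)
    return guess

assert untokenize(tokenize(b"lover123")) == b"lover123"
```

If the generating tool writes its guesses out in text mode, or anything in the pipe assumes UTF-8, that assert is exactly the kind of thing that silently stops holding.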

If I have time I might try to look into the two items above, but I'm not
able to spend a lot of time sitting down to run tests right now. So this
might be a while.

As a full disclaimer, someone rightly corrected me on the Hashcat forum by
pointing out there are some differences between my implementation of OMEN
and the official RUB-SysSec version of it. I'll admit I totally forgot
about that since I wrote that code 6 years ago, and at least in my memory
they were equivalent. Turns out they are not. The PCFG code is easiest for
me to use (and the OMEN code at least follows the spirit of what RUB-SysSec
proposed), so I'm going to stick with that, but that might be a future area
to look into for anyone interested.

>> If so, maybe these options would help ...

Thanks! I appreciate it. I think one gap in academic research in general is
that most papers only model very short cracking sessions, so anything we can
do to make analyzing longer cracking sessions easier will help!
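
For what it's worth, once JtR is emitting per-crack status lines, turning them into a cracked-vs-guesses curve is a small post-processing step. A rough sketch -- the input format here is made up ((candidates_tried, cracked_total) pairs), so real --crack-status output would need its own parsing first:

```python
import bisect

def cracked_at(checkpoints, status_log):
    # status_log: (candidates_tried, cracked_total) pairs parsed from
    # periodic status lines (hypothetical format). Returns the number
    # of passwords cracked by each checkpoint's candidate count.
    samples = sorted(status_log)
    xs = [c for c, _ in samples]
    out = []
    for cp in checkpoints:
        i = bisect.bisect_right(xs, cp) - 1
        out.append(samples[i][1] if i >= 0 else 0)
    return out

log = [(1_000, 5), (10_000, 40), (1_000_000, 90)]
print(cracked_at([500, 1_000, 50_000, 10**9], log))  # [0, 5, 40, 90]
```

That makes it cheap to compare attacks at fixed guess budgets (5 billion, 1 trillion, etc.) from a single run's log.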

Cheers,
Matt/Lakiw

On Mon, Nov 18, 2024 at 12:05 AM Solar Designer <solar@...nwall.com> wrote:

> On Sun, Nov 17, 2024 at 06:20:51PM -0500, Matt Weir wrote:
> > I just published a blog post comparing Tokenizer against other attack
> > types. Link:
> >
> https://reusablesec.blogspot.com/2024/11/analyzing-jtrs-tokenizer-attack-round-1.html
> >
> > As a disclaimer, due to falling down a number of "non-tokenizer related"
> > rabbit holes as well as only being able to work on this work in short
> > bursts, I started writing this blog entry a couple of weeks ago, and I
> > didn't want to pivot and lose even more time. So all the tests utilize
> the
> > original version of tokenizer and don't include the improvements
> > discussed since then. Still I hope this research is helpful!
>
> Thank you for running these tests!
>
> I only skimmed.  One thing that surprised me is that your top 25 for
> training on RockYou Full (including dupes, right?) is different from
> what I had posted in here at all (even if similar).  Why would that be,
> if you say you use the original version of the script and it seems to
> still have been latest when I posted my top 25 on October 30.
>
> > The short summary of the results are:
> > - Tokenizer performs better than Incremental mode in the first 5 billion
> > guesses
> > - OMEN performs better than Tokenizer in the first 5 billion guesses. But
> > OMEN has a number of implementation challenges where an Incremental based
> > attack can still be more practical.
>
> That's interesting.  I thought you'd also try the tokenizer along with
> OMEN - is that somehow difficult to do?
>
> > - When trying to simulate multi-stage cracking attacks I really need a
> > better way to record much longer cracking sessions (aka trillions of
> > guesses). While Tokenizer appears to be a respectable attack to run after
> > running a large rules/wordlist attack, the fact that I only ran it for 5
> > billion guesses didn't make the results
> > trustworthy/statistically-significant. Basically the result is the test
> > needs to be redesigned vs. learning much about the actual attacks ;p
>
> As I understand, the reason you say 5 billion is not enough in this test
> is that it cracked only a tiny fraction of total passwords after your
> prior wordlist+rules run had cracked a much larger fraction and probably
> produced many more candidates.  And by "recording" you mean keeping
> track of the number of passwords cracked at different candidate counts.
>
> If so, maybe these options would help:
>
> --max-candidates=[-]N      Gracefully exit after this many candidates tried
> --[no-]crack-status        Emit a status line whenever a password is cracked
> --progress-every=N         Emit a status line every N seconds
>
> along with --verbosity=1 and cracking of hashes with JtR itself (not by
> your external tool).  Also see the AutoStatus external mode, although to
> use it in this case you'd need to roll its logic into Untokenize.
>
> Alexander
>
