Message-ID: <20240818144057.GA27481@openwall.com>
Date: Sun, 18 Aug 2024 16:40:57 +0200
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: Re: problem running out of memory - john 20240608.1

On Sun, Aug 18, 2024 at 07:12:08AM -0700, jeff wrote:
> I was wondering if using john in incremental mode, there is any way to 
> remember its state when I restart john.
> I know about the --restore option, but after cracking a significant 
> amount of
> hashes, I want to shrink the list of hashes via
> 
> ./john --format=nt --show=left pp8 > pp8-left
> 
> and then I want to increase the number of threads.
> If I understand correctly, --restore won't work if I shrink the list of 
> unfound hashes

Actually, it will work correctly.  You can just do that.
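One way to do that, assuming your session was started on the file pp8 (as in your command above), is to shrink the file in place under its original name, so that --restore finds the reduced list when it re-reads the file:

```shell
# Write only the still-uncracked NT hashes to a temporary file...
./john --format=nt --show=left pp8 > pp8-left
# ...then replace the original under the same name the session recorded:
mv pp8-left pp8
# Resume; the restored session re-reads pp8 and now sees fewer hashes:
./john --restore
```

The filenames are just placeholders; the point is that --restore reuses the filename recorded at session start.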

> or increase the thread count.

For OpenMP threads, you could do that too.  However, for fast hashes
such as NT we have no efficient OpenMP support, so instead of threads we
use forked processes.  Those differ in that each process gets its own
candidate password stream, and there's currently no way to change the
number of such candidate password streams after you've started a
session.  So unfortunately there's currently no good solution for you.
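To illustrate (the fork count of 8 here is arbitrary): the number of processes is fixed when the session is created, and resuming keeps it unchanged:

```shell
# Start an incremental session split across 8 forked processes;
# each process generates its own slice of the candidate stream:
./john --format=nt --incremental --fork=8 pp8
# Resuming restores all 8 per-process states; the count cannot change:
./john --restore
```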

> So ideally, I would like to start a new john session, but say something like
> --incremental_start_where_I_left_off

We don't have a command-line option like the above, but more
importantly, with --fork there would be several different starting
places, one per process.  When you don't change the number of
processes, this information is correctly read back by --restore, so you
don't need the kind of option you suggested.  If you do change the
number of processes, then even if such an option existed you wouldn't
have the data to specify in it for the new processes - it would need to
be a different number of starting places.

I suggest you also run some more extensive wordlist + rules attacks,
such as:

--wordlist --rules=all --dupe=0
--wordlist --rules=by-score --rules-stack=best-by-score

The second of these applies rules twice (the main rules, then the
stacked rules on top of their output).  I suggest you keep the dupe
suppressor enabled for that one, because it is applied before the
stacked rules, where the candidate stream runs at a much lower p/s
rate, so its overhead is affordable there.
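Spelled out against your file, these might look like the following (the session names and the --fork count are illustrative, not a recommendation):

```shell
# All rules, with the dupe suppressor disabled (--dupe=0), as suggested:
./john --format=nt --wordlist --rules=all --dupe=0 \
    --fork=2 --session=rules-all pp8
# Stacked rules; the dupe suppressor stays at its default (enabled),
# since it runs before the stacked rules where the p/s rate is low:
./john --format=nt --wordlist --rules=by-score \
    --rules-stack=best-by-score --fork=2 --session=rules-stacked pp8
```

With no argument, --wordlist uses john's default wordlist; substitute your own as needed.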

I guess with the right attacks, you should be able to reduce the number
of remaining hashes in one day so dramatically that you can run way more
than 2 processes by day 2.

Alexander
