Message-ID: <CAFs9wnW4z_VgjJJE6i0xK4jnC7ZEfpnwnbfBDTWRUDe_S8OJ9A@mail.gmail.com>
Date: Sun, 14 Mar 2021 16:55:55 +0100
From: Michał Majchrowicz <sectroyer@...il.com>
To: john-users@...ts.openwall.com
Subject: Re: password patterns (was: Multi-gpu setup)

> > Incremental mode currently has a scalability issue with large node
> > counts - the portions of work might become too non-uniform, and
> > eventually there might not be enough of them at all.  It should work
> > OK up to about 100 nodes, but will gradually start having issues at
> > multiple hundreds and into the thousands.
>
> And that's the normal full incremental mode. Using e.g. --inc:digits will
> hit poor territory with much lower node counts.
Ok, no problem, I will keep the node value below 100. I have thought
about how to perform this test, and I think the best approach would be
to use a set of 6 hashes. Of those, 4 we already know can benefit from
password patterns, as that's how they were cracked. For the other 2 we
don't know what they are, but we already know what they are not, and
they should follow some pattern anyway (unless they are completely
random, which I doubt). I have noted what tests I have already
performed, so I know more or less how long it took to crack them (I can
do more precise calculations later by simply replaying the same
sessions), so I think running incremental mode for a week should be
enough to test those assertions. By the way, does incremental mode also
support sessions? That is, could I resume after a power outage or
something similar, keeping the node and other settings? Probably a
pointless question, but I wanted to make sure there isn't any bug
related to sessions and incremental mode :) In the future I will
generate a set of passwords following different password patterns and
check how incremental mode handles those in comparison to the "manual
way" :) For that I will book time on more modern hardware to speed the
test up, but for now I think this test will give me a rough idea of how
the two compare. I think I didn't miss anything important in planning
this test. However, if you have any suggestions on how to do this run,
please let me know.
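
Concretely, I was planning to invoke it roughly like this (hashes.txt
is just a placeholder name; I'm assuming the stock --session/--restore
options apply to incremental mode too, which is part of my question):

  # start a named, resumable incremental run on one share of the keyspace
  ./john --incremental --node=1-30/100 --session=inc-test hashes.txt

  # after a power outage or other interruption, resume with the same
  # node and other settings
  ./john --restore=inc-test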

> But what I primarily wanted to air is that I wonder if our current code
> will behave worse with a very large number of nodes even if the *ranges*
> are not very small. For example: does --node=1-30/100 or --node=1-300/1000
> have more such issues than --node=1-3/10? I intend to have a look at
> that. If it does, we might want to tweak it: splitting a job by e.g.
> percentage can be convenient.
Yes it definitely is :D
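
To illustrate the convenience, splitting one job across, say, three
machines by percentage could look like this (hashes.txt is again just a
placeholder; the shares only need to cover the whole keyspace between
them):

  ./john --incremental --node=1-30/100 hashes.txt    # first 30%
  ./john --incremental --node=31-60/100 hashes.txt   # next 30%
  ./john --incremental --node=61-100/100 hashes.txt  # remaining 40%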
