Date: Tue, 9 Aug 2011 20:12:26 +0400
From: Aleksey Cherepanov <>
Subject: 'search patterns' workflow, collaboration, helper scripts

Frank, all,

I would like to have some scripts to support the contest workflow.
This workflow is probably not specific to the contest. I wrote a
rough prototype. Explanations are below.

During the contest I settled on the following workflow around patterns:
crack some passwords,
analyze them for patterns,
try the first pattern,
filter out passwords matching that pattern to simplify the search for the next one,
do the same with the next pattern,
and when all patterns are found, crack using general attacks again.

Searching for patterns was already considered important at the previous
contest. I think such a workflow could be automated in many respects.

The important thing here is to filter out known patterns. It simplifies
the search for new patterns and improves the derived charsets, because
the charsets will not include the known patterns.
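
The filtering step could be sketched like this: drop every cracked
password that matches a known pattern before searching for new ones.
This is only a minimal sketch; the regexes below are hypothetical
examples, not patterns from the contest, and the real scripts work via
John's rules and filters rather than Python regexes.

```python
# Filter known patterns out of a cracked-password list, so later
# pattern searches (and generated charsets) do not include them.
# The patterns below are made-up examples for illustration.
import re

known_patterns = [
    re.compile(r'^[A-Z][a-z]+\d{2}$'),   # e.g. "Summer11"
    re.compile(r'^\d{6,8}$'),            # all-digit passwords
]

def filter_known(passwords):
    """Return only the passwords that match no known pattern."""
    return [p for p in passwords
            if not any(rx.match(p) for rx in known_patterns)]

cracked = ['Summer11', '12345678', 'miX&7q', 'Winter09']
print(filter_known(cracked))   # only the unmatched passwords remain
```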

However, there was a big gap between the moment when I found a pattern
(and even wrote a filter for it) and the moment when I wrote a rule.
Writing rules was hard for me. It involves several steps, and I would
like to simplify them all: learning rules (help to try small examples),
quality assurance (help to collect statistics about rule
effectiveness), and using these rules in attacks (help to not forget to
use certain rules against certain types of passwords, help to stop
ineffective attacks).

It would also be good to have support for collaboration, with the
ability to maintain a repository of rules and to manage information
about attacks.

So I wrote a prototype of part of a tool to support such a workflow.
It is rough and not finished yet, but it could be useful to someone as
is. However, I would appreciate feedback to be sure that I chose the
right direction and to gather more ideas. I would like to hear from
team members who had problems with rules, and especially from Frank
Dittrich, who in my opinion has a lot of ideas about rules, but from a
different point of view. Please!

I will explain what is already coded and how I see the remaining parts.

The scripts are written in bash, Python, and Perl. They also need w3m.
The scripts are intended to be run in a separate directory; they create
many files there during their work. They also expect John's 'run'
directory to be in this directory. An example configuration file is
enclosed. It contains the configuration for the contest environment,
with the uncracked passwords in the 'uncracked' directory.

I tried to solve the formats problem: you describe the format of the
files once, and then if you run John through the scripts, they manage
the formats themselves. For this there is a 'typed_files.list' file.
Each line contains a format and the path to a file with hashes. It has
some limitations: there should be only one line per format. Currently
it can work with multiple lines for the same file as long as they use
different formats, but I will probably break that later.
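
A parser for such a file could be as simple as the sketch below. The
separator (whitespace) and the comment convention are my assumptions
about the format, not a specification of what the enclosed scripts do.

```python
# Minimal sketch of a 'typed_files.list' parser: each non-empty line is
# a format name, whitespace, and the path to a file with hashes.

def read_typed_files(text):
    """Parse typed_files.list content into (format, path) pairs."""
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue                      # skip blanks and comments
        fmt, path = line.split(None, 1)   # split on first whitespace run
        pairs.append((fmt, path))
    return pairs

sample = """\
des          uncracked/des.txt
md5_gen(22)  uncracked/md5.txt
"""
print(read_typed_files(sample))
```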

So assume that we are in the directory with the unpacked scripts, the
'run' directory is here, and the uncracked hashes are in the files
listed in 'typed_files.list'.

The first script to be used is the one that initializes the directory.

There is a script to quickly try rules:
./ word rule1 rule2 ...
(All scripts have a short description in the first lines of their code.)
It creates a wordlist containing the word and a config file with the
listed rules, then calls John to show the generated words (using
-stdout). So it does not try the rules against hashes; it only provides
an easy way to see how a rule works.
$ ./ mi : c
words: 2  time: 0:00:00:00 DONE (Tue Aug  9 18:33:59 2011)  w/s: 100  current: Mi

There is a script to create or overwrite a pattern definition. A
pattern definition includes the name of a wordlist file and a list of
rules, just like the previous script takes. This script saves the
pattern definition, tries the pattern against the already cracked
passwords, and shows statistics, including the number of matches with
the current set of cracked passwords.
I would like to add statistics such as the size of the overlap with
other patterns and the size of the complement with respect to other
patterns.
$ echo mi > mi.lst
$ ./ mi_pattern mi.lst : c
words: 2  time: 0:00:00:00 DONE (Tue Aug  9 18:47:19 2011)  w/s: 200  current: Mi
Matches: 0
Percent of matches: 0
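
The statistics I have in mind could be computed from the sets of
cracked passwords matched by each pattern, roughly as in this sketch
(function and variable names are hypothetical, not from the scripts):

```python
# Match count, match percentage, and overlap/complement between two
# patterns, each represented as a set of matched cracked passwords.

def pattern_stats(matched, other_matched, cracked_total):
    """Statistics for one pattern, relative to another pattern."""
    return {
        'matches': len(matched),
        'percent': 100.0 * len(matched) / cracked_total if cracked_total else 0.0,
        'overlap': len(matched & other_matched),       # cracked by both
        'complement': len(other_matched - matched),    # only by the other
    }

mi = {'Mi', 'mi'}
digits = {'mi', '123456', '111111'}
print(pattern_stats(mi, digits, cracked_total=10))
```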

To start an attack against a defined pattern, use ./ It will
start one John process per hash type (per line in typed_files.list).
Each attack has its own number, which serves as the attack's name, so
you will be able to refer to the attack by its number later.
$ ./ mi
Your attack was numbered as 1

For custom attacks there is a launcher, ./ This script
numbers the attack and dispatches one John process per hash type.
$ ./ -ru:single -w:mi.lst
Your attack was numbered as 2

To stop an attack there is ./ It kills all John processes
that work within a given attack. You can provide the number of the
attack you want to kill, or omit it, in which case the script applies
to the last attack you started.
$ ./ 1

However, there is no ability to resume an attack (yet).

To get statistics about a certain attack, use ./ It shows a
table for the attack with the following columns: type (hash type),
status, c/s, guesses, time, time since last crack, and tries since last
crack (time * c/s). The script uses 'john -status' and also reads the
logs, so some columns contain two values: the plain value is obtained
from John, and the value in parentheses is obtained from the log. This
is needed because -status does not work for finished sessions; in that
case the primary value would be 0 while the value in parentheses is the
actual one.
$ ./ 1
  Type       Status    ...
md5_gen(22)  finished  ...
des          finished  ...
If you do not like text tables, you can view the output with your
preferred browser using the ./astatus.html file.
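
Choosing between the two values in such a cell could look like the
sketch below: when the primary value (from 'john -status') is 0 and a
parenthesized log value exists, the log value is the meaningful one.
The cell format assumed here ("0 (1234)") is my reading of the
description above, not the script's exact output.

```python
# Pick the usable number from an cell like '0 (1234)' or '567'.
import re

def effective_value(cell):
    """Return the log value when -status reports 0, else the primary."""
    m = re.match(r'(\d+)(?:\s*\((\d+)\))?\s*$', cell.strip())
    primary, logged = m.group(1), m.group(2)
    if int(primary) == 0 and logged is not None:
        return int(logged)   # finished session: trust the log
    return int(primary)

print(effective_value('0 (1234)'), effective_value('567'))
```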

There is also ./ This script is intended to extract
results from pot. However I did not implement it fully. 'cracked' file
is a result of script's work. I would like to add ability to split cracked
passwords by hash type of source. Also I expect that script to prepare
filtered files, files that do not contain passwords that could be
caught with any rule we defined.
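
The splitting by hash type could work roughly as follows: take
pot-style 'hash:password' lines and group the passwords by the type of
the hash, using a hash-to-type mapping built from the per-type files in
typed_files.list. This is only a sketch under those assumptions; all
names and sample values are hypothetical.

```python
# Group cracked passwords from pot-style lines by the hash type of
# their source, given a mapping from hash string to type.

def split_by_type(pot_lines, hash_to_type):
    """Return {type: [passwords...]} for the hashes we can attribute."""
    by_type = {}
    for line in pot_lines:
        hash_, _, password = line.partition(':')
        t = hash_to_type.get(hash_)
        if t is not None:                 # skip hashes of unknown origin
            by_type.setdefault(t, []).append(password)
    return by_type

hash_to_type = {'aaaa': 'des', 'bbbb': 'md5_gen(22)'}
pot = ['aaaa:Summer11', 'bbbb:123456']
print(split_by_type(pot, hash_to_type))
```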

Other scripts are not intended for direct use.

And now some words about collaboration. I think there should be the
ability to publicize your attack (automatically in ./, to
report the status of that attack (something run from cron), and to
suspend and resume attacks on different machines (someone suspends an
attack and his session files are sent to a server; when someone wants
to resume that attack, these files are downloaded from the server and
the session resumes; all of this with commands like the ones above).
There should also be the ability to start an identical attack (if
someone thinks he has better hardware while the first attacker does not
want to suspend his attack, or the session files on the server are
corrupted). So if someone wants to publicize a great rule, he starts an
attack with this rule, looks at the results for fast hashes, and
suspends the attack so that others can see the results, analyze them,
and decide how effective the attack is.

As an improvement I could imagine cluster management. I do not think
of it as one big cluster; rather, each user has his own cluster and
manages it himself, but there is support for dispatching attacks to
remote computers with load balancing, status collection, and so on.
However, this seems ineffective when brute force is needed (someone
could prepare a wordlist, split it into small packs, and register them
as different attacks, but this could be done more comfortably with
dedicated tools).

There was also an attempt to write a script that finds similar
passwords, to help with pattern search. However, that seems to be a
separate topic, too big for this mail.

What do you think? Suggestions? Criticisms?

Aleksey Cherepanov

Download attachment "john_helper_scripts.tar.xz" of type "application/octet-stream" (4328 bytes)
