Message-ID: <CAJ9ii1EgFKsZE=qUv51TPZT7+2BoY+xJFcB+kxPfQe9qHFdoeQ@mail.gmail.com>
Date: Tue, 26 Sep 2017 09:23:29 -0400
From: Matt Weir <cweir@...edu>
To: "john-users@...ts.openwall.com" <john-users@...ts.openwall.com>
Subject: Re: 'PassGAN: A Deep Learning Approach'

Thanks for sending that along, Jeroen!

I've gone through that paper a number of times now. As background for
the people on this mailing list who don't want to read it, the paper
describes using Generative Adversarial Networks (GANs) to train a
neural network to create password guesses. In a way, it is very
similar to the earlier work done by CMU on using neural networks to
crack passwords. CMU's code is here:

https://github.com/cupslab/neural_network_cracking

And if you actually want to get that code to run I highly recommend
checking out Maximilian's tutorial here:

https://www.password-guessing.org/blog/post/cupslab-neural-network-cracking-manual/

Both the PassGAN and the CMU teams generate guesses much like JtR's
--Markov and --Incremental modes do, by using the conditional
probabilities of letters appearing together. For example, if the first
letter is a 'q', then the next letter will likely be a 'u'. A more
sophisticated example would be: if the first three letters are '123',
then the next letter will likely be a '4'.
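To make the conditional-probability idea concrete, here is a toy
sketch (my own illustration, not JtR's or CMU's actual code): count
character transitions in a small training list, then look up how
likely each next character is given the preceding context. The
training words and helper names are made up for the example.

```python
from collections import defaultdict, Counter

# Tiny stand-in training list, just for illustration.
TRAIN = ["password", "qwerty", "123456", "1234567", "queen", "quartz"]

def train_markov(words, order=1):
    """Count next-character frequencies for each context of `order` chars."""
    counts = defaultdict(Counter)
    for w in words:
        padded = "^" * order + w  # '^' pads the start of each word
        for i in range(order, len(padded)):
            counts[padded[i - order:i]][padded[i]] += 1
    return counts

def next_char_probs(counts, context):
    """Conditional probability of each next character given `context`."""
    c = counts[context]
    total = sum(c.values())
    return {ch: n / total for ch, n in c.items()}

# Order-1 model: after 'q', 'u' is the most likely next letter here.
counts1 = train_markov(TRAIN, order=1)
print(next_char_probs(counts1, "q"))

# Order-3 model: after '123', the next character is '4' in this data.
counts3 = train_markov(TRAIN, order=3)
print(next_char_probs(counts3, "123"))
```

A real Markov-mode attack would then walk these conditional
probabilities to enumerate guesses in roughly most-probable-first
order; this sketch only shows where the probabilities come from.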

Where PassGAN differs from the CMU approach is mostly in the
training stage, as far as I can tell. While I can't directly compare
the two attacks, since I'm not aware of the PassGAN code being publicly
released, at least based on reading the papers the CMU approach is
much, much more effective.

Actually, the PassGAN paper is a bit of a mess when it comes to
comparing against other password cracking approaches. For example, it
uses the SpiderLabs ruleset for JtR rather than the default one or
--single. The actual results of PassGAN were very poor, and while the
team said that combining PassGAN with Hashcat's best64 ruleset +
wordlist cracked more passwords than just running best64, they didn't
bother to contrast that with other attack modes + best64. Long story
short, the research is interesting, but if you are looking to use
neural networks for generating password guesses, the current go-to is
still the CMU codebase.

Matt

On Tue, Sep 26, 2017 at 6:33 AM, Jeroen <spam@...lab.nl> wrote:
> FYI: [1709.00440] PassGAN: A Deep Learning Approach for Password Guessing
> @<https://arxiv.org/abs/1709.00440>.
>
> Cheers,
>
> Jeroen
>
>
