Message-ID: <20130429122630.GA20935@openwall.com>
Date: Mon, 29 Apr 2013 16:26:30 +0400
From: Solar Designer <solar@...nwall.com>
To: crypt-dev@...ts.openwall.com
Subject: Re: Representing the crack resistance of a password.

On Mon, Apr 29, 2013 at 01:05:30AM -0500, Jeffrey Goldberg wrote:
> In a discussion on Twitter, Matt Weir has persuaded me that talking about the Shannon Entropy, H, of a password generation system doesn't get at what we want. H doesn't capture the relevant facts about the distribution of passwords within the set of possible passwords.

That's correct.
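
To illustrate with a contrived example (toy numbers of my own, not
anything from your message): two distributions can have the same
Shannon entropy H yet give an optimal-order guesser very different
odds.  Roughly, in Python:

from math import log2

def shannon_entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

def success_after_n_guesses(probs, n):
    # Optimal attacker tries passwords in decreasing-probability order.
    return sum(sorted(probs, reverse=True)[:n])

uniform = [0.25] * 4                        # 4 equally likely passwords
skewed = [0.5, 0.125, 0.125, 0.125, 0.125]  # one very popular password

for name, dist in (("uniform", uniform), ("skewed", skewed)):
    print(name, shannon_entropy(dist), success_after_n_guesses(dist, 1))

Both come out at H = 2.0 bits, but a single guess succeeds with
probability 0.25 against the first and 0.5 against the second, so H
alone does not determine crack resistance.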

> The far more meaningful notion would be "what is the probability of an attacker cracking a password under that policy after N guesses". This, as I now understand, is not in general computable from H. (It is computable from H when the distribution is uniform, but the whole point is that humans do not select passwords uniformly from some set.)
> 
> I asked how we should characterize, or even name, this notion. I tossed out
> 
>   C(X, k) = 2 log_2 G(0.5, X, k)
> 
> where k is the key or password, X is the distribution, and G(p, X, k) is the number of Guesses needed to hit k in X with probability p.

Regarding your G() above, see:

http://www.lysator.liu.se/~jc/mthesis/4_Entropy.html#SECTION00430000000000000000

for a formal definition of "Guessing entropy" and some discussion.
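
For what it's worth, here is a rough sketch of how I read that
definition (Massey's expected number of guesses, with candidates tried
in decreasing-probability order) next to your G(p, X); the sample
distribution below is made up:

def guessing_entropy(probs):
    ranked = sorted(probs, reverse=True)
    return sum(i * p for i, p in enumerate(ranked, start=1))

def guesses_for_probability(probs, target):
    # Smallest number of optimal-order guesses whose cumulative
    # success probability reaches the target -- your G(p, X).
    total = 0.0
    for i, p in enumerate(sorted(probs, reverse=True), start=1):
        total += p
        if total >= target:
            return i
    return None  # target not reachable with this distribution

skewed = [0.5, 0.125, 0.125, 0.125, 0.125]
print(guessing_entropy(skewed))              # 2.25 expected guesses
print(guesses_for_probability(skewed, 0.5))  # 1 guess reaches p = 0.5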

Alexander
