Message-ID: <20121004011344.GA25522@openwall.com>
Date: Thu, 4 Oct 2012 05:13:44 +0400
From: Solar Designer <solar@...nwall.com>
To: Colin Percival <cperciva@...snap.com>
Cc: crypt-dev@...ts.openwall.com
Subject: Re: using scrypt for user authentication

Colin,

First of all, thank you for these cost estimates!

On Mon, Oct 01, 2012 at 01:49:15PM -0700, Colin Percival wrote:
> Here's my back-of-the-envelope calculation:  Assume 100 ms / 32 MB, a 25%
> load factor, and we're using EC2 c1.medium instances at $0.165/hour (which
> is far more than you'd pay setting up a cluster).  That's 2 cpus * 36000
> hashes per cpu-hour * 25% load factor = 18000 hashes per hour, or slightly
> under 10 microdollars per hash.  I can't think of any website I enter a
> password into more than five times a day -- most are far less than that,
> since like most people, I stay cookied on twitter/facebook/etc. -- so I
> would cost at most $0.02 per year.
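
Just to double-check the quoted arithmetic, here is the same estimate
spelled out (a minimal sketch in C; the 100 ms/hash, 2 CPUs, 25% load,
EC2 price, and 5 logins/day figures are all taken from your text above):

    #include <stdio.h>

    int main(void)
    {
        double hashes_per_cpu_hour = 3600.0 / 0.1;    /* 100 ms/hash */
        double hashes_per_hour = 2 * hashes_per_cpu_hour * 0.25;
        double dollars_per_hash = 0.165 / hashes_per_hour;
        double dollars_per_user_year = dollars_per_hash * 5 * 365;

        /* ~9.2e-06 $/hash, ~0.017 $/user/year */
        printf("%g $/hash, %g $/user/year\n",
               dollars_per_hash, dollars_per_user_year);
        return 0;
    }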

$0.02 per user per year is actually quite a lot for some orgs.  Note
that this is not 100% of what a user costs, but it's _extra_ cost for
better password security (which might not provide any ROI, unless the
org brags about it for good publicity).

Let's consider Facebook.  Wikipedia says they have 955 million active
users as of June 2012, and their 2011 revenue was $3.71 billion.  That's
about $4 per user, so $0.02 would be 0.5% - I'd say it's quite a lot.
Now let's consider their profit rather than revenue.  The data I was
able to find quickly (by looking up the FB ticker) is a $46.77B market
capitalization and a 121.87 P/E, which gives an earnings estimate of
about $384M.  (I arrive at the same number going from an EPS of 0.18 and
a share count of 2.14B as well.)  So that's only about $0.40 per user
per year.  $0.02 would
thus be 5% - truly a lot!  Granted, Facebook is going to try to improve
their monetization a lot, and investors expect them to (which is why P/E
is so high) - yet I guess the current low earnings affect what the
company would be willing to spend.
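
For completeness, here's the arithmetic behind those two percentages (a
sketch; the user count, revenue, market cap, and P/E are the figures
quoted above):

    #include <stdio.h>

    int main(void)
    {
        double users = 955e6;
        double revenue = 3.71e9;                 /* 2011 revenue */
        double earnings = 46.77e9 / 121.87;      /* ~= $384M */

        /* ~$3.9/user/year revenue -> $0.02 is ~0.5% */
        printf("revenue:  $%.2f/user, $0.02 = %.1f%%\n",
               revenue / users, 100 * 0.02 * users / revenue);
        /* ~$0.40/user/year earnings -> $0.02 is ~5% */
        printf("earnings: $%.2f/user, $0.02 = %.1f%%\n",
               earnings / users, 100 * 0.02 * users / earnings);
        return 0;
    }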

I think the 25% load factor (average / worst-case spike) is optimistic.
Note that we need to consider even very short-duration spikes.

The expectation that scrypt will scale perfectly to 2 CPU cores is also
optimistic.  Well, it might scale to 2, but likely not much/any further,
unless we specifically revise it for scalability and/or have it use
caches rather than DRAM on a shared bus.  In my testing on a 12-core
(24 logical) machine, it only scaled to 2x or 3x the throughput of a
single instance at best (and in some of the tests it did not scale at
all - that is, the same throughput was seen for 1 and 24 instances),
unless I reduced its working set size to fit in L3 cache or smaller.
Scalability can be improved by increasing the Salsa20 round count, but
then the amount of memory used within 100 ms becomes even lower.  These
trade-offs are rather fundamental, I think.
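
To put rough numbers on that trade-off (a sketch with assumed figures:
~20 GB/s of usable shared DRAM bandwidth, about two full passes over the
arena per hash, and - crudely - compute throughput halving when the
round count doubles):

    #include <stdio.h>

    int main(void)
    {
        double t = 0.1;              /* 100 ms budget */
        double bw = 20e9;            /* assumed shared DRAM bandwidth */
        double passes = 2;           /* write V once, read it once */
        double fill_r8 = 32e6 / 0.1; /* ~32 MB per 100 ms, Salsa20/8 */

        /* if bandwidth were the only limit: ~1000 MB in 100 ms */
        printf("bandwidth-limited: %g MB\n", t * bw / passes / 1e6);
        /* doubling the rounds roughly halves the compute-limited size */
        printf("compute-limited at 16 rounds: ~%g MB\n",
               t * fill_r8 / 2 / 1e6);
        return 0;
    }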

> For typical sites, where I stay cookied for weeks at a time, it would be
> under $0.001 per year; or even less if they set up their own password hash
> cluster instead of renting EC2 instances by the hour.  I'm having trouble
> imagining a company which can't afford this.

OK.  They may be able to afford this.  I am not saying that they can't.
Rather, I guess they'd choose not to spend a non-negligible amount of
money (both in absolute terms and as a percentage of earnings per user)
on something with little or no expected ROI.

As discussed off-list, we need a hashing method reasonably usable (and
consistently stronger than bcrypt) for the 1 ms to 100 ms range, which I
think is the reasonable range to support for user authentication.
Supporting 1 ms will ease adoption.

> > Anyhow, even at 100 ms the 32 MB upper limit on what we can use with
> > scrypt bothers me.  I want it to be a lot higher.  If we consider the
> > memory bandwidth as the hard limiting factor on how much memory we can
> > use in 100 ms, it'd be a lot more than 32 MB.
> 
> Agreed.  It's possible that I erred too far on the side of caution in using
> salsa20/8 in the construction; at Passwords'12 (Oslo, early December) I'm
> hoping I'll get a chance to corner Joan Daemen and get some input from him
> about this -- I wouldn't be surprised if a reduced-round version of Keccak
> turned out to be optimal for this.

Cool!

A concern: let's not forget that we should also use the CPU efficiently.
This probably means having and using more parallelism at the instruction
and SIMD levels.

I think that a single instance of Salsa20 (with whatever round count)
doesn't use a modern CPU fully.  It does use 128-bit vectors, but I
think there's not enough parallelism between instructions to fully use
the available execution units and fully hide instruction latencies.
Also, as you're aware, we'll need to support 256-bit vectors soon.
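
To illustrate the dependency-chain point (a minimal sketch, not scrypt's
actual BlockMix: each Salsa20 quarter-round is a strict chain of four
add-rotate-xor steps, so a single instance leaves execution units idle,
whereas two independent instances interleave naturally):

    #include <stdint.h>

    #define R(v, n) (((v) << (n)) | ((v) >> (32 - (n))))

    /* one quarter-round: every step depends on the previous one */
    static void qround(uint32_t x[4])
    {
        x[1] ^= R(x[0] + x[3], 7);
        x[2] ^= R(x[1] + x[0], 9);
        x[3] ^= R(x[2] + x[1], 13);
        x[0] ^= R(x[3] + x[2], 18);
    }

    /* two independent instances interleaved: the CPU (or one wider
       SIMD register holding both) can overlap the latencies */
    static void qround_x2(uint32_t x[4], uint32_t y[4])
    {
        x[1] ^= R(x[0] + x[3], 7);   y[1] ^= R(y[0] + y[3], 7);
        x[2] ^= R(x[1] + x[0], 9);   y[2] ^= R(y[1] + y[0], 9);
        x[3] ^= R(x[2] + x[1], 13);  y[3] ^= R(y[2] + y[1], 13);
        x[0] ^= R(x[3] + x[2], 18);  y[0] ^= R(y[3] + y[2], 18);
    }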

> > Do you have thoughts on a possible revision of scrypt to allow it to use
> > more memory at these low durations?
> 
> I don't want to retroactively redefine scrypt, but that doesn't mean that
> we can't discuss a possible replacement for it. :-)

Sounds good.

I was thinking that maybe we could extend rather than redefine or
replace scrypt.  That is, add more parameters to scrypt.  With some
values for the additional parameters, it'd be the current scrypt.
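
As a strawman of what "extend rather than replace" could look like at
the API level (the names and extra knobs below are purely hypothetical,
just to illustrate the backward-compatibility idea):

    #include <stddef.h>
    #include <stdint.h>

    /*
     * Hypothetical extended parameter set.  With rounds == 8 and
     * flags == 0, this would have to compute exactly the current
     * scrypt, so existing hashes and test vectors stay valid.
     */
    struct scrypt_ext_params {
        uint64_t N;       /* existing: CPU/memory cost */
        uint32_t r, p;    /* existing: block size, parallelism */
        uint32_t rounds;  /* new: Salsa20 round count, 8 = classic */
        uint32_t flags;   /* new: variant selection, 0 = classic */
    };

    /* hypothetical entry point; classic scrypt is the special case */
    int scrypt_ext(const uint8_t *passwd, size_t passwdlen,
                   const uint8_t *salt, size_t saltlen,
                   const struct scrypt_ext_params *params,
                   uint8_t *buf, size_t buflen);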

Thanks again,

Alexander
