Message-ID: <CALCETrVxFDQGJeQX5k39pM3TvqH4q10SduPY=Os_RiJGEg_0Hg@mail.gmail.com>
Date: Wed, 21 Dec 2016 17:54:26 -0800
From: Andy Lutomirski <luto@...capital.net>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: George Spelvin <linux@...encehorizons.net>, "Jason A. Donenfeld" <Jason@...c4.com>, 
	Andi Kleen <ak@...ux.intel.com>, David Miller <davem@...emloft.net>, 
	David Laight <David.Laight@...lab.com>, "Daniel J . Bernstein" <djb@...yp.to>, 
	Eric Biggers <ebiggers3@...il.com>, Eric Dumazet <eric.dumazet@...il.com>, 
	Hannes Frederic Sowa <hannes@...essinduktion.org>, 
	Jean-Philippe Aumasson <jeanphilippe.aumasson@...il.com>, 
	"kernel-hardening@...ts.openwall.com" <kernel-hardening@...ts.openwall.com>, 
	Linux Crypto Mailing List <linux-crypto@...r.kernel.org>, 
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>, Network Development <netdev@...r.kernel.org>, 
	Tom Herbert <tom@...bertland.com>, "Theodore Ts'o" <tytso@....edu>, 
	Vegard Nossum <vegard.nossum@...il.com>
Subject: Re: HalfSipHash Acceptable Usage

On Wed, Dec 21, 2016 at 9:25 AM, Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
> On Wed, Dec 21, 2016 at 7:55 AM, George Spelvin
> <linux@...encehorizons.net> wrote:
>>
>> How much does kernel_fpu_begin()/kernel_fpu_end() cost?
>
> It's now better than it used to be, but it's absolutely disastrous
> still. We're talking easily many hundreds of cycles. Under some loads,
> thousands.
>
> And I warn you already: it will _benchmark_ a hell of a lot better
> than it will work in reality. In benchmarks, you'll hit all the
> optimizations ("oh, I've already saved away all the FP registers, no
> need to do it again").
>
> In contrast, in reality, especially with things like "do it once or
> twice per incoming packet", you'll easily hit the absolute worst
> cases, where not only does it take a few hundred cycles to save the FP
> state, you'll then return to user space in between packets, which
> triggers the slow-path return code and reloads the FP state, which is
> another few hundred cycles plus.

Hah, you're thinking that the x86 code works the way that Rik and I
want it to work, and you just made my day. :)  What actually happens
is that the state is saved in kernel_fpu_begin() and restored in
kernel_fpu_end(), and it'll take a few hundred cycles best case.  If
you do it a bunch of times in a loop, you *might* trigger a CPU
optimization that notices that the state being saved is the same state
that was just restored, but you're still going to pay for the full
restore code path on each round trip no matter what.
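
For concreteness, the pattern we're talking about looks roughly like
the sketch below.  It's only meant to show where the save/restore cost
lands; hash_packet() and the SIMD body are made-up placeholders, not
anything in the tree:

#include <linux/types.h>
#include <asm/fpu/api.h>	/* kernel_fpu_begin() / kernel_fpu_end() */

/* Hypothetical per-packet hash using SSE/AVX -- illustration only. */
static u64 hash_packet(const u8 *data, size_t len)
{
	u64 hash;

	kernel_fpu_begin();	/* saves the task's FPU/SIMD state */

	/* ... vectorized hash rounds over data[0..len) would go here ... */
	hash = 0;		/* placeholder result */

	kernel_fpu_end();	/* restores that state -- the other half of the round trip */

	return hash;
}

That begin/end pair is the few hundred cycles of overhead in question,
paid per call regardless of how little work sits between them.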

The code is much clearer in 4.10 kernels now that I deleted the unused
"lazy" branches.

>
> Similarly, in benchmarks you'll hit the "modern CPU's power on the AVX
> unit and keep it powered up for a while afterwards", while in real
> life you would quite easily hit the "oh, AVX is powered down because
> we were idle, now it powers up at half speed which is another latency
> hit _and_ the AVX unit won't run full out anyway".

I *think* that was mostly fixed in Broadwell or thereabouts (in terms
of latency -- throughput and power consumption still suffer).
