Date: Mon, 20 Jun 2016 21:01:40 -0700
From: Linus Torvalds <>
To: Andy Lutomirski <>
Cc: "the arch/x86 maintainers" <>, 
	Linux Kernel Mailing List <>, 
	"" <>, Borislav Petkov <>, 
	Nadav Amit <>, Kees Cook <>, 
	Brian Gerst <>, 
	"" <>, Josh Poimboeuf <>, 
	Jann Horn <>, Heiko Carstens <>
Subject: Re: [PATCH v3 00/13] Virtually mapped stacks with guard pages (x86, core)

On Mon, Jun 20, 2016 at 4:43 PM, Andy Lutomirski <> wrote:
> On my laptop, this adds about 1.5µs of overhead to task creation,
> which seems to be mainly caused by vmalloc inefficiently allocating
> individual pages even when a higher-order page is available on the
> freelist.

I really think that problem needs to be fixed before this should be merged.

The easy fix may be to just have a very limited re-use of these stacks
in generic code, rather than try to do anything fancy with multi-page
allocations. Just a few of these allocations held in reserve (perhaps
make the allocations percpu to avoid new locks).

It won't help for a thundering herd problem where you start tons of
new threads, but those don't tend to be short-lived ones anyway. In
contrast, I think one common case is the "run shell scripts" that runs
tons and tons of short-lived processes, and having a small "stack of
stacks" would probably catch that case very nicely. Even a
single-entry cache might be ok, but I see no reason to not make it be
perhaps three or four stacks per CPU.

Make the "thread create/exit" sequence go really fast by avoiding the
allocation/deallocation, and hopefully catching a hot cache and TLB
line too.

Performance is not something that we add later. If the first version
of the patch series doesn't perform well, it should not be considered
for merging.

