Message-ID: <20160713075314.GA32700@gmail.com>
Date: Wed, 13 Jul 2016 09:53:31 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Andy Lutomirski <luto@...nel.org>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
	Borislav Petkov <bp@...en8.de>, Nadav Amit <nadav.amit@...il.com>,
	Kees Cook <keescook@...omium.org>, Brian Gerst <brgerst@...il.com>,
	"kernel-hardening@...ts.openwall.com" <kernel-hardening@...ts.openwall.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Josh Poimboeuf <jpoimboe@...hat.com>, Jann Horn <jann@...jh.net>,
	Heiko Carstens <heiko.carstens@...ibm.com>
Subject: Re: [PATCH v5 14/32] x86/mm/64: Enable vmapped stacks

* Andy Lutomirski <luto@...nel.org> wrote:

> This allows x86_64 kernels to enable vmapped stacks. There are a
> couple of interesting bits.
>
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -92,6 +92,7 @@ config X86
>  	select HAVE_ARCH_TRACEHOOK
>  	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
>  	select HAVE_EBPF_JIT			if X86_64
> +	select HAVE_ARCH_VMAP_STACK		if X86_64

So what is the performance impact? Because I think we should consider enabling
this feature by default on x86 - but the way it's selected here it will be
default-off.

On the plus side: the debuggability and reliability improvements are real, and
making it harder for exploits to use kernel stack overflows is a nice bonus as
well.

There are two performance effects:

 - vmalloc() now potentially moves into the thread create/destroy hot path.

 - we use 4K TLBs for kernel stacks instead of 2MB TLBs.

The TLB effect should be relatively modest on modern CPUs, given that the
kernel stack size is limited and 4K TLBs are plenty. The vmalloc() part should
be measured, I suspect.

Thanks,

	Ingo
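
One minimal way to measure the vmalloc() part is to time a tight
pthread_create()/pthread_join() loop before and after enabling vmapped
stacks: each iteration makes the kernel allocate and free one task stack,
so any extra vmalloc()/vfree() cost in the create/destroy path shows up
directly in the per-iteration time. The sketch below is illustrative only
and is not from the thread; the iteration count and the use of
CLOCK_MONOTONIC are arbitrary choices, not anything the patch prescribes.

	/*
	 * Rough userspace sketch: time a create+join loop. Each
	 * pthread_create() forces the kernel to allocate a new task
	 * stack, each pthread_join() lets it be freed, so this loop
	 * exercises the path that vmapped stacks would change.
	 * ITERATIONS is an arbitrary assumption.
	 */
	#include <pthread.h>
	#include <stdio.h>
	#include <time.h>

	#define ITERATIONS 100000L

	static void *noop(void *arg)
	{
		return NULL;	/* exit immediately; only create/join cost matters */
	}

	int main(void)
	{
		struct timespec start, end;
		pthread_t tid;
		long i;
		double ns;

		clock_gettime(CLOCK_MONOTONIC, &start);
		for (i = 0; i < ITERATIONS; i++) {
			if (pthread_create(&tid, NULL, noop, NULL))
				return 1;
			pthread_join(tid, NULL);
		}
		clock_gettime(CLOCK_MONOTONIC, &end);

		ns = (end.tv_sec - start.tv_sec) * 1e9
			+ (end.tv_nsec - start.tv_nsec);
		printf("%.0f ns per create+join\n", ns / ITERATIONS);
		return 0;
	}

Build with "gcc -O2 bench.c -lpthread" and compare the per-iteration numbers
on kernels with and without vmapped stacks enabled; the delta is roughly the
vmalloc()/vfree() overhead Ingo is asking about.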