Message-ID: <CAJcbSZHJ_Jy=dr4Pc3-o_Bz340cLRgu79Up5iWptwaiObwN3Hw@mail.gmail.com>
Date: Tue, 15 Aug 2017 07:58:47 -0700
From: Thomas Garnier <thgarnie@...gle.com>
To: Daniel Micay <danielmicay@...il.com>
Cc: Ingo Molnar <mingo@...nel.org>, Herbert Xu <herbert@...dor.apana.org.au>, "David S . Miller" <davem@...emloft.net>, Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, "H . Peter Anvin" <hpa@...or.com>, Peter Zijlstra <peterz@...radead.org>, Josh Poimboeuf <jpoimboe@...hat.com>, Arnd Bergmann <arnd@...db.de>, Matthias Kaehlcke <mka@...omium.org>, Boris Ostrovsky <boris.ostrovsky@...cle.com>, Juergen Gross <jgross@...e.com>, Paolo Bonzini <pbonzini@...hat.com>, Radim Krčmář <rkrcmar@...hat.com>, Joerg Roedel <joro@...tes.org>, Tom Lendacky <thomas.lendacky@....com>, Andy Lutomirski <luto@...nel.org>, Borislav Petkov <bp@...e.de>, Brian Gerst <brgerst@...il.com>, "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>, "Rafael J . Wysocki" <rjw@...ysocki.net>, Len Brown <len.brown@...el.com>, Pavel Machek <pavel@....cz>, Tejun Heo <tj@...nel.org>, Christoph Lameter <cl@...ux.com>, Paul Gortmaker <paul.gortmaker@...driver.com>, Chris Metcalf <cmetcalf@...lanox.com>, Andrew Morton <akpm@...ux-foundation.org>, "Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>, Nicolas Pitre <nicolas.pitre@...aro.org>, Christopher Li <sparse@...isli.org>, "Rafael J . Wysocki" <rafael.j.wysocki@...el.com>, Lukas Wunner <lukas@...ner.de>, Mika Westerberg <mika.westerberg@...ux.intel.com>, Dou Liyang <douly.fnst@...fujitsu.com>, Daniel Borkmann <daniel@...earbox.net>, Alexei Starovoitov <ast@...nel.org>, Masahiro Yamada <yamada.masahiro@...ionext.com>, Markus Trippelsdorf <markus@...ppelsdorf.de>, Steven Rostedt <rostedt@...dmis.org>, Kees Cook <keescook@...omium.org>, Rik van Riel <riel@...hat.com>, David Howells <dhowells@...hat.com>, Waiman Long <longman@...hat.com>, Kyle Huey <me@...ehuey.com>, Peter Foley <pefoley2@...oley.com>, Tim Chen <tim.c.chen@...ux.intel.com>, Catalin Marinas <catalin.marinas@....com>, Ard Biesheuvel <ard.biesheuvel@...aro.org>, Michal Hocko <mhocko@...e.com>, Matthew Wilcox <mawilcox@...rosoft.com>, "H . J . Lu" <hjl.tools@...il.com>, Paul Bolle <pebolle@...cali.nl>, Rob Landley <rob@...dley.net>, Baoquan He <bhe@...hat.com>, "the arch/x86 maintainers" <x86@...nel.org>, Linux Crypto Mailing List <linux-crypto@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>, xen-devel@...ts.xenproject.org, kvm list <kvm@...r.kernel.org>, Linux PM list <linux-pm@...r.kernel.org>, linux-arch <linux-arch@...r.kernel.org>, linux-sparse@...r.kernel.org, Kernel Hardening <kernel-hardening@...ts.openwall.com>, Linus Torvalds <torvalds@...ux-foundation.org>, Peter Zijlstra <a.p.zijlstra@...llo.nl>, Borislav Petkov <bp@...en8.de>
Subject: Re: x86: PIE support and option to extend KASLR randomization

On Tue, Aug 15, 2017 at 7:47 AM, Daniel Micay <danielmicay@...il.com> wrote:
> On 15 August 2017 at 10:20, Thomas Garnier <thgarnie@...gle.com> wrote:
>> On Tue, Aug 15, 2017 at 12:56 AM, Ingo Molnar <mingo@...nel.org> wrote:
>>>
>>> * Thomas Garnier <thgarnie@...gle.com> wrote:
>>>
>>>> > Do these changes get us closer to being able to build the kernel as truly
>>>> > position independent, i.e. to place it anywhere in the valid x86-64 address
>>>> > space? Or any other advantages?
>>>>
>>>> Yes, PIE allows us to put the kernel anywhere in memory. It will allow us to
>>>> have a full randomized address space where position and order of sections are
>>>> completely random.
>>>> There is still some work to get there but being able to build
>>>> a PIE kernel is a significant step.
>>>
>>> So I _really_ dislike the whole PIE approach, because of the huge slowdown:
>>>
>>> +config RANDOMIZE_BASE_LARGE
>>> +	bool "Increase the randomization range of the kernel image"
>>> +	depends on X86_64 && RANDOMIZE_BASE
>>> +	select X86_PIE
>>> +	select X86_MODULE_PLTS if MODULES
>>> +	default n
>>> +	---help---
>>> +	  Build the kernel as a Position Independent Executable (PIE) and
>>> +	  increase the available randomization range from 1GB to 3GB.
>>> +
>>> +	  This option impacts performance on kernel CPU intensive workloads up
>>> +	  to 10% due to PIE generated code. Impact on user-mode processes and
>>> +	  typical usage would be significantly less (0.50% when you build the
>>> +	  kernel).
>>> +
>>> +	  The kernel and modules will generate slightly more assembly (1 to 2%
>>> +	  increase on the .text sections). The vmlinux binary will be
>>> +	  significantly smaller due to less relocations.
>>>
>>> To put 10% kernel overhead into perspective: enabling this option wipes out about
>>> 5-10 years worth of painstaking optimizations we've done to keep the kernel fast
>>> ... (!!)
>>
>> Note that 10% is the high-bound of a CPU intensive workload.
>
> The cost can be reduced by using -fno-plt these days but some work
> might be required to make that work with the kernel.
>
> Where does that 10% estimate in the kernel config docs come from? I'd
> be surprised if it really cost that much on x86_64. That's a realistic
> cost for i386 with modern GCC (it used to be worse) but I'd expect
> x86_64 to be closer to 2% even for CPU intensive workloads. It should
> be very close to zero with -fno-plt.

I got 8 to 10% on hackbench. Other benchmarks were 4% or lower. I will
look at more recent compilers and -fno-plt as well.

--
Thomas
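For context on the -fno-plt suggestion above: with position-independent code, a call to an external function is normally routed through a PLT stub, while -fno-plt makes the compiler call through the GOT directly, removing one indirection per call site. Below is a minimal sketch of that compiler behaviour (not of the kernel patches themselves), assuming a reasonably recent GCC; the file and function names are hypothetical.

```c
/* plt_demo.c - minimal sketch of the PLT vs. -fno-plt difference.
 * The symbol names are made up for illustration. Compile with -S and
 * compare the generated assembly:
 *
 *   gcc -O2 -fpie -S plt_demo.c -o with-plt.s
 *   gcc -O2 -fpie -fno-plt -S plt_demo.c -o no-plt.s
 *
 * With plain -fpie the external call is typically emitted as
 *     call    external_helper@PLT
 * i.e. a jump through a PLT stub that in turn loads the target address
 * from the GOT, whereas -fno-plt calls through the GOT directly:
 *     call    *external_helper@GOTPCREL(%rip)
 * dropping one level of indirection per call.
 */

extern int external_helper(int x);	/* defined in some other object */

int caller(int x)
{
	return external_helper(x) + 1;
}
```

This is only a sketch of the general code-generation difference that motivates the "very close to zero with -fno-plt" expectation quoted above.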