Message-ID: <CALCETrVnCAzj0atoE1hLjHgmWjWAKVdSLm-UMtukUwWgr7-N9Q@mail.gmail.com>
Date: Wed, 5 Feb 2020 17:17:11 -0800
From: Andy Lutomirski <luto@...nel.org>
To: Kristen Carlson Accardi <kristen@...ux.intel.com>
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
	Borislav Petkov <bp@...en8.de>, "H. Peter Anvin" <hpa@...or.com>,
	Arjan van de Ven <arjan@...ux.intel.com>,
	Kees Cook <keescook@...omium.org>,
	Rick Edgecombe <rick.p.edgecombe@...el.com>,
	X86 ML <x86@...nel.org>, LKML <linux-kernel@...r.kernel.org>,
	Kernel Hardening <kernel-hardening@...ts.openwall.com>
Subject: Re: [RFC PATCH 08/11] x86: Add support for finer grained KASLR

On Wed, Feb 5, 2020 at 2:39 PM Kristen Carlson Accardi
<kristen@...ux.intel.com> wrote:
>
> At boot time, find all the function sections that have separate .text
> sections, shuffle them, and then copy them to new locations. Adjust
> any relocations accordingly.
>
> +	sort(base, num_syms, sizeof(int), kallsyms_cmp, kallsyms_swp);

Hah, here's a huge bottleneck.  Unless you are severely
memory-constrained, never do a sort with an expensive swap function
like this.  Instead allocate an array of indices that starts out as
[0, 1, 2, ...].  Sort *that* where the swap function just swaps the
indices.  Then use the sorted list of indices to permute the actual
data.  The result is exactly one expensive swap per item instead of
one expensive swap per swap.

--Andy
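
For reference, a minimal userspace sketch of the indirect sort Andy
describes (struct sym, sort_syms, and the field names are illustrative
stand-ins, not taken from the patch or the kernel sources): sort a
cheap index array, then apply the resulting permutation to the
expensive records in a single pass.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct sym {			/* stand-in for an expensive-to-swap record */
	unsigned long addr;
	char name[64];
};

static const struct sym *cmp_base;	/* records the comparator reads */

static int cmp_idx(const void *a, const void *b)
{
	const struct sym *x = &cmp_base[*(const size_t *)a];
	const struct sym *y = &cmp_base[*(const size_t *)b];

	return (x->addr > y->addr) - (x->addr < y->addr);
}

static void sort_syms(struct sym *syms, size_t n)
{
	size_t *idx = malloc(n * sizeof(*idx));
	struct sym *out = malloc(n * sizeof(*out));
	size_t i;

	if (!idx || !out)
		goto out;	/* allocation failure: leave input unsorted */

	for (i = 0; i < n; i++)
		idx[i] = i;	/* start with [0, 1, 2, ...] */

	cmp_base = syms;
	qsort(idx, n, sizeof(*idx), cmp_idx);	/* only indices move */

	for (i = 0; i < n; i++)	/* one expensive copy per item */
		out[i] = syms[idx[i]];
	memcpy(syms, out, n * sizeof(*syms));

out:
	free(out);
	free(idx);
}

Each record is copied into its final slot exactly once, so the cost
of moving the big records is linear in n, while the O(n log n) swaps
done by the sort itself touch only word-sized indices.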