Message-ID: <20190426001143.4983-12-namit@vmware.com>
Date: Thu, 25 Apr 2019 17:11:31 -0700
From: Nadav Amit <namit@...are.com>
To: Peter Zijlstra <peterz@...radead.org>, Borislav Petkov <bp@...en8.de>,
	Andy Lutomirski <luto@...nel.org>, Ingo Molnar <mingo@...hat.com>
CC: <linux-kernel@...r.kernel.org>, <x86@...nel.org>, <hpa@...or.com>,
	Thomas Gleixner <tglx@...utronix.de>, Nadav Amit <nadav.amit@...il.com>,
	Dave Hansen <dave.hansen@...ux.intel.com>, <linux_dti@...oud.com>,
	<linux-integrity@...r.kernel.org>, <linux-security-module@...r.kernel.org>,
	<akpm@...ux-foundation.org>, <kernel-hardening@...ts.openwall.com>,
	<linux-mm@...ck.org>, <will.deacon@....com>, <ard.biesheuvel@...aro.org>,
	<kristen@...ux.intel.com>, <deneen.t.dock@...el.com>,
	Rick Edgecombe <rick.p.edgecombe@...el.com>, Nadav Amit <namit@...are.com>,
	Kees Cook <keescook@...omium.org>, Dave Hansen <dave.hansen@...el.com>,
	Masami Hiramatsu <mhiramat@...nel.org>, Jessica Yu <jeyu@...nel.org>
Subject: [PATCH v5 11/23] x86/module: Avoid breaking W^X while loading modules

When modules and BPF filters are loaded, there is a time window in
which some memory is both writable and executable. An attacker that has
already found another vulnerability (e.g., a dangling pointer) might be
able to exploit this behavior to overwrite kernel code. Prevent having
writable and executable PTEs at this stage.

In addition, avoiding W+X mappings also slightly simplifies the
patching of module code during initialization (e.g., by alternatives
and static keys), as is done in the next patch. This was actually the
main motivation for this patch.

To avoid W+X mappings, set the pages initially as RW (NX), and only
after they have been set as RO set them as X as well. Making them
executable is done as a separate step to avoid a window in which one
core still has the old PTE cached (hence writable) while another
already sees the updated PTE (executable), which would break the W^X
protection.

Cc: Kees Cook <keescook@...omium.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Dave Hansen <dave.hansen@...el.com>
Cc: Masami Hiramatsu <mhiramat@...nel.org>
Cc: Jessica Yu <jeyu@...nel.org>
Suggested-by: Thomas Gleixner <tglx@...utronix.de>
Suggested-by: Andy Lutomirski <luto@...capital.net>
Signed-off-by: Nadav Amit <namit@...are.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
---
 arch/x86/kernel/alternative.c | 28 +++++++++++++++++++++-------
 arch/x86/kernel/module.c      |  2 +-
 include/linux/filter.h        |  1 +
 kernel/module.c               |  5 +++++
 4 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 599203876c32..3d2b6b6fb20c 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -668,15 +668,29 @@ void __init alternative_instructions(void)
  * handlers seeing an inconsistent instruction while you patch.
  */
 void *__init_or_module text_poke_early(void *addr, const void *opcode,
-					      size_t len)
+				       size_t len)
 {
 	unsigned long flags;
-	local_irq_save(flags);
-	memcpy(addr, opcode, len);
-	local_irq_restore(flags);
-	sync_core();
-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
-	   that causes hangs on some VIA CPUs. */
+
+	if (boot_cpu_has(X86_FEATURE_NX) &&
+	    is_module_text_address((unsigned long)addr)) {
+		/*
+		 * Modules text is marked initially as non-executable, so the
+		 * code cannot be running and speculative code-fetches are
+		 * prevented. Just change the code.
+		 */
+		memcpy(addr, opcode, len);
+	} else {
+		local_irq_save(flags);
+		memcpy(addr, opcode, len);
+		local_irq_restore(flags);
+		sync_core();
+
+		/*
+		 * Could also do a CLFLUSH here to speed up CPU recovery; but
+		 * that causes hangs on some VIA CPUs.
+		 */
+	}
 	return addr;
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index b052e883dd8c..cfa3106faee4 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -87,7 +87,7 @@ void *module_alloc(unsigned long size)
 	p = __vmalloc_node_range(size, MODULE_ALIGN,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL,
-				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
+				    PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 	if (p && (kasan_module_alloc(p, size) < 0)) {
 		vfree(p);
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 6074aa064b54..14ec3bdad9a9 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -746,6 +746,7 @@ static inline void bpf_prog_unlock_ro(struct bpf_prog *fp)
 static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
 {
 	set_memory_ro((unsigned long)hdr, hdr->pages);
+	set_memory_x((unsigned long)hdr, hdr->pages);
 }
 
 static inline void bpf_jit_binary_unlock_ro(struct bpf_binary_header *hdr)
diff --git a/kernel/module.c b/kernel/module.c
index 0b9aa8ab89f0..2b2845ae983e 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -1950,8 +1950,13 @@ void module_enable_ro(const struct module *mod, bool after_init)
 		return;
 
 	frob_text(&mod->core_layout, set_memory_ro);
+	frob_text(&mod->core_layout, set_memory_x);
+
 	frob_rodata(&mod->core_layout, set_memory_ro);
+
 	frob_text(&mod->init_layout, set_memory_ro);
+	frob_text(&mod->init_layout, set_memory_x);
+
 	frob_rodata(&mod->init_layout, set_memory_ro);
 
 	if (after_init)
-- 
2.17.1
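
For readers following along, below is a minimal sketch of the allocate -> write -> RO -> X ordering that the changelog describes and that the module_enable_ro() and bpf_jit_binary_lock_ro() hunks above implement. The helper name and the plain vmalloc() allocation are illustrative assumptions and not part of the patch; the set_memory_*() calls are the kernel APIs used in the diff.

	#include <linux/mm.h>
	#include <linux/set_memory.h>
	#include <linux/string.h>
	#include <linux/vmalloc.h>

	/*
	 * Hypothetical helper, for illustration only: publish a freshly
	 * written code region without ever creating a W+X mapping.
	 */
	static void *seal_text_sketch(const void *code, size_t len)
	{
		unsigned long npages = PAGE_ALIGN(len) >> PAGE_SHIFT;
		void *p;

		/* 1. Allocate RW, non-executable memory (PAGE_KERNEL, not PAGE_KERNEL_EXEC). */
		p = vmalloc(len);
		if (!p)
			return NULL;

		/* 2. Write/patch the code while the mapping is still writable and NX. */
		memcpy(p, code, len);

		/* 3. Drop write permission first ... */
		set_memory_ro((unsigned long)p, npages);

		/*
		 * 4. ... and only then grant execute, as a separate step, so
		 *    no PTE is ever writable and executable at the same time.
		 */
		set_memory_x((unsigned long)p, npages);

		return p;
	}

The same two-step RO-then-X ordering is what the real allocation paths (module loading and the BPF JIT) follow after this patch; doing X before RO would reopen the window the changelog warns about.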