Message-Id: <20210319212835.3928492-6-keescook@chromium.org>
Date: Fri, 19 Mar 2021 14:28:34 -0700
From: Kees Cook <keescook@...omium.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Kees Cook <keescook@...omium.org>,
	Elena Reshetova <elena.reshetova@...el.com>,
	x86@...nel.org,
	Andy Lutomirski <luto@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Catalin Marinas <catalin.marinas@....com>,
	Will Deacon <will@...nel.org>,
	Mark Rutland <mark.rutland@....com>,
	Alexander Potapenko <glider@...gle.com>,
	Alexander Popov <alex.popov@...ux.com>,
	Ard Biesheuvel <ard.biesheuvel@...aro.org>,
	Jann Horn <jannh@...gle.com>,
	Vlastimil Babka <vbabka@...e.cz>,
	David Hildenbrand <david@...hat.com>,
	Mike Rapoport <rppt@...ux.ibm.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Jonathan Corbet <corbet@....net>,
	Randy Dunlap <rdunlap@...radead.org>,
	kernel-hardening@...ts.openwall.com,
	linux-hardening@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org,
	linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: [PATCH v7 5/6] arm64: entry: Enable random_kstack_offset support

Allow for a randomized stack offset on a per-syscall basis, with roughly
5 bits of entropy. (And include the AAPCS rationale, thanks to Mark
Rutland.)

In order to avoid unconditional stack canaries on syscall entry (due to
the use of alloca()), also disable stack protector for this compilation
unit, avoiding needless checks and slowing of the entry path. As there
is no general way to control stack protector coverage with a function
attribute[1], this must be done at the compilation unit level. This
isn't a problem here, though, since stack protector was not triggered
before: examining the resulting syscall.o, there are no changes in
canary coverage (none before, none now).

[1] A working __attribute__((no_stack_protector)) has been added to GCC
and Clang, but it has not shipped in any released version yet:
https://gcc.gnu.org/git/gitweb.cgi?p=gcc.git;h=346b302d09c1e6db56d9fe69048acb32fbb97845
https://reviews.llvm.org/rG4fbf84c1732fca596ad1d6e96015e19760eb8a9b

Signed-off-by: Kees Cook <keescook@...omium.org>
---
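A quick way for reviewers to see why the flags must be removed for the
whole unit: any function that calls alloca(), as
add_random_kstack_offset() does internally, is instrumented by both
-fstack-protector and -fstack-protector-strong. A minimal userspace
sketch of the pattern (illustrative only, not part of the patch; the
file name is made up, and the empty asm keep-alive is just one way to
pin the allocation):

/* canary_demo.c: illustrative only. Compare the generated code:
 *
 *   gcc -S -O2 -fstack-protector-strong canary_demo.c
 *       -> references __stack_chk_guard / __stack_chk_fail
 *   gcc -S -O2 -fno-stack-protector canary_demo.c
 *       -> no canary setup or check
 */
#include <alloca.h>

void offset_stack(unsigned int offset)
{
	/* Move SP down by a masked random amount, as the kstack
	 * offset helper does. Any alloca() user gets a canary. */
	void *ptr = alloca(offset & 0x3FF);

	/* Keep the allocation alive so the compiler cannot drop it. */
	__asm__ volatile ("" : : "r" (ptr) : "memory");
}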
 arch/arm64/Kconfig          |  1 +
 arch/arm64/kernel/Makefile  |  5 +++++
 arch/arm64/kernel/syscall.c | 10 ++++++++++
 3 files changed, 16 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1f212b47a48a..2d0e5f544429 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -146,6 +146,7 @@ config ARM64
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
 	select HAVE_ARCH_PFN_VALID
 	select HAVE_ARCH_PREL32_RELOCATIONS
+	select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_STACKLEAK
 	select HAVE_ARCH_THREAD_STRUCT_WHITELIST
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index ed65576ce710..6cc97730790e 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -9,6 +9,11 @@ CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE)
 CFLAGS_REMOVE_insn.o = $(CC_FLAGS_FTRACE)
 CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE)
 
+# Remove stack protector to avoid triggering unneeded stack canary
+# checks due to randomize_kstack_offset.
+CFLAGS_REMOVE_syscall.o	 = -fstack-protector -fstack-protector-strong
+CFLAGS_syscall.o	+= -fno-stack-protector
+
 # Object file lists.
 obj-y			:= debug-monitors.o entry.o irq.o fpsimd.o	\
 			   entry-common.o entry-fpsimd.o process.o ptrace.o	\
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index b9cf12b271d7..58227a1c207e 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -5,6 +5,7 @@
 #include <linux/errno.h>
 #include <linux/nospec.h>
 #include <linux/ptrace.h>
+#include <linux/randomize_kstack.h>
 #include <linux/syscalls.h>
 
 #include <asm/daifflags.h>
@@ -43,6 +44,8 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
 {
 	long ret;
 
+	add_random_kstack_offset();
+
 	if (scno < sc_nr) {
 		syscall_fn_t syscall_fn;
 		syscall_fn = syscall_table[array_index_nospec(scno, sc_nr)];
@@ -55,6 +58,13 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
 		ret = lower_32_bits(ret);
 
 	regs->regs[0] = ret;
+
+	/*
+	 * The AAPCS mandates a 16-byte (i.e. 4-bit) aligned SP at
+	 * function boundaries. We want at least 5 bits of entropy so we
+	 * must randomize at least SP[8:4].
+	 */
+	choose_random_kstack_offset(get_random_int() & 0x1FF);
 }
 
 static inline bool has_syscall_work(unsigned long flags)
-- 
2.25.1
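A quick host-side check of the SP[8:4] arithmetic in the comment above
(illustrative only, not part of the patch; the file name is made up,
and alloca()'s rounding is modeled as simple truncation to 16 bytes,
which does not change the ~5-bit result):

/* entropy_check.c: illustrative only. A 9-bit random value (the 0x1FF
 * mask above), once the stack pointer is brought back to the 16-byte
 * alignment the AAPCS mandates, can only vary in bits [8:4]:
 * 2^5 = 32 distinct stack offsets, i.e. about 5 bits of entropy
 * per syscall.
 */
#include <stdio.h>

int main(void)
{
	unsigned int seen[32] = { 0 };
	unsigned int distinct = 0;

	for (unsigned int r = 0; r <= 0x1FF; r++) {
		/* Model 16-byte SP alignment: drop bits [3:0]. */
		unsigned int sp_delta = r & ~0xFU;

		if (!seen[sp_delta >> 4]++)
			distinct++;
	}

	printf("distinct aligned offsets: %u\n", distinct); /* prints 32 */
	return 0;
}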