Message-ID: <CAG48ez2Z8=0__eoQ+Ekp=EApawZXR4ec_xd2TVPQExLoyMwtRQ@mail.gmail.com>
Date: Fri, 18 Oct 2019 19:12:52 +0200
From: Jann Horn <jannh@...gle.com>
To: Sami Tolvanen <samitolvanen@...gle.com>
Cc: Will Deacon <will@...nel.org>, Catalin Marinas <catalin.marinas@....com>, 
	Steven Rostedt <rostedt@...dmis.org>, Ard Biesheuvel <ard.biesheuvel@...aro.org>, 
	Dave Martin <Dave.Martin@....com>, Kees Cook <keescook@...omium.org>, 
	Laura Abbott <labbott@...hat.com>, Mark Rutland <mark.rutland@....com>, 
	Nick Desaulniers <ndesaulniers@...gle.com>, 
	clang-built-linux <clang-built-linux@...glegroups.com>, 
	Kernel Hardening <kernel-hardening@...ts.openwall.com>, linux-arm-kernel@...ts.infradead.org, 
	kernel list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 18/18] arm64: implement Shadow Call Stack

On Fri, Oct 18, 2019 at 6:16 PM Sami Tolvanen <samitolvanen@...gle.com> wrote:
> This change implements shadow stack switching, initial SCS set-up,
> and interrupt shadow stacks for arm64.
[...]
> +static inline void scs_save(struct task_struct *tsk)
> +{
> +       void *s;
> +
> +       asm volatile("mov %0, x18" : "=r" (s));
> +       task_set_scs(tsk, s);
> +}
> +
> +static inline void scs_load(struct task_struct *tsk)
> +{
> +       asm volatile("mov x18, %0" : : "r" (task_scs(tsk)));
> +       task_set_scs(tsk, NULL);
> +}

These things should probably be __always_inline or something like
that? If the compiler decides not to inline them (e.g. when called
from scs_thread_switch()), stuff will blow up, right?
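
Something like the below is what I have in mind; it's just a sketch of the
hunk quoted above with the annotation changed, reusing task_scs() /
task_set_scs() from the patch:

static __always_inline void scs_save(struct task_struct *tsk)
{
	void *s;

	/* Read the current shadow stack pointer out of x18 and stash it. */
	asm volatile("mov %0, x18" : "=r" (s));
	task_set_scs(tsk, s);
}

static __always_inline void scs_load(struct task_struct *tsk)
{
	/* Point x18 at the task's saved shadow stack, then clear the saved
	 * pointer so it isn't left lying around in the task_struct. */
	asm volatile("mov x18, %0" : : "r" (task_scs(tsk)));
	task_set_scs(tsk, NULL);
}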

> +static inline void scs_thread_switch(struct task_struct *prev,
> +                                    struct task_struct *next)
> +{
> +       scs_save(prev);
> +       scs_load(next);
> +
> +       if (unlikely(scs_corrupted(prev)))
> +               panic("corrupted shadow stack detected inside scheduler\n");
> +}
[...]
> +#ifdef CONFIG_SHADOW_CALL_STACK
> +DECLARE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
> +#endif

If an attacker has a leak of some random function pointer to find the
ASLR base address, they'll know where irq_shadow_call_stack_ptr is.
With an arbitrary read (to read
irq_shadow_call_stack_ptr[sched_getcpu()]) followed by an arbitrary
write, they'd be able to overwrite the shadow stack. Or with just an
arbitrary write without a read, they could change
irq_shadow_call_stack_ptr[sched_getcpu()] to point elsewhere. This is
different from the intended protection level according to
<https://clang.llvm.org/docs/ShadowCallStack.html#security>, which
talks about "a runtime that avoids exposing the address of the shadow
call stack to attackers that can read arbitrary memory". Of course,
that's extremely hard to implement in the context of the kernel, where
you can see all the memory management data structures and all physical
memory.
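
To spell the chain out, here is a purely hypothetical attacker-side sketch,
not real code: read64()/write64() stand for arbitrary-read/-write primitives
the attacker is assumed to already have, and IRQ_SCS_PTR_OFFSET /
PER_CPU_OFFSET() are placeholder offsets derived from vmlinux plus the
leaked ASLR base.

static void sketch_of_attack(unsigned long kaslr_base, int cpu)
{
	/* Address of irq_shadow_call_stack_ptr for this CPU, derivable once
	 * the ASLR base has leaked (offsets are placeholders). */
	unsigned long slot = kaslr_base + IRQ_SCS_PTR_OFFSET +
			     PER_CPU_OFFSET(cpu);

	/* Variant 1: read where the shadow stack lives, then rewrite one of
	 * its return-address slots. */
	unsigned long scs = read64(slot);
	write64(scs, attacker_chosen_address);

	/* Variant 2: skip the read and just redirect the per-cpu pointer to
	 * attacker-controlled memory. */
	write64(slot, attacker_controlled_scs);
}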

You might want to write something in the cover letter about what the
benefits of this mechanism are compared to STACKPROTECTOR in the context
of the kernel, including a specific description of which types of
attacker capabilities it is supposed to defend against.
