Message-ID: <20161028051856.GE19531@linaro.org>
Date: Fri, 28 Oct 2016 14:18:57 +0900
From: AKASHI Takahiro <takahiro.akashi@...aro.org>
To: Mark Rutland <mark.rutland@....com>
Cc: kernel-hardening@...ts.openwall.com,
	"Reshetova, Elena" <elena.reshetova@...el.com>,
	David Windsor <dave@...gbits.org>,
	Kees Cook <keescook@...omium.org>,
	Hans Liljestrand <ishkamiel@...il.com>,
	Colin Vidal <colin@...dal.org>
Subject: Re: [RFC 2/2] arm: implementation for
 HARDENED_ATOMIC

On Thu, Oct 27, 2016 at 02:24:36PM +0100, Mark Rutland wrote:
> Hi,
> 
> On Tue, Oct 18, 2016 at 04:59:21PM +0200, Colin Vidal wrote:
> > This adds arm-specific code in order to support the HARDENED_ATOMIC
> > feature. When an overflow is detected in atomic_t, atomic64_t or
> > atomic_long_t, an exception is raised and hardened_atomic_overflow
> > is called.
> 
> I have some comments below, but for real review this needs to go via the
> linux-arm-kernel list.

Yeah, definitely, but let me reply to a few points here first.

> > diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
> > index 66d0e21..fdaee17 100644
> > --- a/arch/arm/include/asm/atomic.h
> > +++ b/arch/arm/include/asm/atomic.h
> > @@ -17,18 +17,52 @@
> >  #include <linux/irqflags.h>
> >  #include <asm/barrier.h>
> >  #include <asm/cmpxchg.h>
> > +#include <linux/bug.h>
> >  
> >  #define ATOMIC_INIT(i)	{ (i) }
> >  
> >  #ifdef __KERNEL__
> >  
> > +#ifdef CONFIG_HARDENED_ATOMIC
> > +#define HARDENED_ATOMIC_INSN "bkpt 0xf103"
> 
> Please put the immediate in a #define somewhere.
> 
> What about thumb2 kernels?
> 
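[ For illustration only (the names here are invented): the immediate
could be de-magicked along these lines. Note that the thumb BKPT
encoding (T1) carries only an 8-bit immediate, so 0xf103 cannot be
encoded as-is in a thumb2 kernel.

#include <linux/stringify.h>

/* hypothetical names; 0xf103 is the immediate this patch uses */
#define BKPT_ATOMIC_OVERFLOW_IMM	0xf103
/* A1 (ARM) encoding of "bkpt 0xf103"; cf. the check in do_PrefetchAbort() */
#define BKPT_ATOMIC_OVERFLOW_INSN_ARM	0xe12f1073

#define HARDENED_ATOMIC_INSN \
	"bkpt " __stringify(BKPT_ATOMIC_OVERFLOW_IMM)
]
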
> > +#define _ASM_EXTABLE(from, to)			\
> > +	".pushsection __ex_table,\"a\"\n"	\
> > +	".align 3\n"				\
> > +	".long "#from","#to"\n"			\
> > +	".popsection"
> > +#define __OVERFLOW_POST				\
> > +	"bvc 3f\n"				\
> > +	"2: "HARDENED_ATOMIC_INSN"\n"		\
> > +	"3:\n"
> > +#define __OVERFLOW_POST_RETURN			\
> > +	"bvc 3f\n"				\
> > +	"mov %0,%1\n"				\
> > +	"2: "HARDENED_ATOMIC_INSN"\n"		\
> > +	"3:\n"
> > +#define __OVERFLOW_EXTABLE			\
> > +	"4:\n"					\
> > +	_ASM_EXTABLE(2b, 4b)
> > +#else
> > +#define __OVERFLOW_POST
> > +#define __OVERFLOW_POST_RETURN
> > +#define __OVERFLOW_EXTABLE
> > +#endif
> > +
> 
> All this should live close to the assembly using it, to make it possible
> to follow.
> 
> This may also not be the best way of structuring this code. The
> additional indirection of passing this in at a high level makes it hard
> to read and potentially fragile. For single instructions it was ok, but
> I'm not so sure that it's ok for larger sequences like this.

I did a similar thing for my arm64 port (ll/sc version), as the current
macros are already complicated and I have no better idea for generating
the protected and _wrap definitions in the same way.

See below. The differences from Colin's arm port are:
  * control __HARDENED_ATOMIC_CHECK/__CL* directly instead of passing them
    in as arguments
  * modify regs->pc directly in the handler to skip the "brk" (the handler
    is not shown in this hunk; a rough sketch follows after it) instead of
    using _ASM_EXTABLE (& fixup_exception())

Anyway, I'm holding off on posting my arm64 port while the discussions
around Elena's x86 patch are still ongoing.

Thanks,
-Takahiro AKASHI
===8<===
#define ATOMIC_OP(op, asm_op, wrap, cl)					\
  ...

#define ATOMIC_OP_RETURN(name, mb, acq, rel, op, asm_op, wrap, cl)	\
__LL_SC_INLINE int							\
__LL_SC_PREFIX(atomic_##op##_return##wrap##name(int i, atomic##wrap##_t *v))\
{									\
	unsigned long tmp;						\
	int result;							\
									\
	asm volatile("// atomic_" #op "_return" #name "\n"		\
"	prfm	pstl1strm, %2\n"					\
"1:	ld" #acq "xr	%w0, %2\n"					\
"	" #asm_op "	%w0, %w0, %w3\n"				\
	__HARDENED_ATOMIC_CHECK						\
"	st" #rel "xr	%w1, %w0, %2\n"					\
"	cbnz	%w1, 1b\n"						\
"	" #mb "\n"							\
"4:"									\
	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
	: "Ir" (i)							\
	: cl);								\
									\
	return result;							\
}									\
__LL_SC_EXPORT(atomic_##op##_return##wrap##name);

#define ATOMIC_FETCH_OP(name, mb, acq, rel, op, asm_op, wrap, cl)	\
  ...

#define ATOMIC_OPS(...)							\
	ATOMIC_OP(__VA_ARGS__, __CL)					\
	ATOMIC_OP_RETURN(        , dmb ish,  , l, __VA_ARGS__, __CL_MEM)\
	ATOMIC_OP_RETURN(_relaxed,        ,  ,  , __VA_ARGS__, __CL)	\
	ATOMIC_OP_RETURN(_acquire,        , a,  , __VA_ARGS__, __CL_MEM)\
	ATOMIC_OP_RETURN(_release,        ,  , l, __VA_ARGS__, __CL_MEM)\
	ATOMIC_FETCH_OP (        , dmb ish,  , l, __VA_ARGS__, __CL_MEM)\
	ATOMIC_FETCH_OP (_relaxed,        ,  ,  , __VA_ARGS__, __CL)	\
	ATOMIC_FETCH_OP (_acquire,        , a,  , __VA_ARGS__, __CL_MEM)\
	ATOMIC_FETCH_OP (_release,        ,  , l, __VA_ARGS__, __CL_MEM)

#ifdef CONFIG_HARDENED_ATOMIC
#define __HARDENED_ATOMIC_CHECK						\
"	bvc	3f\n"							\
"2:	brk	" __stringify(BUG_ATOMIC_OVERFLOW_BRK_IMM) "\n"		\
"	b	4f\n"							\
"3:"
#define __CL     "cc"
#define __CL_MEM "cc", "memory"

ATOMIC_OPS(add, adds,      )
ATOMIC_OPS(sub, subs,      )
#else
#define __HARDENED_ATOMIC_CHECK
#define __CL
#define __CL_MEM

ATOMIC_OPS(add,  add,      )
ATOMIC_OPS(sub,  sub,      )
#endif

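/* finally, regenerate everything as unchecked _wrap variants */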
#undef __HARDENED_ATOMIC_CHECK
#define __HARDENED_ATOMIC_CHECK
#undef __CL
#undef __CL_MEM
#define __CL
#define __CL_MEM "memory"
ATOMIC_OPS(add,  add, _wrap)
ATOMIC_OPS(sub,  sub, _wrap)
===>8===
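
For reference, a rough and untested sketch of such a handler, modelled
on the existing bug_handler()/break_hook machinery in
arch/arm64/kernel/traps.c; only BUG_ATOMIC_OVERFLOW_BRK_IMM comes from
the hunk above, the rest is the stock break_hook API:

#include <linux/init.h>
#include <asm/debug-monitors.h>	/* struct break_hook, DBG_HOOK_HANDLED */
#include <asm/esr.h>		/* ESR_ELx_VAL_BRK64() */
#include <asm/insn.h>		/* AARCH64_INSN_SIZE */

static int hardened_atomic_brk_handler(struct pt_regs *regs, unsigned int esr)
{
	/* hardened_atomic_overflow() is provided by the generic patch */
	hardened_atomic_overflow(regs);

	/*
	 * Step over the brk; execution resumes at the following "b 4f",
	 * so the overflowed value is never written back by stxr.
	 */
	regs->pc += AARCH64_INSN_SIZE;
	return DBG_HOOK_HANDLED;
}

static struct break_hook hardened_atomic_break_hook = {
	.esr_mask	= 0xffffffff,
	.esr_val	= (u32)ESR_ELx_VAL_BRK64(BUG_ATOMIC_OVERFLOW_BRK_IMM),
	.fn		= hardened_atomic_brk_handler,
};

static int __init hardened_atomic_brk_init(void)
{
	register_break_hook(&hardened_atomic_break_hook);
	return 0;
}
arch_initcall(hardened_atomic_brk_init);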


> > diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
> > index 3a2e678..ce8ee00 100644
> > --- a/arch/arm/mm/fault.c
> > +++ b/arch/arm/mm/fault.c
> > @@ -580,6 +580,21 @@ do_PrefetchAbort(unsigned long addr, unsigned int ifsr, struct pt_regs *regs)
> >  	const struct fsr_info *inf = ifsr_info + fsr_fs(ifsr);
> >  	struct siginfo info;
> >  
> > +#ifdef CONFIG_HARDENED_ATOMIC
> > +	if (fsr_fs(ifsr) == FAULT_CODE_DEBUG) {
> 
> You'll need to justify why this isn't in the ifsr_info table. It has the
> same basic shape as the usual set of handlers.
> 
> I note that we don't seem to use SW breakpoints at all currently, and I
> suspect there's a reason for that which we need to consider.
> 
> Also, if this *must* live here, please make it a static inline with an
> empty stub, rather than an ifdef'd block.
> 
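[ A rough sketch of that pattern (the helper name is invented; the body
lifts the code from the hunk below, folding in the __le32/le32_to_cpu
fix Mark asks for further down):

/* in arch/arm/mm/fault.c */
#ifdef CONFIG_HARDENED_ATOMIC
static bool hardened_atomic_prefetch_abort(unsigned long pc,
					   unsigned int ifsr,
					   struct pt_regs *regs)
{
	__le32 bkpt;

	if (fsr_fs(ifsr) != FAULT_CODE_DEBUG)
		return false;

	if (probe_kernel_address((const unsigned int *)pc, bkpt) ||
	    le32_to_cpu(bkpt) != 0xe12f1073)	/* ideally a #define */
		return false;

	current->thread.error_code = ifsr;
	current->thread.trap_no = 0;
	hardened_atomic_overflow(regs);
	fixup_exception(regs);
	return true;
}
#else
static inline bool hardened_atomic_prefetch_abort(unsigned long pc,
						  unsigned int ifsr,
						  struct pt_regs *regs)
{
	return false;
}
#endif

do_PrefetchAbort() could then call it unconditionally:

	if (hardened_atomic_prefetch_abort(instruction_pointer(regs),
					   ifsr, regs))
		return;
]
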
> > +		unsigned long pc = instruction_pointer(regs);
> > +		unsigned int bkpt;
> > +
> > +		if (!probe_kernel_address((const unsigned int *)pc, bkpt) &&
> > +		    cpu_to_le32(bkpt) == 0xe12f1073) {
> 
> This appears to be the A1 encoding from the ARM ARM. What about the T1
> encoding, i.e. thumb?
> 
> Regardless, *please* de-magic the number using a #define.
> 
> Also, this should be le32_to_cpu -- in the end we're treating the
> converted value as cpu-native. The variable itself should be a __le32.
> 
> Thanks,
> Mark.
> 
> > +			current->thread.error_code = ifsr;
> > +			current->thread.trap_no = 0;
> > +			hardened_atomic_overflow(regs);
> > +			fixup_exception(regs);
> > +			return;
> > +		}
> > +	}
> > +#endif
> >  	if (!inf->fn(addr, ifsr | FSR_LNX_PF, regs))
> >  		return;
> >  
> > -- 
> > 2.7.4
> > 
