Message-ID: <2236FBA76BA1254E88B949DDB74E612B41BDAC5B@IRSMSX102.ger.corp.intel.com>
Date: Tue, 4 Oct 2016 07:15:56 +0000
From: "Reshetova, Elena" <elena.reshetova@...el.com>
To: Jann Horn <jann@...jh.net>
CC: "kernel-hardening@...ts.openwall.com"
	<kernel-hardening@...ts.openwall.com>, "keescook@...omium.org"
	<keescook@...omium.org>, Hans Liljestrand <ishkamiel@...il.com>, "David
 Windsor" <dwindsor@...il.com>
Subject: RE: [RFC PATCH 12/13] x86: x86 implementation
 for HARDENED_ATOMIC


On Mon, Oct 03, 2016 at 09:41:25AM +0300, Elena Reshetova wrote:
> This adds x86-specific code in order to support the HARDENED_ATOMIC
> feature. When an overflow is detected in the atomic_t or atomic_long_t
> types, the counter is decremented back by one (to keep it at INT_MAX or
> LONG_MAX) and the issue is reported using BUG().
> The side effect is that in both legitimate and non-legitimate cases a
> counter cannot wrap.
> 
> Signed-off-by: Elena Reshetova <elena.reshetova@...el.com>
> Signed-off-by: Hans Liljestrand <ishkamiel@...il.com>
> Signed-off-by: David Windsor <dwindsor@...il.com>
> ---
[...]
>  static __always_inline void atomic_add(int i, atomic_t *v)
>  {
> -	asm volatile(LOCK_PREFIX "addl %1,%0"
> +	asm volatile(LOCK_PREFIX "addl %1,%0\n"
> +
> +#ifdef CONFIG_HARDENED_ATOMIC
> +		     "jno 0f\n"
> +		     LOCK_PREFIX "subl %1,%0\n"
> +		     "int $4\n0:\n"
> +		     _ASM_EXTABLE(0b, 0b)
> +#endif
> +
>  		     : "+m" (v->counter)
>  		     : "ir" (i));
>  }
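For reference, atomic_add() reads as follows with the hunk above applied
(my reconstruction from the diff; the comments are editorial, and
LOCK_PREFIX and _ASM_EXTABLE are the standard kernel definitions):

static __always_inline void atomic_add(int i, atomic_t *v)
{
	asm volatile(LOCK_PREFIX "addl %1,%0\n"

#ifdef CONFIG_HARDENED_ATOMIC
		     /* skip the fixup unless the addl set OF */
		     "jno 0f\n"
		     /* undo the wrapping add, pinning the counter */
		     LOCK_PREFIX "subl %1,%0\n"
		     /* raise the overflow exception (#OF), reported via BUG() */
		     "int $4\n0:\n"
		     /* exception table entry so execution resumes at 0: */
		     _ASM_EXTABLE(0b, 0b)
#endif

		     : "+m" (v->counter)
		     : "ir" (i));
}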

> It might make sense to point out in the Kconfig entry that on x86, this
> can only be relied on if kernel.panic_on_oops==1, because otherwise you
> can (depending on the bug, in a worst-case scenario) get past 0x7fffffff
> within seconds using multiple racing processes.
> (See https://bugs.chromium.org/p/project-zero/issues/detail?id=856 .)
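To make the race concrete: written out sequentially, one bad interleaving
looks like this (an illustrative C11 sketch of my own, with comments
showing what the x86 OF flag would do; not code from the patch):

#include <limits.h>
#include <stdatomic.h>
#include <stdio.h>

int main(void)
{
	/* the refcount sits at the clamp value */
	atomic_int counter = INT_MAX;

	/* CPU1: addl -> wraps to INT_MIN, OF set */
	atomic_fetch_add(&counter, 1);
	/* CPU2: addl -> INT_MIN+1, OF clear (this add did not overflow) */
	atomic_fetch_add(&counter, 1);
	/* CPU1: compensating subl -> INT_MIN */
	atomic_fetch_sub(&counter, 1);
	/* CPU1: int $4 -> BUG(); without panic_on_oops only that task dies */

	/* the counter escaped the clamp and can now count back up to 0 */
	printf("counter = %d\n", atomic_load(&counter));
	return 0;
}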

I will reference this discussion if we keep the current approach. Perhaps, after the performance measurements, we can switch to the atomic_add_unless-based version and eliminate the issue entirely.
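For comparison, a check-before-add scheme in the spirit of
atomic_add_unless() could look roughly like the following (my sketch only,
with a hypothetical name and assuming i > 0; not code from this patch set):

static __always_inline void atomic_add_checked(int i, atomic_t *v)
{
	int old, new;

	do {
		old = atomic_read(v);
		if (unlikely(old > INT_MAX - i))
			BUG();	/* would overflow: refuse the add */
		new = old + i;
		/* retry if another CPU changed the counter meanwhile */
	} while (atomic_cmpxchg(v, old, new) != old);
}

Because the check and the update are tied together by the cmpxchg, the
counter can never be observed past INT_MAX, at the cost of a cmpxchg loop
instead of a single locked addl.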

> An additional idea for future development:
>
> One way to work around that would be to interpret the stored value 2^30
> as zero, and interpret other values accordingly. Like this:
>
> #define SIGNED_ATOMIC_BASE 0x40000000U
>
> static __always_inline int atomic_read(const atomic_t *v) {
>     return READ_ONCE((v)->counter) - SIGNED_ATOMIC_BASE; }
>
> static __always_inline void atomic_set(atomic_t *v, int i) {
>     WRITE_ONCE(v->counter, i + SIGNED_ATOMIC_BASE); }
>
> static __always_inline int atomic_add_return(int i, atomic_t *v) {
>     return i + xadd_check_overflow(&v->counter, i) - SIGNED_ATOMIC_BASE; }
>
> With this change, atomic_t could still be used as a signed integer with
> half the range of an int, but its stored value would only become negative
> on overflow. Then, the "jno" instruction in the hardening code could be
> replaced with "jns" to reliably block overflows.
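The arithmetic behind the bias is easy to check in userspace; a minimal
standalone sketch of the mapping (using the SIGNED_ATOMIC_BASE above):

#include <stdio.h>

#define SIGNED_ATOMIC_BASE 0x40000000U

int main(void)
{
	/* logical 0 is stored with the sign bit clear */
	unsigned int stored = 0U + SIGNED_ATOMIC_BASE;
	printf("logical 0          -> stored 0x%08x\n", stored);

	/* the top of the halved range still has the sign bit clear */
	stored = 0x3fffffffU + SIGNED_ATOMIC_BASE;
	printf("logical 0x3fffffff -> stored 0x%08x\n", stored);

	/* one more increment sets the stored sign bit, so "jns" would trap */
	stored += 1U;
	printf("overflow           -> stored 0x%08x (sign bit set)\n", stored);
	return 0;
}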

> The downsides of this approach would be:
>  - One extra increment or decrement every time an atomic_t is read or
>    written. This should be relatively cheap - it should be operating on
>    a register - but it's still not ideal. atomic_t users could perhaps
>    opt out with something like atomic_unsigned_t.
>  - Implicit atomic_t initialization to zero by zeroing memory would stop
>    working. This would probably be the biggest issue with this approach.
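The second downside can be seen in the same terms: zeroed memory holds the
stored value 0, which the biased atomic_read() above would turn into a
large negative logical value (again only an illustrative sketch):

#include <stdio.h>

#define SIGNED_ATOMIC_BASE 0x40000000U

int main(void)
{
	unsigned int stored = 0;	/* what memset()-zeroed memory contains */
	/* biased atomic_read(): stored - SIGNED_ATOMIC_BASE */
	int logical = (int)(stored - SIGNED_ATOMIC_BASE);
	printf("a zeroed atomic_t reads as %d, not 0\n", logical);
	return 0;
}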

I am not sure the bias is a good idea at all. It makes things much more complicated and potentially impacts performance...
