Message-ID: <2236FBA76BA1254E88B949DDB74E612B41BF8DFB@IRSMSX102.ger.corp.intel.com>
Date: Wed, 26 Oct 2016 11:15:37 +0000
From: "Reshetova, Elena" <elena.reshetova@...el.com>
To: David Windsor <dwindsor@...il.com>, AKASHI Takahiro
	<takahiro.akashi@...aro.org>
CC: "kernel-hardening@...ts.openwall.com"
	<kernel-hardening@...ts.openwall.com>, Kees Cook <keescook@...omium.org>,
	Hans Liljestrand <ishkamiel@...il.com>
Subject: RE: [RFC v2 PATCH 12/13] x86: implementation for
 HARDENED_ATOMIC

<snip>

Btw, I hope no one minds that I cut irrelevant parts out of the conversation with snips like the one above. 
Otherwise I find it takes more time to scroll down to the right places, and the risk of missing something is higher.  

>> +#ifdef CONFIG_HARDENED_ATOMIC
>> +static __always_inline bool atomic_add_negative_wrap(int i, atomic_wrap_t *v)
>> +{
>> +     GEN_BINARY_RMWcc_wrap(LOCK_PREFIX "addl", v->counter, "er", i, "%0", s);
>> +}
>> +#endif /* CONFIG_HARDENED_ATOMIC */
>> +
>>  /**
>>   * atomic_add_return - add integer and return
>>   * @i: integer value to add
>> @@ -153,6 +322,18 @@ static __always_inline bool atomic_add_negative(int i, atomic_t *v)
>>   */
>>  static __always_inline int atomic_add_return(int i, atomic_t *v)
>>  {
>> +     return i + xadd_check_overflow(&v->counter, i);
>> +}
>
> If overflow, should this function still return i + v->counter?
> (The caller would die anyway, though.)
>

>Yes, because in the non-overflow case, xadd_check_overflow() would
>return the previous value of v->counter.  This gets added to i and
>returned, which is correct and guaranteed to not result in an overflow
>(if it did, the checks in xadd_check_overflow() would kill the
>process, as you noted).

>In the overflow case, the caller gets killed anyway: before
>xadd_check_overflow() can return, do_trap() calls
>hardened_atomic_overflow() which calls BUG(), so the return statement
>won't finish executing.

>One thing to note about the pattern of using i +
>xadd_check_overflow(): there's a potential TOCTOU issue if i can be
>modified after xadd_check_overflow() returns, but before the
>expression (i + xadd_check_overflow()) is evaluated.  In areas where i
>is shared between threads, we might want to make (i +
>xadd_check_overflow()) a critical section.

How should we mark the critical section here? 

>>  #define atomic_dec_return(v)  (atomic_sub_return(1, v))
>> +#ifdef CONFIG_HARDENED_ATOMIC
>> +static __always_inline int atomic_dec_return_wrap(atomic_wrap_t *v)
>> +{
>> +     return atomic_sub_return_wrap(1, v);
>> +}
>> +#endif /* CONFIG_HARDENED_ATOMIC */
>>
>>  static __always_inline int atomic_fetch_add(int i, atomic_t *v)
>>  {
>
> and atomic_fetch_add/sub() should do
>
>         return xadd_check_overflow((+/-)i, v);

Indeed, we don't have wrap versions defined here, so the basic ones were left unprotected. 
Fixed now. Thank you for noticing!

>> + * Atomically adds @a to @v, so long as @v was not already @u.
>> + * Returns the old value of @v.
>> + */
>> +static __always_inline int __atomic_add_unless_wrap(atomic_wrap_t *v,
>> +                                                 int a, int u)
>> +{
>> +     int c, old, new;
>> +     c = atomic_read_wrap(v);
>> +     for (;;) {
>> +             if (unlikely(c == (u)))
>> +                     break;
>> +
>> +             asm volatile("addl %2,%0\n"
>> +
>> +#ifdef CONFIG_HARDENED_ATOMIC
>> +                          "jno 0f\n"
>> +                          "subl %2,%0\n"
>> +                          "int $4\n0:\n"
>> +                          _ASM_EXTABLE(0b, 0b)
>> +#endif
>
> Is this a mistake? We don't need a check here.
>

>Yes, this appears to be a mistake.

A clear copy-paste mistake. Fixed now. Thanks again!

>>   * Other variants with different arithmetic operators:
>>   */
>> @@ -149,6 +245,14 @@ static inline long long atomic64_sub_return(long long i, atomic64_t *v)
>>       return i;
>>  }
>>
>> +static inline long long atomic64_sub_return_wrap(long long i, atomic64_wrap_t *v)
>> +{
>> +     alternative_atomic64(sub_return,
>
> sub_return_wrap?

Yes, thank you again! I thought I had caught them all last time, but this one still got in :(

Best Regards,
Elena.
