Message-ID: <7e7918c6-7186-47f4-4b37-2fc55db1e678@huawei.com>
Date: Tue, 25 Jul 2017 20:03:08 +0800
From: Li Kun <hw.likun@...wei.com>
To: Kees Cook <keescook@...omium.org>, Ingo Molnar <mingo@...nel.org>
CC: Peter Zijlstra <peterz@...radead.org>, Josh Poimboeuf <jpoimboe@...hat.com>,
        Christoph Hellwig <hch@...radead.org>,
        "Eric W. Biederman" <ebiederm@...ssion.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jann Horn <jannh@...gle.com>, Eric Biggers <ebiggers3@...il.com>,
        Elena Reshetova <elena.reshetova@...el.com>,
        "Hans Liljestrand" <ishkamiel@...il.com>,
        Greg KH <gregkh@...uxfoundation.org>,
        Alexey Dobriyan <adobriyan@...il.com>,
        "Serge E. Hallyn" <serge@...lyn.com>, <arozansk@...hat.com>,
        Davidlohr Bueso <dave@...olabs.net>,
        Manfred Spraul <manfred@...orfullife.com>,
        "axboe@...nel.dk" <axboe@...nel.dk>,
        "James Bottomley" <James.Bottomley@...senpartnership.com>,
        "x86@...nel.org" <x86@...nel.org>, Arnd Bergmann <arnd@...db.de>,
        "David S. Miller" <davem@...emloft.net>,
        Rik van Riel <riel@...hat.com>, <linux-kernel@...r.kernel.org>,
        linux-arch <linux-arch@...r.kernel.org>,
        "kernel-hardening@...ts.openwall.com" <kernel-hardening@...ts.openwall.com>
Subject: Re: [PATCH v8 3/3] x86/refcount: Implement fast refcount overflow protection

Hi Kees,


on 2017/7/25 2:35, Kees Cook wrote:
> +static __always_inline __must_check
> +int __refcount_add_unless(refcount_t *r, int a, int u)
> +{
> +	int c, new;
> +
> +	c = atomic_read(&(r->refs));
> +	do {
> +		if (unlikely(c == u))
> +			break;
> +
> +		asm volatile("addl %2,%0\n\t"
> +			REFCOUNT_CHECK_LT_ZERO
> +			: "=r" (new)
> +			: "0" (c), "ir" (a),
> +			  [counter] "m" (r->refs.counter)
> +			: "cc", "cx");
Here, when the result is LT_ZERO, the asm saturates r->refs.counter
directly, which makes the atomic_try_cmpxchg(&(r->refs), &c, new) below
bound to fail on its first attempt.

Maybe we could just saturate the value of the local variable "new" instead?

> +
> +	} while (!atomic_try_cmpxchg(&(r->refs), &c, new));
> +
> +	return c;
> +}
> +

-- 
Best Regards
Li Kun
