Message-ID: <b3c2f9b4-8cf2-6290-ad2b-2acbbb3b28cd@huawei.com>
Date: Wed, 26 Jul 2017 11:32:55 +0800
From: Li Kun <hw.likun@...wei.com>
To: Ard Biesheuvel <ard.biesheuvel@...aro.org>
CC: <linux-arm-kernel@...ts.infradead.org>, <kernel-hardening@...ts.openwall.com>,
	<will.deacon@....com>, <keescook@...omium.org>, <mark.rutland@....com>,
	<labbott@...oraproject.org>
Subject: Re: [PATCH v2] arm64: kernel: implement fast refcount checking

Hi Ard,

on 2017/7/26 2:15, Ard Biesheuvel wrote:
> +static __always_inline __must_check int __refcount_add_unless(refcount_t *r,
> +						       int a, int u)
> +{
> +	int c, new;
> +
> +	c = atomic_read(&(r->refs));
> +	do {
> +		if (unlikely(c == u))
> +			break;
> +
> +		asm volatile(
> +		"adds	%0, %0, %2 ;"
> +		REFCOUNT_CHECK(lt)
> +		: "=r" (new)
> +		: "0" (c), "Ir" (a),
> +		  [counter] "r" (&r->refs.counter),
> +		  [brk_imm] "i" (REFCOUNT_BRK_IMM)
> +		: "cc", "x16", "x17");
> +
> +	} while (!atomic_try_cmpxchg(&(r->refs), &c, new));
> +
> +	return c;
> +}
> +

As in my reply to Kees' v8 patch, the logic here is a little incorrect:
if we saturate r->refs.counter and return from the exception, the value
of "new" is not changed and will be written back to the refcount. I
implemented the logic like this; maybe you can take it as a reference if
you find it useful.

int __refcount_add_unless(refcount_t *r, int a, int u)
{
	int c;
	volatile int new;

	c = atomic_read(&(r->refs));
	do {
		if (unlikely(c == u))
			break;

		asm volatile("adds	%w0, %w1, %w2\n"
		: "=&r" (new)
		: "r" (c), "Ir" (a)
		: "memory");

		asm volatile(REFCOUNT_CHECK_LT_ZERO
		:: [imm] "i" (REFCOUNT_BRK_IMM), [counter] "r" (&new)
		: "cc", "x19");
	} while (!atomic_try_cmpxchg(&(r->refs), &c, new));

	return c;
}

>  /* This registration must happen early, before debug_traps_init(). */
>  void __init trap_init(void)
>  {
>  	register_break_hook(&bug_break_hook);
> +	register_break_hook(&refcount_break_hook);
>  }
> diff --git a/arch/arm64/lib/atomic_ll_sc.c b/arch/arm64/lib/atomic_ll_sc.c
> index b0c538b0da28..5f038abdc635 100644
> --- a/arch/arm64/lib/atomic_ll_sc.c
> +++ b/arch/arm64/lib/atomic_ll_sc.c
> @@ -1,3 +1,9 @@
>  #include <asm/atomic.h>
>  #define __ARM64_IN_ATOMIC_IMPL
> +#undef REFCOUNT_CHECK
> +#undef REFCOUNT_INPUTS
> +#undef REFCOUNT_CLOBBERS
> +#define REFCOUNT_CHECK(cond)
> +#define REFCOUNT_INPUTS(r)
> +#define REFCOUNT_CLOBBERS	: "cc"
>  #include <asm/atomic_ll_sc.h>

-- 
Best Regards
Li Kun
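
For readers following the thread, here is a minimal, userspace-only sketch
of the idea behind the variant above: the overflow fix-up rewrites the
local copy ("new") rather than the counter itself, so it is the final
cmpxchg that publishes the saturated value. This is not the kernel code;
refcount_t, the arm64 brk handler and the saturation value are only
mimicked, and fake_overflow_handler(), model_add_unless() and
REFCOUNT_SATURATED are illustrative names, not taken from the patch.

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

#define REFCOUNT_SATURATED	(INT_MIN / 2)	/* illustrative value only */

/* Stand-in for the brk handler writing the saturated value to [counter]. */
static void fake_overflow_handler(int *counter)
{
	*counter = REFCOUNT_SATURATED;
}

/* Models __refcount_add_unless() with the check applied to the local copy. */
static int model_add_unless(int *refs, int a, int u)
{
	int c = __atomic_load_n(refs, __ATOMIC_RELAXED);
	int new;

	do {
		if (c == u)
			break;

		/* unsigned add models the wrapping "adds" without signed UB */
		new = (int)((unsigned int)c + (unsigned int)a);
		if (new < 0)				/* where brk would fire */
			fake_overflow_handler(&new);	/* fix up the local copy */

	} while (!__atomic_compare_exchange_n(refs, &c, new, false,
					      __ATOMIC_RELAXED,
					      __ATOMIC_RELAXED));
	return c;
}

int main(void)
{
	int refs = INT_MAX;				/* about to overflow */

	model_add_unless(&refs, 1, 0);
	printf("refs = %d (REFCOUNT_SATURATED = %d)\n",
	       refs, REFCOUNT_SATURATED);
	return 0;
}

One way to read the design: because the fix-up only touches "new", the
store to r->refs still happens through the cmpxchg, so a concurrent update
of the counter between the exception and the store makes the cmpxchg fail
and the loop retry with a fresh value instead of silently overwriting it.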