Message-ID: <CAGXu5jLZw1aK_wqSiAi3ddxPavOC1i47q4ahnwVnja=gsrLs2w@mail.gmail.com>
Date: Mon, 31 Jul 2017 14:16:21 -0700
From: Kees Cook <keescook@...omium.org>
To: Ard Biesheuvel <ard.biesheuvel@...aro.org>
Cc: "linux-arm-kernel@...ts.infradead.org" <linux-arm-kernel@...ts.infradead.org>, 
	"kernel-hardening@...ts.openwall.com" <kernel-hardening@...ts.openwall.com>, Will Deacon <will.deacon@....com>, 
	Mark Rutland <mark.rutland@....com>, Laura Abbott <labbott@...oraproject.org>, 
	Li Kun <hw.likun@...wei.com>
Subject: Re: [PATCH v4] arm64: kernel: implement fast refcount checking

On Mon, Jul 31, 2017 at 12:22 PM, Ard Biesheuvel
<ard.biesheuvel@...aro.org> wrote:
> v4: Implement add-from-zero checking using a conditional compare rather than
>     a conditional branch; I had omitted this from v3 due to the 10% performance
>     hit. This results in the new refcount being written back to memory
>     before the handler is invoked, which is more in line with the other checks,
>     and is apparently much easier on the branch predictor, given that there
>     is now no performance hit whatsoever.
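
[A minimal C sketch of the ordering described above, assuming C11 atomics
and made-up names (refcount_saturate, REFCOUNT_SATURATED); it illustrates
the "write back first, then saturate" semantics only, not Ard's arm64
assembly or the kernel's actual refcount_t implementation:

#include <stdatomic.h>
#include <limits.h>

/* Hypothetical saturation value; the real kernel constant differs. */
#define REFCOUNT_SATURATED	(INT_MIN / 2)

typedef struct { atomic_int refs; } refcount_t;

/* Out-of-line handler: pin the counter at the saturated value. */
static void refcount_saturate(refcount_t *r)
{
	atomic_store(&r->refs, REFCOUNT_SATURATED);
	/* a real implementation would also warn here */
}

/*
 * The new count is written back to memory unconditionally; only then is
 * the handler invoked, if the old value was zero (increment-from-zero)
 * or the result went negative (overflow).
 */
static void refcount_add(int n, refcount_t *r)
{
	int old = atomic_fetch_add(&r->refs, n);
	int newval = (int)((unsigned int)old + (unsigned int)n);

	if (old == 0 || newval < 0)
		refcount_saturate(r);
}

static void refcount_inc(refcount_t *r)
{
	refcount_add(1, r);
}
]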

So refcount_inc() and refcount_add(n, ...) will write 1 and n
respectively, then hit the handler to saturate? That seems entirely
fine to me: checking inc-from-zero is just a protection against a
possible double-free condition. It's still technically a race, but a
narrow race on a rare condition is far better than one an attacker can
always win.
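
[To make the double-free scenario concrete, here is a hedged usage sketch
building on the hypothetical helpers above; refcount_dec is added here,
again with no claim of matching the real kernel API:

/* Companion decrement for the sketch above (underflow check omitted). */
static void refcount_dec(refcount_t *r)
{
	atomic_fetch_sub(&r->refs, 1);
}

/*
 * The second put is the bug: it drops the count to zero and typically
 * frees the object.  A later get then increments from zero; the new
 * value (1) is still stored, but the handler saturates the counter
 * right afterwards instead of letting the stale object be resurrected
 * and freed a second time.
 */
static void buggy_path(refcount_t *r)
{
	refcount_inc(r);	/* take a reference		*/
	refcount_dec(r);	/* put it ...			*/
	refcount_dec(r);	/* ... and put it again (bug)	*/
	refcount_inc(r);	/* increment-from-zero detected	*/
}
]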

Nice!

-Kees

-- 
Kees Cook
Pixel Security
