Message-ID: <CAKv+Gu-TH6qeotkYF+w+KcVJ6pavOYYEkmLKEQTUiDVAcQ1REQ@mail.gmail.com>
Date: Mon, 31 Jul 2017 22:21:22 +0100
From: Ard Biesheuvel <ard.biesheuvel@...aro.org>
To: Kees Cook <keescook@...omium.org>
Cc: "linux-arm-kernel@...ts.infradead.org" <linux-arm-kernel@...ts.infradead.org>,
	"kernel-hardening@...ts.openwall.com" <kernel-hardening@...ts.openwall.com>,
	Will Deacon <will.deacon@....com>, Mark Rutland <mark.rutland@....com>,
	Laura Abbott <labbott@...oraproject.org>, Li Kun <hw.likun@...wei.com>
Subject: Re: [PATCH v4] arm64: kernel: implement fast refcount checking

On 31 July 2017 at 22:16, Kees Cook <keescook@...omium.org> wrote:
> On Mon, Jul 31, 2017 at 12:22 PM, Ard Biesheuvel
> <ard.biesheuvel@...aro.org> wrote:
>> v4: Implement add-from-zero checking using a conditional compare rather
>> than a conditional branch, which I omitted from v3 due to the 10%
>> performance hit: this will result in the new refcount being written back
>> to memory before invoking the handler, which is more in line with the
>> other checks, and is apparently much easier on the branch predictor,
>> given that there is no performance hit whatsoever.
>
> So refcount_inc() and refcount_add(n, ...) will write 1 and n
> respectively, then hit the handler to saturate?

Yes, but this is essentially what occurs on overflow and sub-to-zero as
well: the result is always stored before hitting the handler. Isn't this
the case for x86 as well?

> That seems entirely
> fine to me: checking inc-from-zero is just a protection against a
> possible double-free condition. It's still technically a race, but a
> narrow race on a rare condition is better than being able to always
> win it.

Indeed.

> Nice!

Thanks!