Message-ID: <CAGXu5j+QU7OXWA_oMaLUsDnS4yyfNx8-1f_ywHX=o88kqE7=JQ@mail.gmail.com>
Date: Fri, 21 Jul 2017 20:33:17 -0700
From: Kees Cook <keescook@...omium.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Ingo Molnar <mingo@...nel.org>, Peter Zijlstra <peterz@...radead.org>, 
	Josh Poimboeuf <jpoimboe@...hat.com>, Christoph Hellwig <hch@...radead.org>, 
	"Eric W. Biederman" <ebiederm@...ssion.com>, Jann Horn <jannh@...gle.com>, 
	Eric Biggers <ebiggers3@...il.com>, Elena Reshetova <elena.reshetova@...el.com>, 
	Hans Liljestrand <ishkamiel@...il.com>, Greg KH <gregkh@...uxfoundation.org>, 
	Alexey Dobriyan <adobriyan@...il.com>, "Serge E. Hallyn" <serge@...lyn.com>, arozansk@...hat.com, 
	Davidlohr Bueso <dave@...olabs.net>, Manfred Spraul <manfred@...orfullife.com>, 
	"axboe@...nel.dk" <axboe@...nel.dk>, James Bottomley <James.Bottomley@...senpartnership.com>, 
	"x86@...nel.org" <x86@...nel.org>, Arnd Bergmann <arnd@...db.de>, "David S. Miller" <davem@...emloft.net>, 
	Rik van Riel <riel@...hat.com>, LKML <linux-kernel@...r.kernel.org>, 
	linux-arch <linux-arch@...r.kernel.org>, 
	"kernel-hardening@...ts.openwall.com" <kernel-hardening@...ts.openwall.com>
Subject: Re: [PATCH v6 0/2] x86: Implement fast refcount overflow protection

On Fri, Jul 21, 2017 at 2:22 PM, Andrew Morton
<akpm@...ux-foundation.org> wrote:
> On Thu, 20 Jul 2017 11:11:06 +0200 Ingo Molnar <mingo@...nel.org> wrote:
>
>>
>> * Kees Cook <keescook@...omium.org> wrote:
>>
>> > This implements refcount_t overflow protection on x86 without a noticeable
>> > performance impact, though without the fuller checking of REFCOUNT_FULL.
>> > This is done by duplicating the existing atomic_t refcount implementation
>> > but with, normally, a single added instruction that detects whether the
>> > refcount has gone negative (i.e. wrapped past INT_MAX or below zero). When
>> > detected, the handler saturates the refcount_t to INT_MIN / 2. With this
>> > overflow protection, the erroneous reference release that would follow
>> > a wrap back to zero is blocked from happening, avoiding the class of
>> > refcount-over-increment use-after-free vulnerabilities entirely.
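
(A rough plain-C sketch of the check-and-saturate semantics described
above, ignoring atomicity and the real x86 fast path; the demo_* names
are illustrative, not code from the series:)

  #include <limits.h>

  typedef struct { int counter; } demo_refcount_t;

  #define DEMO_SATURATED (INT_MIN / 2)

  static void demo_refcount_inc(demo_refcount_t *r)
  {
          /* Stand-in for the atomic increment in the real code; the
           * kernel builds with -fno-strict-overflow, so the signed
           * wrap here is well defined. */
          int new = ++r->counter;

          /* A single sign check after the operation: a negative result
           * means the count wrapped past INT_MAX (or was already
           * saturated), so pin it at INT_MIN / 2 instead of letting it
           * wrap back toward zero and enable a premature free. */
          if (new < 0)
                  r->counter = DEMO_SATURATED;
  }
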
>> >
>> > Only the overflow case of refcounting can be perfectly protected, since it
>> > can be detected and stopped before the reference is freed and left to be
>> > abused by an attacker. This implementation also notices some of the "dec
>> > to 0 without test", and "below 0" cases. However, these only indicate that
>> > a use-after-free may have already happened. Such notifications are likely
>> > avoidable by an attacker that has already exploited a use-after-free
>> > vulnerability, but it's better to have them than allow such conditions to
>> > remain universally silent.
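
(For context, the usual refcount pattern these checks are guarding
looks roughly like this; struct foo and foo_put() are hypothetical,
not code from the series:)

  #include <linux/refcount.h>
  #include <linux/slab.h>

  struct foo {
          refcount_t refs;
          /* ... object data ... */
  };

  static void foo_put(struct foo *f)
  {
          /* Normal pattern: only the last reference holder frees. */
          if (refcount_dec_and_test(&f->refs))
                  kfree(f);
  }

  /*
   * A bare refcount_dec(), by contrast, is expected never to reach
   * zero (some other reference should still pin the object), which is
   * why reaching zero or dropping below it there is reported as a
   * likely bug rather than silently ignored.
   */
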
>> >
>> > On first overflow detection, the refcount value is reset to INT_MIN / 2
>> > (which serves as a saturation value), the offending process is killed,
>> > and a report and stack trace are produced. When operations detect only
>> > negative value results (such as changing an already saturated value),
>> > saturation still happens but no notification is performed (since the
>> > value was already saturated).
>> >
>> > On the matter of races, since the entire range beyond INT_MAX but before
>> > 0 is negative, every operation at INT_MIN / 2 will trap, leaving no
>> > overflow-only race condition.
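
(Concretely, and only as a standalone sanity check rather than kernel
code: with a 32-bit int the first overflowing increment lands on
INT_MIN, and the saturation value INT_MIN / 2 = -1073741824 sits about
a billion operations away from both 0 and INT_MIN, so racing unchecked
operations cannot move the value out of the negative range before a
checked one traps:)

  #include <assert.h>
  #include <limits.h>

  int main(void)
  {
          int saturated = INT_MIN / 2;    /* -1073741824 */

          /* Even an enormous number of racing increments or decrements
           * from the saturation point leave the value negative, so any
           * subsequent checked operation still sees the sign bit set
           * and traps. */
          assert(saturated + 1000000000 < 0);
          assert(saturated - 1000000000 < 0);
          return 0;
  }
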
>> >
>> > As for performance, this implementation adds a single "js" instruction
>> > to the regular execution flow of a copy of the standard atomic_t refcount
>> > operations. (The non-"and_test" refcount_dec() function, which is uncommon
>> > in regular refcount design patterns, has an additional "jz" instruction
>> > to detect reaching exactly zero.) Since this is a forward jump, it is by
>> > default the non-predicted path, which will be reinforced by dynamic branch
>> > prediction. The result is this protection having virtually no measurable
>> > change in performance over standard atomic_t operations. The error path,
>> > located in .text.unlikely, saves the refcount location and then uses UD0
>> > to fire a refcount exception handler, which resets the refcount, handles
>> > reporting, and returns to regular execution. This keeps the changes to
>> > .text size minimal, avoiding return jumps and open-coded calls to the
>> > error reporting routine.
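
(A minimal GCC inline-asm sketch of what such a fast path can look
like; it hand-waves the exception-table wiring, the handoff of the
counter address to the handler, and the reporting code, and it uses
ud2 only to keep the example assemblable where the patch as described
uses UD0:)

  static inline void demo_refcount_inc(int *counter)
  {
          asm volatile("lock incl %[cnt]\n\t"
                       "js 1f\n\t"                /* sign set => wrapped */
                       ".pushsection .text.unlikely\n"
                       "1:\tud2\n\t"              /* out-of-line trap; the real
                                                   * handler saturates, reports
                                                   * and resumes execution */
                       ".popsection"
                       : [cnt] "+m" (*counter)
                       : : "cc");
  }
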
>>
>> Pretty nice!
>>
>
> Yes, this is a relief.
>
> Do we have a feeling for how feasible/difficult it will be for other
> architectures to implement such a thing?

The PaX atomic_t overflow protection this is heavily based on was
ported to a number of architectures (arm, powerpc, mips, sparc), so I
suspect it shouldn't be too hard to adapt those for the narrower
refcount_t protection:
https://forums.grsecurity.net/viewtopic.php?f=7&t=4173

And an arm64 port of the fast refcount_t protection is already happening too.
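
For architectures that don't grow an asm fast path, roughly the same
saturating behaviour can be built from a compare-and-swap loop; a
rough sketch of that idea (not taken from any particular port, and
slower than the x86 version):

  #include <linux/atomic.h>
  #include <linux/bug.h>
  #include <linux/kernel.h>

  static inline void demo_refcount_inc(atomic_t *r)
  {
          int old = atomic_read(r);
          int new;

          do {
                  new = old + 1;
                  if (new < 0) {          /* wrapped or already saturated */
                          atomic_set(r, INT_MIN / 2);
                          WARN_ONCE(1, "refcount overflow detected\n");
                          return;
                  }
          } while (!atomic_try_cmpxchg(r, &old, new));
  }
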

-Kees

-- 
Kees Cook
Pixel Security
