Message-ID: <CAGXu5jLENkemQiL6bcL5=qsesi-2M=1jYhS7SGBpBG64ER2CsA@mail.gmail.com>
Date: Wed, 5 Apr 2017 17:14:09 -0700
From: Kees Cook <keescook@...omium.org>
To: Andy Lutomirski <luto@...nel.org>
Cc: "kernel-hardening@...ts.openwall.com" <kernel-hardening@...ts.openwall.com>,
	Mark Rutland <mark.rutland@....com>, Hoeun Ryu <hoeun.ryu@...il.com>,
	PaX Team <pageexec@...email.hu>, Emese Revfy <re.emese@...il.com>,
	Russell King <linux@...linux.org.uk>, X86 ML <x86@...nel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-arm-kernel@...ts.infradead.org" <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [RFC v2][PATCH 04/11] x86: Implement __arch_rare_write_begin/unmap()

On Wed, Apr 5, 2017 at 4:57 PM, Andy Lutomirski <luto@...nel.org> wrote:
> On Wed, Mar 29, 2017 at 6:41 PM, Kees Cook <keescook@...omium.org> wrote:
>> On Wed, Mar 29, 2017 at 3:38 PM, Andy Lutomirski <luto@...capital.net> wrote:
>>> On Wed, Mar 29, 2017 at 11:15 AM, Kees Cook <keescook@...omium.org> wrote:
>>>> Based on PaX's x86 pax_{open,close}_kernel() implementation, this
>>>> allows HAVE_ARCH_RARE_WRITE to work on x86.
>>>>
>>>
>>>> +
>>>> +static __always_inline unsigned long __arch_rare_write_begin(void)
>>>> +{
>>>> +	unsigned long cr0;
>>>> +
>>>> +	preempt_disable();
>>>
>>> This looks wrong. DEBUG_LOCKS_WARN_ON(!irqs_disabled()) would work,
>>> as would local_irq_disable(). There's no way that just disabling
>>> preemption is enough.
>>>
>>> (Also, how does this interact with perf nmis?)
>>
>> Do you mean preempt_disable() isn't strong enough here? I'm open to
>> suggestions. The goal would be to make sure everything between _begin
>> and _end gets executed without interruption...
>>
>
> Sorry for the very slow response.
>
> preempt_disable() isn't strong enough to prevent interrupts, and an
> interrupt here would run with WP off, causing unknown havoc. I tend
> to think that the caller should be responsible for turning off
> interrupts.

So, something like:

Top-level functions:

static __always_inline void rare_write_begin(void)
{
	preempt_disable();
	local_irq_disable();
	barrier();
	__arch_rare_write_begin();
	barrier();
}

static __always_inline void rare_write_end(void)
{
	barrier();
	__arch_rare_write_end();
	barrier();
	local_irq_enable();
	preempt_enable_no_resched();
}

x86-specific helpers:

static __always_inline unsigned long __arch_rare_write_begin(void)
{
	unsigned long cr0;

	cr0 = read_cr0() ^ X86_CR0_WP;
	BUG_ON(cr0 & X86_CR0_WP);
	write_cr0(cr0);
	return cr0 ^ X86_CR0_WP;
}

static __always_inline unsigned long __arch_rare_write_end(void)
{
	unsigned long cr0;

	cr0 = read_cr0() ^ X86_CR0_WP;
	BUG_ON(!(cr0 & X86_CR0_WP));
	write_cr0(cr0);
	return cr0 ^ X86_CR0_WP;
}

I can give it a spin...

-Kees

-- 
Kees Cook
Pixel Security
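
For reference, a minimal caller sketch of the API shape proposed above. None of
this is from the patch series: the variable, the updater function, and the
assumption that the data lives in a write-protected section are illustrative
only. The point is just that rare_write_begin()/rare_write_end() bracket a
single store, with preemption and interrupts disabled for the whole window so
nothing else can run while CR0.WP is cleared on x86:

static int some_setting;	/* assumption: placed in a write-protected section */

static void update_some_setting(int val)
{
	rare_write_begin();	/* preempt off, IRQs off, CR0.WP cleared */
	some_setting = val;	/* the single "rare" write */
	rare_write_end();	/* CR0.WP restored, IRQs and preemption back on */
}

The barrier() calls in the proposed wrappers are what keep the compiler from
moving the store outside the WP-off window.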