Message-ID: <20190227104407.GA18804@openwall.com>
Date: Wed, 27 Feb 2019 11:44:07 +0100
From: Solar Designer <solar@...nwall.com>
To: Kees Cook <keescook@...omium.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <peterz@...radead.org>, Jann Horn <jannh@...gle.com>,
	Sean Christopherson <sean.j.christopherson@...el.com>,
	Dominik Brodowski <linux@...inikbrodowski.net>,
	Kernel Hardening <kernel-hardening@...ts.openwall.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] x86/asm: Pin sensitive CR0 bits

On Tue, Feb 26, 2019 at 03:36:45PM -0800, Kees Cook wrote:
>  static inline void native_write_cr0(unsigned long val)
>  {
> -	asm volatile("mov %0,%%cr0": : "r" (val), "m" (__force_order));
> +	bool warn = false;
> +
> +again:
> +	val |= X86_CR0_WP;
> +	/*
> +	 * In order to have the compiler not optimize away the check
> +	 * in the WARN_ONCE(), mark "val" as being also an output ("+r")

This comment is now slightly out of date: the check is no longer "in the
WARN_ONCE()", but in the if statement following the asm() block.  Ditto
about the corresponding comment for CR4.

> +	 * by this asm() block so it will perform an explicit check, as
> +	 * if it were "volatile".
> +	 */
> +	asm volatile("mov %0,%%cr0": "+r" (val) : "m" (__force_order) : );
> +	/*
> +	 * If the MOV above was used directly as a ROP gadget we can
> +	 * notice the lack of pinned bits in "val" and start the function
> +	 * from the beginning to gain the WP bit for sure. And do it
> +	 * without first taking the exception for a WARN().
> +	 */
> +	if ((val & X86_CR0_WP) != X86_CR0_WP) {
> +		warn = true;
> +		goto again;
> +	}
> +	WARN_ONCE(warn, "Attempt to unpin X86_CR0_WP, cr0 bypass attack?!\n");
>  }
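
For CR4, I assume the pinning ends up looking roughly like this (an
untested sketch on my side; "cr4_pin" as a boot-time-initialized mask
of e.g. X86_CR4_SMEP | X86_CR4_SMAP is my guess, not necessarily the
naming in your other patches):

static inline void native_write_cr4(unsigned long val)
{
	bool warn = false;

again:
	/* Assumed here: "cr4_pin" holds bits (e.g. SMEP/SMAP) chosen
	 * at boot that must never be cleared afterwards. */
	val |= cr4_pin;
	/* As for CR0, "+r" keeps the compiler from dropping the
	 * check below. */
	asm volatile("mov %0,%%cr4": "+r" (val) : "m" (__force_order) : );
	/*
	 * If the MOV above was reached as a ROP gadget, the pinned
	 * bits may be missing from "val"; restart from the beginning
	 * to restore them before warning.
	 */
	if ((val & cr4_pin) != cr4_pin) {
		warn = true;
		goto again;
	}
	WARN_ONCE(warn, "Attempt to unpin cr4 bits, cr4 bypass attack?!\n");
}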

Alexander
