Message-ID: <56A682B5.8000603@intel.com>
Date: Mon, 25 Jan 2016 12:16:53 -0800
From: Dave Hansen <dave.hansen@...el.com>
To: kernel-hardening@...ts.openwall.com,
 Andrew Morton <akpm@...ux-foundation.org>,
 "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
 Vlastimil Babka <vbabka@...e.cz>, Michal Hocko <mhocko@...e.com>
Cc: Laura Abbott <labbott@...oraproject.org>, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, Kees Cook <keescook@...omium.org>
Subject: Re: [RFC][PATCH 3/3] mm/page_poisoning.c: Allow
 for zero poisoning

Thanks for doing this!  It all looks pretty straightforward.

On 01/25/2016 08:55 AM, Laura Abbott wrote:
> By default, page poisoning uses a poison value (0xaa) on free. If this
> is changed to 0, the page is not only sanitized but zeroing on alloc
> with __GFP_ZERO can be skipped as well. The tradeoff is that corruption
> from the poisoning is harder to detect. This feature also
> cannot be used with hibernation since pages are not guaranteed to be
> zeroed after hibernation.
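
(For anyone skimming, the scheme reduces to roughly this -- a toy
userspace sketch of the idea, not the actual mm/ code, with all names
invented:)

	#include <stdbool.h>
	#include <stddef.h>
	#include <string.h>

	#define POISON_FREE_DEFAULT	0xaa

	static unsigned char poison_val = POISON_FREE_DEFAULT;

	/* Free path: sanitize the page's contents with the poison value. */
	static void poison_on_free(unsigned char *page, size_t size)
	{
		memset(page, poison_val, size);
	}

	/*
	 * Alloc path: a caller asking for a zeroed page only needs an
	 * explicit clear when the poison value is nonzero.  With
	 * poison_val == 0, the free-time memset already did the work.
	 */
	static void prep_on_alloc(unsigned char *page, size_t size,
				  bool want_zero)
	{
		if (want_zero && poison_val != 0)
			memset(page, 0, size);
	}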

Ugh, that's a good point about hibernation.  I'm not sure how heavily it
gets used, but it does look pretty widely enabled in distribution kernels.

Is this something that's fixable?  It seems like we could have the
hibernation code run through and zero all the free lists (rough sketch
below).  Or, we could just disable the optimization at runtime whenever
a hibernation is performed.
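
Roughly, mirroring the way mark_free_pages() already walks the free
lists for hibernation -- a completely untested sketch, with the
function name made up:

	static void hibernate_zero_free_pages(void)
	{
		struct zone *zone;
		unsigned int order, t;
		struct page *page;
		int i;

		for_each_populated_zone(zone) {
			/* zone->lock and the usual pfn checks elided */
			for_each_migratetype_order(order, t) {
				list_for_each_entry(page,
						&zone->free_area[order].free_list[t],
						lru) {
					for (i = 0; i < (1 << order); i++)
						clear_highpage(page + i);
				}
			}
		}
	}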

Not that we _have_ to do any of this now, but a runtime knob (like a
sysctl) could be fun too.  It would be nice for folks to be able to turn
it on and off if they wanted the added security of "real" poisoning vs.
the potential performance boost from this optimization.
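
Strawman for the knob, e.g. something like this (sketch only; the
sysctl name and wiring here are invented, not part of the patch):

	static int sysctl_zero_poisoning;
	static int zero;
	static int one = 1;

	static struct ctl_table poison_sysctls[] = {
		{
			.procname	= "zero_poisoning",
			.data		= &sysctl_zero_poisoning,
			.maxlen		= sizeof(int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec_minmax,
			.extra1		= &zero,	/* min: plain 0xaa poison */
			.extra2		= &one,		/* max: poison with zero */
		},
		{ }
	};

The allocator would then check sysctl_zero_poisoning instead of (or in
addition to) the compile-time CONFIG_PAGE_POISONING_ZERO switch.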

> +static inline bool should_zero(void)
> +{
> +	return !IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) ||
> +		!page_poisoning_enabled();
> +}

I wonder if calling this "free_pages_prezeroed()" would make things a
bit more clear when we use it in prep_new_page().

>  static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
>  								int alloc_flags)
>  {
> @@ -1401,7 +1407,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
>  	kernel_map_pages(page, 1 << order, 1);
>  	kasan_alloc_pages(page, order);
>  
> -	if (gfp_flags & __GFP_ZERO)
> +	if (should_zero() && gfp_flags & __GFP_ZERO)
>  		for (i = 0; i < (1 << order); i++)
>  			clear_highpage(page + i);
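
Concretely, I'm thinking something like this (untested; the sense is
inverted to match the new name):

	static inline bool free_pages_prezeroed(void)
	{
		return IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
			page_poisoning_enabled();
	}

with the check above becoming:

	if (!free_pages_prezeroed() && (gfp_flags & __GFP_ZERO))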

It's probably also worth pointing out that this can be a really nice
feature to have in virtual machines where memory is being deduplicated.
As it stands now, the free lists end up with gunk in them and tend not
to be easy to deduplicate.  This patch would fix that.
