Message-ID: <CAFJ0LnENnrpVA_SdngGxeShsmxq9Mvc0h9EH1=8vEP=hFFnt1g@mail.gmail.com>
Date: Tue, 26 Jul 2016 13:41:09 -0700
From: Nick Kralevich <nnk@...gle.com>
To: "Roberts, William C" <william.c.roberts@...el.com>
Cc: jason@...edaemon.net, linux-mm@...r.kernel.org, 
	lkml <linux-kernel@...r.kernel.org>, kernel-hardening@...ts.openwall.com, 
	Andrew Morton <akpm@...ux-foundation.org>, Kees Cook <keescook@...omium.org>, 
	Greg KH <gregkh@...uxfoundation.org>, Jeffrey Vander Stoep <jeffv@...gle.com>, salyzyn@...roid.com, 
	Daniel Cashman <dcashman@...roid.com>
Subject: Re: [PATCH] [RFC] Introduce mmap randomization

My apologies in advance if I misunderstand the purpose of this patch.

IIUC, this patch adds a random gap between various mmap() mappings,
with the goal of ensuring that both the mmap base address and gaps
between pages are randomized.

If that's the goal, please note that this behavior has caused
significant performance problems for Android in the past. Specifically,
random gaps between mmap()ed regions cause address space
fragmentation. After a program has run for a long time, it becomes
impossible to find large contiguous blocks of free address space, and
mmap() calls fail because no large enough region remains.
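
To make the failure mode concrete, here's a hypothetical simulation
(my sketch, not code from this thread): scatter page-sized "mappings"
at random across a 3 GiB address range, then measure the largest
contiguous hole that remains.

  #include <stdio.h>
  #include <stdlib.h>

  #define SPACE  (3UL << 30)             /* 3 GiB of address space   */
  #define PAGE   4096UL
  #define NPAGES (SPACE / PAGE)
  #define NMAPS  100000UL                /* randomly placed mappings */

  int main(void)
  {
      static unsigned char used[NPAGES];
      unsigned long i, run = 0, best = 0;

      srandom(1);
      for (i = 0; i < NMAPS; i++)
          used[random() % NPAGES] = 1;

      /* The longest run of free pages bounds the largest mmap(). */
      for (i = 0; i < NPAGES; i++) {
          run = used[i] ? 0 : run + PAGE;
          if (run > best)
              best = run;
      }
      printf("largest hole: %lu KiB out of %lu MiB\n",
             best >> 10, SPACE >> 20);
      return 0;
  }

Roughly 400 MiB worth of pages are placed, yet the largest surviving
hole is typically well under a megabyte: random placement destroys
large contiguous regions much faster than it consumes space.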

This isn't just a theoretical concern. Android actually hit this on
kernels prior to
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=7dbaa466780a754154531b44c2086f6618cee3a8
. Before that patch, the gaps between mmap()ed pages were randomized.
See the discussion at:

  http://lists.infradead.org/pipermail/linux-arm-kernel/2011-November/073082.html
  http://marc.info/?t=132070957400005&r=1&w=2

We ended up having to work around this problem in the following commits:

  https://android.googlesource.com/platform/dalvik/+/311886c6c6fcd3b531531f592d56caab5e2a259c
  https://android.googlesource.com/platform/art/+/51e5386
  https://android.googlesource.com/platform/art/+/f94b781

If this behavior were re-introduced, it would likely cause
hard-to-reproduce problems, and I suspect Android-based distributions
would tend to disable this feature, either globally or for
applications that make a large number of mmap() calls.

-- Nick



On Tue, Jul 26, 2016 at 11:22 AM,  <william.c.roberts@...el.com> wrote:
> From: William Roberts <william.c.roberts@...el.com>
>
> This patch introduces the ability to randomize mmap locations where
> the address is not requested, for instance when ld is allocating
> pages for shared libraries. Whether to randomize is based on the
> current ASLR personality.
>
> Currently, allocations are placed sequentially within gaps in the
> unmapped address space. Depending on the allocation scheme, this
> happens either top-down or bottom-up.
>
> For instance, these mmap() calls produce contiguous mappings:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40026000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40027000
>
> Note that there is no gap between them.
>
> After the patch:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x400b4000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40055000
>
> Note the gap between them.
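>
> For reference, a minimal standalone program (illustrative, not part
> of the patch) that makes the same two calls and prints the returned
> addresses:
>
>   #include <stdio.h>
>   #include <unistd.h>
>   #include <sys/mman.h>
>
>   int main(void)
>   {
>       size_t size = (size_t)getpagesize();
>       int prot = PROT_READ | PROT_WRITE;
>
>       void *a = mmap(NULL, size, prot, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
>       void *b = mmap(NULL, size, prot, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
>
>       /* Typically adjacent on an unpatched kernel; gapped with the patch. */
>       printf("%p\n%p\n", a, b);
>       return 0;
>   }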
>
> Using the test program mentioned here, which allocates fixed-size
> blocks until exhaustion:
> https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
> no difference was noticed in the number of successful allocations.
> Counts varied from run to run, but patched and un-patched runs were
> always within a few allocations of one another.
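>
> (A sketch of such an exhaustion test, assuming the same idea as the
> linked program rather than its exact code; most meaningful on 32-bit,
> where the address space is small:)
>
>   #include <stdio.h>
>   #include <sys/mman.h>
>
>   #define BLOCK (1UL << 20)   /* fixed 1 MiB blocks */
>
>   int main(void)
>   {
>       unsigned long n = 0;
>
>       /* Reserve fixed-size blocks until the address space runs out. */
>       while (mmap(NULL, BLOCK, PROT_NONE,
>                   MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) != MAP_FAILED)
>           n++;
>
>       printf("%lu x 1 MiB allocations before exhaustion\n", n);
>       return 0;
>   }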
>
> Performance Measurements:
> Using strace with the -T option and filtering for mmap on the
> program ls shows a slowdown of approximately 3.7%.
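>
> (The exact invocation is not given; presumably something along these
> lines, with -T printing the time spent in each call:)
>
>   strace -T -e trace=mmap ls > /dev/null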
>
> Signed-off-by: William Roberts <william.c.roberts@...el.com>
> ---
>  mm/mmap.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index de2c176..7891272 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -43,6 +43,7 @@
>  #include <linux/userfaultfd_k.h>
>  #include <linux/moduleparam.h>
>  #include <linux/pkeys.h>
> +#include <linux/random.h>
>
>  #include <asm/uaccess.h>
>  #include <asm/cacheflush.h>
> @@ -1582,6 +1583,24 @@ unacct_error:
>         return error;
>  }
>
> +/*
> + * Generate a random address within a range. Unlike randomize_addr(), this
> + * randomizes on len-sized chunks, which helps prevent VM map fragmentation.
> + */
> +static unsigned long randomize_mmap(unsigned long start, unsigned long end, unsigned long len)
> +{
> +       unsigned long slots;
> +
> +       if ((current->personality & ADDR_NO_RANDOMIZE) || !randomize_va_space)
> +               return 0;
> +
> +       slots = (end - start) / len;
> +       if (!slots)
> +               return 0;
> +
> +       return PAGE_ALIGN(start + ((get_random_long() % slots) * len));
> +}
> +
>  unsigned long unmapped_area(struct vm_unmapped_area_info *info)
>  {
>         /*
> @@ -1676,6 +1695,8 @@ found:
>         if (gap_start < info->low_limit)
>                 gap_start = info->low_limit;
>
> +       gap_start = randomize_mmap(gap_start, gap_end, length) ? : gap_start;
> +
>         /* Adjust gap address to the desired alignment */
>         gap_start += (info->align_offset - gap_start) & info->align_mask;
>
> @@ -1775,6 +1796,9 @@ found:
>  found_highest:
>         /* Compute highest gap address at the desired alignment */
>         gap_end -= info->length;
> +
> +       gap_end = randomize_mmap(gap_start, gap_end, length) ? : gap_end;
> +
>         gap_end -= (gap_end - info->align_offset) & info->align_mask;
>
>         VM_BUG_ON(gap_end < info->low_limit);
> --
> 1.9.1
>



-- 
Nick Kralevich | Android Security | nnk@...gle.com | 650.214.4037
