Message-ID: <20160114185701.GA28181@leverpostej>
Date: Thu, 14 Jan 2016 18:57:05 +0000
From: Mark Rutland <mark.rutland@....com>
To: Ard Biesheuvel <ard.biesheuvel@...aro.org>
Cc: Kees Cook <keescook@...omium.org>, Arnd Bergmann <arnd@...db.de>,
	kernel-hardening@...ts.openwall.com,
	Sharma Bhupesh <bhupesh.sharma@...escale.com>,
	Catalin Marinas <catalin.marinas@....com>,
	Will Deacon <will.deacon@....com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Leif Lindholm <leif.lindholm@...aro.org>,
	Stuart Yoder <stuart.yoder@...escale.com>,
	Marc Zyngier <marc.zyngier@....com>,
	Christoffer Dall <christoffer.dall@...aro.org>,
	"linux-arm-kernel@...ts.infradead.org" <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH v3 07/21] arm64: move kernel image to base of vmalloc area

On Wed, Jan 13, 2016 at 01:51:10PM +0000, Mark Rutland wrote:
> On Wed, Jan 13, 2016 at 09:39:41AM +0100, Ard Biesheuvel wrote:
> > On 12 January 2016 at 19:14, Mark Rutland <mark.rutland@....com> wrote:
> > > On Mon, Jan 11, 2016 at 02:19:00PM +0100, Ard Biesheuvel wrote:
> > >>  void __init kasan_init(void)
> > >>  {
> > >> +     u64 kimg_shadow_start, kimg_shadow_end;
> > >>       struct memblock_region *reg;
> > >>
> > >> +     kimg_shadow_start = round_down((u64)kasan_mem_to_shadow(_text),
> > >> +                                    SWAPPER_BLOCK_SIZE);
> > >> +     kimg_shadow_end = round_up((u64)kasan_mem_to_shadow(_end),
> > >> +                                SWAPPER_BLOCK_SIZE);
> > >
> > > This rounding looks suspect to me, given it's applied to the shadow
> > > addresses rather than the kimage addresses. That's roughly equivalent to
> > > kasan_mem_to_shadow(round_up(_end, 8 * SWAPPER_BLOCK_SIZE)).
> > >
> > > I don't think we need any rounding for the kimage addresses. The image
> > > end is page-granular (and the fine-grained mapping will reflect that).
> > > Any accesses between _end and round_up(_end, SWAPPER_BLOCK_SIZE) would be
> > > bugs (and would most likely fault) regardless of KASAN.
> > >
> > > Or am I just being thick here?
> > >
> > 
> > Well, the problem here is that vmemmap_populate() is used as a
> > surrogate vmalloc() since that is not available yet, and
> > vmemmap_populate() allocates in SWAPPER_BLOCK_SIZE granularity.
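
To spell out the equivalence I claimed above (a sketch, assuming the
generic kasan_mem_to_shadow() from include/linux/kasan.h, i.e.
shadow(x) == (x >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET with
a scale shift of 3, and assuming KASAN_SHADOW_OFFSET is itself
SWAPPER_BLOCK_SIZE-aligned):

	round_down(shadow(x), SWAPPER_BLOCK_SIZE)
	    == ((x >> 3) & ~(SWAPPER_BLOCK_SIZE - 1)) + KASAN_SHADOW_OFFSET
	    == ((x & ~((8 * SWAPPER_BLOCK_SIZE) - 1)) >> 3) + KASAN_SHADOW_OFFSET
	    == shadow(round_down(x, 8 * SWAPPER_BLOCK_SIZE))

and likewise for round_up(), which is why rounding the shadow addresses
is roughly equivalent to rounding _text and _end to 8 *
SWAPPER_BLOCK_SIZE.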

From a look at the git history, and a chat with Catalin, it sounds like
the SWAPPER_BLOCK_SIZE granularity is a historical artifact. It happened
to be easier to implement it that way at some point in the past, but
there's no reason the 4K/16K/64K cases can't all be handled by the same
code that would go down to PAGE_SIZE granularity, using sections if
possible.
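
Roughly what I have in mind (an untested, hypothetical sketch; the
vmemmap_*_populate() helpers are the generic ones from
mm/sparse-vmemmap.c, and an allocation failure on the section path just
degrades to page mappings):

	int __meminit vmemmap_populate(unsigned long start, unsigned long end,
				       int node)
	{
		unsigned long addr;

		for (addr = start; addr < end; addr += PAGE_SIZE) {
			pgd_t *pgd = vmemmap_pgd_populate(addr, node);
			pud_t *pud;
			pmd_t *pmd;

			if (!pgd)
				return -ENOMEM;

			pud = vmemmap_pud_populate(pgd, addr, node);
			if (!pud)
				return -ENOMEM;

			pmd = pmd_offset(pud, addr);

			/*
			 * Use a section mapping where alignment and the
			 * remaining length allow, and where we can actually
			 * get a naturally-aligned block.
			 */
			if (pmd_none(*pmd) && IS_ALIGNED(addr, PMD_SIZE) &&
			    end - addr >= PMD_SIZE) {
				void *p = vmemmap_alloc_block_buf(PMD_SIZE, node);

				if (p) {
					set_pmd(pmd, __pmd(__pa(p) | PROT_SECT_NORMAL));
					addr += PMD_SIZE - PAGE_SIZE;
					continue;
				}
			}

			/* Otherwise fall back to a single page. */
			pmd = vmemmap_pmd_populate(pud, addr, node);
			if (!pmd)
				return -ENOMEM;
			if (!vmemmap_pte_populate(pmd, addr, node))
				return -ENOMEM;
		}

		return 0;
	}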

I'll drop that on the TODO list.

> > If I remove the rounding, I get false positive kasan errors which I
> > have not quite diagnosed yet, but are probably due to the fact that
> > the rounding performed by vmemmap_populate() goes in the wrong
> > direction.

As far as I can see, it implicitly rounds the base down and the end up
to SWAPPER_BLOCK_SIZE granularity.
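
For reference, the loop in question (quoting the section-mapping path
of the current arm64 vmemmap_populate() from memory, so treat it as
approximate):

	do {
		next = pmd_addr_end(addr, end);

		pgd = vmemmap_pgd_populate(addr, node);
		if (!pgd)
			return -ENOMEM;

		pud = vmemmap_pud_populate(pgd, addr, node);
		if (!pud)
			return -ENOMEM;

		pmd = pmd_offset(pud, addr);
		if (pmd_none(*pmd)) {
			void *p = vmemmap_alloc_block_buf(PMD_SIZE, node);

			if (!p)
				return -ENOMEM;

			/*
			 * This maps the whole SWAPPER_BLOCK_SIZE block
			 * containing addr, so the base is implicitly
			 * rounded down and the end implicitly rounded up.
			 */
			set_pmd(pmd, __pmd(__pa(p) | PROT_SECT_NORMAL));
		}
	} while (addr = next, addr != end);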

I can see that it might map too much memory, but I can't see why that
should trigger KASAN failures. Regardless of what was mapped, KASAN
should stick to the region it cares about, and everything else should
stay out of that.

When do you see the failures, and are they in any way consistent?

Do you have an example to hand?

> I'll also take a peek.

I haven't managed to trigger KASAN failures with the rounding removed.
I'm using 4K pages, and running under KVM tool (no EFI, so the memory
map is a contiguous block).

What does your memory map look like?

Thanks,
Mark.
