Message-ID: <4efbcefe-78fb-b7db-affd-ad86f9e9b0ee@iogearbox.net>
Date: Thu, 21 Jun 2018 23:23:11 +0200
From: Daniel Borkmann <daniel@...earbox.net>
To: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
 "jannh@...gle.com" <jannh@...gle.com>,
 "keescook@...omium.org" <keescook@...omium.org>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
 "Van De Ven, Arjan" <arjan.van.de.ven@...el.com>,
 "tglx@...utronix.de" <tglx@...utronix.de>,
 "linux-mm@...ck.org" <linux-mm@...ck.org>, "x86@...nel.org"
 <x86@...nel.org>, "Accardi, Kristen C" <kristen.c.accardi@...el.com>,
 "hpa@...or.com" <hpa@...or.com>, "mingo@...hat.com" <mingo@...hat.com>,
 "kernel-hardening@...ts.openwall.com" <kernel-hardening@...ts.openwall.com>,
 "Hansen, Dave" <dave.hansen@...el.com>
Subject: Re: [PATCH 0/3] KASLR feature to randomize each loadable module

On 06/21/2018 08:59 PM, Edgecombe, Rick P wrote:
> On Thu, 2018-06-21 at 15:37 +0200, Jann Horn wrote:
>> On Thu, Jun 21, 2018 at 12:34 AM Kees Cook <keescook@...omium.org>
>> wrote:
>>> And most systems have <200 modules, really. I have 113 on a desktop
>>> right now, 63 on a server. So this looks like a trivial win.
>> But note that the eBPF JIT also uses module_alloc(). Every time a BPF
>> program (this includes seccomp filters!) is JIT-compiled by the
>> kernel, another module_alloc() allocation is made. For example, on my
>> desktop machine, I have a bunch of seccomp-sandboxed processes thanks
>> to Chrome. If I enable the net.core.bpf_jit_enable sysctl and open a
>> few Chrome tabs, BPF JIT allocations start showing up between
>> modules:
>>
>> # grep -C1 bpf_jit_binary_alloc /proc/vmallocinfo | cut -d' ' -f 2-
>>   20480 load_module+0x1326/0x2ab0 pages=4 vmalloc N0=4
>>   12288 bpf_jit_binary_alloc+0x32/0x90 pages=2 vmalloc N0=2
>>   20480 load_module+0x1326/0x2ab0 pages=4 vmalloc N0=4
>> --
>>   20480 load_module+0x1326/0x2ab0 pages=4 vmalloc N0=4
>>   12288 bpf_jit_binary_alloc+0x32/0x90 pages=2 vmalloc N0=2
>>   36864 load_module+0x1326/0x2ab0 pages=8 vmalloc N0=8
>> --
>>   20480 load_module+0x1326/0x2ab0 pages=4 vmalloc N0=4
>>   12288 bpf_jit_binary_alloc+0x32/0x90 pages=2 vmalloc N0=2
>>   40960 load_module+0x1326/0x2ab0 pages=9 vmalloc N0=9
>> --
>>   20480 load_module+0x1326/0x2ab0 pages=4 vmalloc N0=4
>>   12288 bpf_jit_binary_alloc+0x32/0x90 pages=2 vmalloc N0=2
>>  253952 load_module+0x1326/0x2ab0 pages=61 vmalloc N0=61
>>
>> If you use Chrome with Site Isolation, have a few dozen open tabs,
>> and the BPF JIT is enabled, reaching a few hundred allocations might
>> not be that hard.
>>
>> Also: What's the impact on memory usage? Is this going to increase the
>> number of pagetables that need to be allocated by the kernel per
>> module_alloc() by 4K or 8K or so?
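
To reproduce this without Chrome, here is a minimal sketch (the file name and
the trivial allow-all filter are only for illustration, not from this thread)
that installs a one-instruction seccomp filter. With net.core.bpf_jit_enable=1,
each process that keeps such a filter alive should show up as an extra
bpf_jit_binary_alloc entry in /proc/vmallocinfo:

  /* seccomp_jit_demo.c (hypothetical name), illustrative only */
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/prctl.h>
  #include <linux/filter.h>
  #include <linux/seccomp.h>

  int main(void)
  {
          /* Allow-all filter; we only care about the JIT image the
           * kernel allocates for it. */
          struct sock_filter insns[] = {
                  BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
          };
          struct sock_fprog prog = {
                  .len = sizeof(insns) / sizeof(insns[0]),
                  .filter = insns,
          };

          if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) ||
              prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
                  perror("prctl");
                  return 1;
          }

          printf("filter installed; check /proc/vmallocinfo as root\n");
          pause();        /* keep the filter (and its JIT image) alive */
          return 0;
  }
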
> Thanks, it seems it might require some extra memory.  I'll look into it
> to find out exactly how much.
> 
> I didn't include eBPF programs in the randomization estimates, but it
> looks like they are usually smaller than a page. So, with the slight
> leap that the estimate based on the larger normal modules is the worst
> case, you should still get ~800 modules at 18 bits. After that it will
> start to go down to 10 bits, so in either case it at least won't
> regress the randomness of the existing algorithm.
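
As a rough sanity check on the 18 bits figure: assuming the x86-64 module
mapping space is 1 GiB and an allocation can start at any 4 KiB page within
it (both are my assumptions, not taken from the patch), the slot count works
out to 2^18:

  #include <stdio.h>

  int main(void)
  {
          unsigned long modules_len = 1UL << 30;  /* 1 GiB module area (assumed) */
          unsigned long slot_size   = 1UL << 12;  /* 4 KiB placement granularity (assumed) */

          /* 262144 candidate start slots == 2^18, i.e. ~18 bits for one
           * fully random placement. */
          printf("%lu slots\n", modules_len / slot_size);
          return 0;
  }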

Assume typically complex (real) programs are around 2.5k BPF insns today.
In our case it's at most a handful per net device, thus roughly per netns
(veth), of which there can be a few hundred. The worst case is the 4k insns
that BPF allows and then JITs. There's also a BPF kselftest suite you could
run to check on worst-case upper bounds.
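
For sizing, the relevant detail is that each JITed program gets its own
page-rounded chunk of the module area. A condensed sketch of the allocation
path, paraphrased from kernel/bpf/core.c of that era (simplified, not the
verbatim source):

  struct bpf_binary_header *
  bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr,
                       unsigned int alignment,
                       bpf_jit_fill_hole_t bpf_fill_ill_insns)
  {
          struct bpf_binary_header *hdr;
          unsigned int size, hole, start;

          /* Round the emitted image plus header and some slack up to
           * whole pages; even a one-insn seccomp filter costs a page. */
          size = round_up(proglen + sizeof(*hdr) + 128, PAGE_SIZE);

          hdr = module_alloc(size);       /* same pool as loadable modules */
          if (hdr == NULL)
                  return NULL;

          /* Poison the unused tail with illegal instructions. */
          bpf_fill_ill_insns(hdr, size);
          hdr->pages = size / PAGE_SIZE;

          /* Start the image at a random offset inside the slack. */
          hole  = min_t(unsigned int, size - (proglen + sizeof(*hdr)),
                        PAGE_SIZE - sizeof(*hdr));
          start = (get_random_int() % hole) & ~(alignment - 1);
          *image_ptr = &hdr->image[start];

          return hdr;
  }

So the per-program cost in the module area scales with the emitted image
length: a trivial seccomp filter is a single page, and a program near the 4k
insn limit takes correspondingly more, depending on how much x86 code the
JIT emits for it.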
