Message-ID: <CAG_fn=UDyVpZz5=oP4HHdYCB43NnXG1sLypRXopyEk9qgq471A@mail.gmail.com>
Date: Thu, 9 May 2019 18:43:21 +0200
From: Alexander Potapenko <glider@...gle.com>
To: Kees Cook <keescook@...omium.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Christoph Lameter <cl@...ux.com>,
Laura Abbott <labbott@...hat.com>, Linux-MM <linux-mm@...ck.org>,
linux-security-module <linux-security-module@...r.kernel.org>,
Kernel Hardening <kernel-hardening@...ts.openwall.com>,
Masahiro Yamada <yamada.masahiro@...ionext.com>, James Morris <jmorris@...ei.org>,
"Serge E. Hallyn" <serge@...lyn.com>, Nick Desaulniers <ndesaulniers@...gle.com>,
Kostya Serebryany <kcc@...gle.com>, Dmitry Vyukov <dvyukov@...gle.com>, Sandeep Patil <sspatil@...roid.com>,
Randy Dunlap <rdunlap@...radead.org>, Jann Horn <jannh@...gle.com>,
Mark Rutland <mark.rutland@....com>
Subject: Re: [PATCH 1/4] mm: security: introduce init_on_alloc=1 and
init_on_free=1 boot options
From: Kees Cook <keescook@...omium.org>
Date: Wed, May 8, 2019 at 9:02 PM
To: Alexander Potapenko
Cc: Andrew Morton, Christoph Lameter, Kees Cook, Laura Abbott,
Linux-MM, linux-security-module, Kernel Hardening, Masahiro Yamada,
James Morris, Serge E. Hallyn, Nick Desaulniers, Kostya Serebryany,
Dmitry Vyukov, Sandeep Patil, Randy Dunlap, Jann Horn, Mark Rutland
> On Wed, May 8, 2019 at 8:38 AM Alexander Potapenko <glider@...gle.com> wrote:
> > The new options are needed to prevent possible information leaks and
> > make control-flow bugs that depend on uninitialized values more
> > deterministic.
>
> I like having this available on both alloc and free. This makes it
> much more configurable for the end users who can adapt to their work
> loads, etc.
>
> > Linux build with -j12, init_on_free=1: +24.42% sys time (st.err 0.52%)
> > [...]
> > Linux build with -j12, init_on_alloc=1: +0.57% sys time (st.err 0.40%)
>
> Any idea why there is such a massive difference here? This seems too
> high just for cache-locality effects of touching all the freed pages.
I've measured a single `make -j12` again under perf stat.
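The invocation was roughly as follows (the exact event list is my
reconstruction from the counters reported below):

  perf stat \
    -e cache-misses,cache-references,page-faults \
    -e L1-dcache-loads,L1-dcache-load-misses,L1-icache-load-misses \
    -e dTLB-loads,dTLB-load-misses,iTLB-loads,iTLB-load-misses \
    make -j12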
The numbers for init_on_alloc=1 were:
     4936513177  cache-misses             #  8.056 % of all cache refs      (44.44%)
    61278262461  cache-references                                           (44.45%)
       42844784  page-faults
  1449630221347  L1-dcache-loads                                            (44.45%)
    50569965485  L1-dcache-load-misses    #  3.49% of all L1-dcache hits    (44.44%)
   299987258588  L1-icache-load-misses                                      (44.44%)
  1449857258648  dTLB-loads                                                 (44.45%)
      826292490  dTLB-load-misses         #  0.06% of all dTLB cache hits   (44.44%)
    22028472701  iTLB-loads                                                 (44.44%)
      858451905  iTLB-load-misses         #  3.90% of all iTLB cache hits   (44.45%)

  162.120107145 seconds time elapsed

and for init_on_free=1:
     6666716777  cache-misses             # 10.862 % of all cache refs      (44.45%)
    61378258434  cache-references                                           (44.46%)
       42850913  page-faults
  1449986416063  L1-dcache-loads                                            (44.45%)
    51277338771  L1-dcache-load-misses    #  3.54% of all L1-dcache hits    (44.45%)
   298295905805  L1-icache-load-misses                                      (44.44%)
  1450378031344  dTLB-loads                                                 (44.43%)
      807011341  dTLB-load-misses         #  0.06% of all dTLB cache hits   (44.44%)
    22044976638  iTLB-loads                                                 (44.44%)
      846377845  iTLB-load-misses         #  3.84% of all iTLB cache hits   (44.45%)

  164.427054893 seconds time elapsed
(note that the speed difference between the two modes is much smaller under perf)
init_on_free=1 causes 1.73B more cache misses than init_on_alloc=1.
If I'm understanding correctly, a cache miss costs 12-14 cycles on my
3GHz Skylake CPU, which can explain a 7-8 second difference between
the two modes.
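Spelling that estimate out (taking the 12-14 cycle figure at face
value and assuming, for simplicity, that the extra misses are fully
serialized):

  1.73e9 misses * 12 cycles / 3e9 cycles/sec ~= 6.9 seconds
  1.73e9 misses * 14 cycles / 3e9 cycles/sec ~= 8.1 seconds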
But, as I've just realized, these counters cover both kernel and
userspace, so while the estimate is roughly consistent with the
wall-time difference (120s for init_on_alloc, 130s for init_on_free),
it doesn't tell us much about the time spent in the kernel.
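(A follow-up could restrict the counters to kernel mode with perf's
":k" event modifier, e.g.:

  perf stat -e cache-misses:k,cache-references:k make -j12

which would isolate the kernel-side cost.)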
> --
> Kees Cook
--
Alexander Potapenko
Software Engineer
Google Germany GmbH
Erika-Mann-Straße, 33
80636 München
Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg