Message-Id: <FD3482AC-3FB0-41DE-9347-5BD7C3DE8B11@amacapital.net>
Date: Wed, 12 Jun 2019 13:56:31 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Marius Hillenbrand <mhillenb@...zon.de>, kvm@...r.kernel.org,
 linux-kernel@...r.kernel.org, kernel-hardening@...ts.openwall.com,
 linux-mm@...ck.org, Alexander Graf <graf@...zon.de>,
 David Woodhouse <dwmw@...zon.co.uk>,
 the arch/x86 maintainers <x86@...nel.org>,
 Andy Lutomirski <luto@...nel.org>, Peter Zijlstra <peterz@...radead.org>
Subject: Re: [RFC 00/10] Process-local memory allocations for hiding KVM secrets



> On Jun 12, 2019, at 1:41 PM, Dave Hansen <dave.hansen@...el.com> wrote:
> 
> On 6/12/19 1:27 PM, Andy Lutomirski wrote:
>>> We've discussed having per-cpu page tables where a given PGD is
>>> only in use from one CPU at a time.  I *think* this scheme still
>>> works in such a case, it just adds one more PGD entry that would
>>> have to be context-switched.
>> Fair warning: Linus is on record as absolutely hating this idea. He
>> might change his mind, but it’s an uphill battle.
> 
> Just to be clear, are you referring to the per-cpu PGDs, or to this
> patch set with a per-mm kernel area?

per-CPU PGDs
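
[Editor's note: the scheme quoted above (one extra top-level PGD entry that
carries a per-mm, process-local kernel area and is swapped along with the
rest of the address space at context switch) can be illustrated with a small
userspace toy model. Everything below is a sketch under assumed names
(toy_mm, toy_switch_mm, PROCLOCAL_PGD_SLOT); it is not the patch set's
actual code or the kernel's API.]

/*
 * Toy model of the idea: each mm owns one extra top-level (PGD) slot
 * pointing at a process-local area, and "switching mm" carries exactly
 * that one additional slot along. Slot number and field names are
 * purely illustrative.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define PGD_ENTRIES        512   /* entries per top-level table on x86-64 */
#define PROCLOCAL_PGD_SLOT 400   /* hypothetical slot reserved for the per-mm area */

struct toy_mm {
	uint64_t pgd[PGD_ENTRIES];   /* simplified top-level page table */
	uint64_t proclocal_backing;  /* stands in for the process-local mapping */
};

/* The single live top-level table, standing in for whatever CR3 points at. */
static uint64_t active_pgd[PGD_ENTRIES];

static void toy_switch_mm(struct toy_mm *next)
{
	/* Shared kernel entries would normally stay identical across mms;
	 * the only per-task difference modeled here is the one extra
	 * process-local slot. */
	memcpy(active_pgd, next->pgd, sizeof(active_pgd));
	active_pgd[PROCLOCAL_PGD_SLOT] = next->proclocal_backing;
}

int main(void)
{
	struct toy_mm a = { .proclocal_backing = 0xAAAA };
	struct toy_mm b = { .proclocal_backing = 0xBBBB };

	toy_switch_mm(&a);
	printf("after switch to a: slot %d = %#llx\n", PROCLOCAL_PGD_SLOT,
	       (unsigned long long)active_pgd[PROCLOCAL_PGD_SLOT]);

	toy_switch_mm(&b);
	printf("after switch to b: slot %d = %#llx\n", PROCLOCAL_PGD_SLOT,
	       (unsigned long long)active_pgd[PROCLOCAL_PGD_SLOT]);
	return 0;
}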
