Message-ID: <1534861342.14722.11.camel@infradead.org>
Date: Tue, 21 Aug 2018 15:22:22 +0100
From: David Woodhouse <dwmw2@...radead.org>
To: Liran Alon <liran.alon@...cle.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>, Konrad Rzeszutek Wilk
<konrad.wilk@...cle.com>, juerg.haefliger@....com,
deepa.srinivasan@...cle.com, Jim Mattson <jmattson@...gle.com>, Andrew
Cooper <andrew.cooper3@...rix.com>, Linux Kernel Mailing List
<linux-kernel@...r.kernel.org>, Boris Ostrovsky
<boris.ostrovsky@...cle.com>, linux-mm <linux-mm@...ck.org>, Thomas
Gleixner <tglx@...utronix.de>, joao.m.martins@...cle.com,
pradeep.vincent@...cle.com, Andi Kleen <ak@...ux.intel.com>, Khalid Aziz
<khalid.aziz@...cle.com>, kanth.ghatraju@...cle.com, Kees Cook
<keescook@...gle.com>, jsteckli@...inf.tu-dresden.de, Kernel Hardening
<kernel-hardening@...ts.openwall.com>, chris.hyser@...cle.com, Tyler Hicks
<tyhicks@...onical.com>, John Haxby <john.haxby@...cle.com>, Jon Masters
<jcm@...hat.com>
Subject: Re: Redoing eXclusive Page Frame Ownership (XPFO) with isolated
CPUs in mind (for KVM to isolate its guests per CPU)
On Tue, 2018-08-21 at 17:01 +0300, Liran Alon wrote:
>
> > On 21 Aug 2018, at 12:57, David Woodhouse <dwmw2@...radead.org> wrote:
> > 
> > Another alternative... I'm told POWER8 does an interesting thing with
> > hyperthreading and gang scheduling for KVM. The host kernel doesn't
> > actually *see* the hyperthreads at all, and KVM just launches the full
> > set of siblings when it enters a guest, and gathers them again when any
> > of them exits. That's definitely worth investigating as an option for
> > x86, too.
>
> I actually think that such a scheduling mechanism, which prevents
> leaking cache entries to sibling hyperthreads, should co-exist
> with the KVM address space isolation to fully mitigate L1TF
> and other similar vulnerabilities. The address space isolation should
> prevent VMExit handler code gadgets from loading arbitrary host
> memory into the cache. Once the VMExit code path switches to the full
> host address space, we should also make sure that no sibling
> hyperthread is still running in the guest.
The KVM POWER8 solution (see arch/powerpc/kvm/book3s_hv.c) does that.
The siblings are *never* running host kernel code; they're all torn
down when any of them exits the guest. And it's always the *same*
guest.
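The shape of it, as a toy userspace model (all of the names below are
invented for illustration; the real machinery is kvmppc_run_core() and
friends in book3s_hv.c):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define NR_SIBLINGS	4

/* Gang entry/exit: all siblings enter the guest together, and all of
 * them are gathered back out as soon as any one of them exits. */
static pthread_barrier_t entry_barrier;
static atomic_bool in_guest;

static void *sibling_thread(void *arg)
{
	long id = (long)arg;

	/* Nobody runs guest code until the whole gang is assembled. */
	pthread_barrier_wait(&entry_barrier);

	while (atomic_load(&in_guest)) {
		/* "Guest" work; no host kernel code runs on this
		 * thread while the flag is set. */
		usleep(1000);
	}

	printf("sibling %ld gathered on exit\n", id);
	return NULL;
}

int main(void)
{
	pthread_t tids[NR_SIBLINGS];
	long i;

	pthread_barrier_init(&entry_barrier, NULL, NR_SIBLINGS);
	atomic_store(&in_guest, 1);

	for (i = 0; i < NR_SIBLINGS; i++)
		pthread_create(&tids[i], NULL, sibling_thread, (void *)i);

	/* One sibling taking a VM exit flips the flag for everyone. */
	usleep(10000);
	atomic_store(&in_guest, 0);

	for (i = 0; i < NR_SIBLINGS; i++)
		pthread_join(tids[i], NULL);
	return 0;
}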
> Focusing on the scheduling mechanism, we must make sure that when a
> logical processor runs guest code, all sibling logical processors
> run code which does not populate the L1D cache with information
> unrelated to this VM. This includes forbidding one logical processor
> from running guest code while a sibling is running a host task such
> as a NIC interrupt handler.
> Thus, when a vCPU thread exits the guest into the host and the VMExit
> handler reaches a code flow which could populate the L1D cache with
> such information, we should force the sibling logical processors to
> exit the guest as well, such that they will be allowed to resume only
> on a core whose L1D cache we can guarantee is free of information
> unrelated to this VM.
>
> At first, I created a patch series which attempts to implement such
> a mechanism in KVM. However, it became clear to me that this may
> need to be implemented in the scheduler itself. This is because:
> 1. It is difficult to handle all the new scheduling constraints only
> in KVM.
> 2. This mechanism should be relevant for any Type-2 hypervisor which
> runs inside Linux besides KVM (such as VMware Workstation or
> VirtualBox).
> 3. This mechanism could also be used to prevent future “core-cache-
> leaking” vulnerabilities from being exploited between processes of
> different security domains which run as siblings on the same core.
I'm not sure I agree. If KVM is handling "only let siblings run the
*same* guest" and the siblings aren't visible to the host at all,
that's quite simple. Any other hypervisor can also do it.
Now, the down-side of this is that the siblings aren't visible to the
host. They can't be used to run multiple threads of the same userspace
process; only multiple threads of the same KVM guest. A truly generic
core scheduler would cope with userspace threads too (rough sketch of
the idea at the end of this mail).
BUT I strongly suspect there's a huge correlation between the set of
people who care enough about the KVM/L1TF issue to enable a costly
XPFO-like solution, and the set of people who mostly don't give a shit
about having sibling CPUs available to run the host's userspace anyway.
This is not the "I happen to run a Windows VM on my Linux desktop" use
case...
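To be concrete about what "truly generic" would mean: tag each task
with a trust-domain cookie, and only let tasks with matching cookies
occupy sibling threads of the same core. A toy sketch (the struct and
field names here are invented; nothing like this exists in the
scheduler today):

#include <stdio.h>

struct task {
	const char *comm;
	unsigned long core_cookie;	/* same value == same trust domain */
};

/* Hypothetical admission test: two tasks may be co-scheduled on
 * sibling hyperthreads only if they share a trust domain. */
static int can_share_core(const struct task *a, const struct task *b)
{
	return a->core_cookie == b->core_cookie;
}

int main(void)
{
	struct task vcpu0 = { "kvm-vcpu-0", 42 };
	struct task vcpu1 = { "kvm-vcpu-1", 42 };
	struct task nic_irq = { "nic-irq", 7 };

	printf("vcpu0 + vcpu1 may share a core: %d\n",
	       can_share_core(&vcpu0, &vcpu1));
	printf("vcpu0 + nic_irq may share a core: %d\n",
	       can_share_core(&vcpu0, &nic_irq));
	return 0;
}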