Message-ID: <1b46e0a5-476b-1eaa-3376-6848caf9e7ab@oracle.com>
Date: Thu, 17 Jan 2019 08:14:41 -0700
From: Khalid Aziz <khalid.aziz@...cle.com>
To: Laura Abbott <labbott@...hat.com>, juergh@...il.com, tycho@...ho.ws,
	jsteckli@...zon.de, ak@...ux.intel.com, torvalds@...ux-foundation.org,
	liran.alon@...cle.com, keescook@...gle.com, konrad.wilk@...cle.com
Cc: deepa.srinivasan@...cle.com, chris.hyser@...cle.com,
	tyhicks@...onical.com, dwmw@...zon.co.uk, andrew.cooper3@...rix.com,
	jcm@...hat.com, boris.ostrovsky@...cle.com, kanth.ghatraju@...cle.com,
	joao.m.martins@...cle.com, jmattson@...gle.com,
	pradeep.vincent@...cle.com, john.haxby@...cle.com, tglx@...utronix.de,
	kirill.shutemov@...ux.intel.com, hch@....de, steven.sistare@...cle.com,
	kernel-hardening@...ts.openwall.com, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, x86@...nel.org,
	"Vasileios P . Kemerlis" <vpk@...columbia.edu>,
	Juerg Haefliger <juerg.haefliger@...onical.com>,
	Tycho Andersen <tycho@...ker.com>,
	Marco Benatto <marco.antonio.780@...il.com>,
	David Woodhouse <dwmw2@...radead.org>
Subject: Re: [RFC PATCH v7 14/16] EXPERIMENTAL: xpfo, mm: optimize spin lock usage in xpfo_kmap

On 1/16/19 5:18 PM, Laura Abbott wrote:
> On 1/10/19 1:09 PM, Khalid Aziz wrote:
>> From: Julian Stecklina <jsteckli@...zon.de>
>>
>> We can reduce spin lock usage in xpfo_kmap to the 0->1 transition of
>> the mapcount. This means that xpfo_kmap() can now race and that we
>> get spurious page faults.
>>
>> The page fault handler helps the system make forward progress by
>> fixing the page table instead of allowing repeated page faults until
>> the right xpfo_kmap went through.
>>
>> Model-checked with up to 4 concurrent callers with Spin.
>>
>
> This needs the spurious check for arm64 as well. This at
> least gets me booting but could probably use more review:
>
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 7d9571f4ae3d..8f425848cbb9 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -32,6 +32,7 @@
>  #include <linux/perf_event.h>
>  #include <linux/preempt.h>
>  #include <linux/hugetlb.h>
> +#include <linux/xpfo.h>
>
>  #include <asm/bug.h>
>  #include <asm/cmpxchg.h>
> @@ -289,6 +290,9 @@ static void __do_kernel_fault(unsigned long addr, unsigned int esr,
>  	if (!is_el1_instruction_abort(esr) && fixup_exception(regs))
>  		return;
>
> +	if (xpfo_spurious_fault(addr))
> +		return;
> +
>  	if (is_el1_permission_fault(addr, esr, regs)) {
>  		if (esr & ESR_ELx_WNR)
>  			msg = "write to read-only memory";
>

That makes sense. Thanks for debugging this. I will add this to patch 14
("EXPERIMENTAL: xpfo, mm: optimize spin lock usage in xpfo_kmap").

Thanks,
Khalid
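
[Editor's note: for readers not following the series, the sketch below illustrates the lock-elision pattern the quoted commit message describes: only the 0->1 transition of the mapcount takes the spin lock and edits the page table, so a racing CPU may touch the page before the mapping is visible and must be rescued by a spurious-fault check in the arch fault handler. This is a simplified illustration, not the code from the patch series; the struct layout, xpfo_map_page(), xpfo_fault_is_spurious(), and the set_kpte() helper are assumed placeholders.]

/* Illustrative sketch only -- not the actual XPFO patch code. */
#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <linux/mm.h>

struct xpfo_state {
	atomic_t	mapcount;	/* CPUs that need the kernel mapping */
	spinlock_t	maplock;	/* serializes page-table updates */
	bool		mapped;		/* kernel mapping currently present? */
};

static void xpfo_map_page(struct xpfo_state *xpfo, struct page *page)
{
	/*
	 * Fast path: anyone other than the 0->1 transition returns
	 * without taking the lock.  We may return before the mapping
	 * is actually established, which is exactly the race the
	 * spurious-fault fixup below papers over.
	 */
	if (atomic_inc_return(&xpfo->mapcount) > 1)
		return;

	spin_lock(&xpfo->maplock);
	if (!xpfo->mapped) {
		/* set_kpte() is an assumed helper that edits the kernel PTE. */
		set_kpte(page_address(page), page, PAGE_KERNEL);
		xpfo->mapped = true;
	}
	spin_unlock(&xpfo->maplock);
}

/*
 * Called from the arch fault handler (cf. the arm64 hunk above): if
 * the faulting address hit an XPFO page whose mapping has since been
 * established by the racing winner, the fault is spurious and the
 * access can simply be retried.
 */
static bool xpfo_fault_is_spurious(struct xpfo_state *xpfo)
{
	bool spurious;

	spin_lock(&xpfo->maplock);
	spurious = xpfo->mapped && atomic_read(&xpfo->mapcount) > 0;
	spin_unlock(&xpfo->maplock);

	return spurious;
}

[The design trade-off is that the common kmap case becomes a single atomic increment, at the cost of tolerating (and repairing) transient faults on the loser of the race -- which is why each architecture's kernel-fault path needs a hook like the arm64 one Laura adds above.]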