|
|
Message-Id: <5bab13e12d4215112ad2180106cc6bb9b513754a.1554248002.git.khalid.aziz@oracle.com>
Date: Wed, 3 Apr 2019 11:34:12 -0600
From: Khalid Aziz <khalid.aziz@...cle.com>
To: juergh@...il.com, tycho@...ho.ws, jsteckli@...zon.de, ak@...ux.intel.com,
liran.alon@...cle.com, keescook@...gle.com, konrad.wilk@...cle.com
Cc: deepa.srinivasan@...cle.com, chris.hyser@...cle.com, tyhicks@...onical.com,
dwmw@...zon.co.uk, andrew.cooper3@...rix.com, jcm@...hat.com,
boris.ostrovsky@...cle.com, kanth.ghatraju@...cle.com,
joao.m.martins@...cle.com, jmattson@...gle.com,
pradeep.vincent@...cle.com, john.haxby@...cle.com, tglx@...utronix.de,
kirill.shutemov@...ux.intel.com, hch@....de, steven.sistare@...cle.com,
labbott@...hat.com, luto@...nel.org, dave.hansen@...el.com,
peterz@...radead.org, aaron.lu@...el.com, akpm@...ux-foundation.org,
alexander.h.duyck@...ux.intel.com, amir73il@...il.com,
andreyknvl@...gle.com, aneesh.kumar@...ux.ibm.com,
anthony.yznaga@...cle.com, ard.biesheuvel@...aro.org, arnd@...db.de,
arunks@...eaurora.org, ben@...adent.org.uk, bigeasy@...utronix.de,
bp@...en8.de, brgl@...ev.pl, catalin.marinas@....com, corbet@....net,
cpandya@...eaurora.org, daniel.vetter@...ll.ch,
dan.j.williams@...el.com, gregkh@...uxfoundation.org, guro@...com,
hannes@...xchg.org, hpa@...or.com, iamjoonsoo.kim@....com,
james.morse@....com, jannh@...gle.com, jgross@...e.com,
jkosina@...e.cz, jmorris@...ei.org, joe@...ches.com,
jrdr.linux@...il.com, jroedel@...e.de, keith.busch@...el.com,
khalid.aziz@...cle.com, khlebnikov@...dex-team.ru, logang@...tatee.com,
marco.antonio.780@...il.com, mark.rutland@....com,
mgorman@...hsingularity.net, mhocko@...e.com, mhocko@...e.cz,
mike.kravetz@...cle.com, mingo@...hat.com, mst@...hat.com,
m.szyprowski@...sung.com, npiggin@...il.com, osalvador@...e.de,
paulmck@...ux.vnet.ibm.com, pavel.tatashin@...rosoft.com,
rdunlap@...radead.org, richard.weiyang@...il.com, riel@...riel.com,
rientjes@...gle.com, robin.murphy@....com, rostedt@...dmis.org,
rppt@...ux.vnet.ibm.com, sai.praneeth.prakhya@...el.com,
serge@...lyn.com, steve.capper@....com, thymovanbeers@...il.com,
vbabka@...e.cz, will.deacon@....com, willy@...radead.org,
yang.shi@...ux.alibaba.com, yaojun8558363@...il.com,
ying.huang@...el.com, zhangshaokun@...ilicon.com,
iommu@...ts.linux-foundation.org, x86@...nel.org,
linux-arm-kernel@...ts.infradead.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-security-module@...r.kernel.org,
Khalid Aziz <khalid@...ehiking.org>,
kernel-hardening@...ts.openwall.com,
"Vasileios P . Kemerlis" <vpk@...columbia.edu>,
Juerg Haefliger <juerg.haefliger@...onical.com>,
David Woodhouse <dwmw2@...radead.org>
Subject: [RFC PATCH v9 11/13] xpfo, mm: optimize spinlock usage in xpfo_kunmap
From: Julian Stecklina <jsteckli@...zon.de>
Only the xpfo_kunmap call that actually needs to unmap the page
needs to be serialized. We need to be careful to handle the case
where, after the atomic decrement of the mapcount, an xpfo_kmap
increased the mapcount again. In that case, we can safely skip
modifying the page table.
Model-checked with up to 4 concurrent callers with Spin.
Signed-off-by: Julian Stecklina <jsteckli@...zon.de>
Signed-off-by: Khalid Aziz <khalid.aziz@...cle.com>
Cc: Khalid Aziz <khalid@...ehiking.org>
Cc: x86@...nel.org
Cc: kernel-hardening@...ts.openwall.com
Cc: Vasileios P. Kemerlis <vpk@...columbia.edu>
Cc: Juerg Haefliger <juerg.haefliger@...onical.com>
Cc: Tycho Andersen <tycho@...ho.ws>
Cc: Marco Benatto <marco.antonio.780@...il.com>
Cc: David Woodhouse <dwmw2@...radead.org>
---
include/linux/xpfo.h | 24 +++++++++++++++---------
1 file changed, 15 insertions(+), 9 deletions(-)
diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
index 2318c7eb5fb7..37e7f52fa6ce 100644
--- a/include/linux/xpfo.h
+++ b/include/linux/xpfo.h
@@ -61,6 +61,7 @@ static inline void xpfo_kmap(void *kaddr, struct page *page)
static inline void xpfo_kunmap(void *kaddr, struct page *page)
{
unsigned long flags;
+ bool flush_tlb = false;
if (!static_branch_unlikely(&xpfo_inited))
return;
@@ -72,18 +73,23 @@ static inline void xpfo_kunmap(void *kaddr, struct page *page)
* The page is to be allocated back to user space, so unmap it from
* the kernel, flush the TLB and tag it as a user page.
*/
- spin_lock_irqsave(&page->xpfo_lock, flags);
-
if (atomic_dec_return(&page->xpfo_mapcount) == 0) {
-#ifdef CONFIG_XPFO_DEBUG
- WARN_ON(PageXpfoUnmapped(page));
-#endif
- SetPageXpfoUnmapped(page);
- set_kpte(kaddr, page, __pgprot(0));
- xpfo_flush_kernel_tlb(page, 0);
+ spin_lock_irqsave(&page->xpfo_lock, flags);
+
+ /*
+	 * In the case where we raced with kmap after the
+ * atomic_dec_return, we must not nuke the mapping.
+ */
+ if (atomic_read(&page->xpfo_mapcount) == 0) {
+ SetPageXpfoUnmapped(page);
+ set_kpte(kaddr, page, __pgprot(0));
+ flush_tlb = true;
+ }
+ spin_unlock_irqrestore(&page->xpfo_lock, flags);
}
- spin_unlock_irqrestore(&page->xpfo_lock, flags);
+ if (flush_tlb)
+ xpfo_flush_kernel_tlb(page, 0);
}
void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp, bool will_map);
--
2.17.1