Message-ID: <7497bd44-1fda-e073-ba7f-18a76577b64a@redhat.com>
Date: Tue, 12 Feb 2019 12:01:36 -0800
From: Laura Abbott <labbott@...hat.com>
To: Khalid Aziz <khalid.aziz@...cle.com>, Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Cc: juergh@...il.com, tycho@...ho.ws, jsteckli@...zon.de, ak@...ux.intel.com, torvalds@...ux-foundation.org, liran.alon@...cle.com, keescook@...gle.com, Juerg Haefliger <juerg.haefliger@...onical.com>, deepa.srinivasan@...cle.com, chris.hyser@...cle.com, tyhicks@...onical.com, dwmw@...zon.co.uk, andrew.cooper3@...rix.com, jcm@...hat.com, boris.ostrovsky@...cle.com, kanth.ghatraju@...cle.com, joao.m.martins@...cle.com, jmattson@...gle.com, pradeep.vincent@...cle.com, john.haxby@...cle.com, tglx@...utronix.de, kirill.shutemov@...ux.intel.com, hch@....de, steven.sistare@...cle.com, kernel-hardening@...ts.openwall.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org, Tycho Andersen <tycho@...ker.com>
Subject: Re: [RFC PATCH v7 05/16] arm64/mm: Add support for XPFO

On 2/12/19 7:52 AM, Khalid Aziz wrote:
> On 1/23/19 7:24 AM, Konrad Rzeszutek Wilk wrote:
>> On Thu, Jan 10, 2019 at 02:09:37PM -0700, Khalid Aziz wrote:
>>> From: Juerg Haefliger <juerg.haefliger@...onical.com>
>>>
>>> Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
>>> provide a hook for updating a single kernel page table entry (which is
>>> required by the generic XPFO code).
>>>
>>> v6: use flush_tlb_kernel_range() instead of __flush_tlb_one()
>>>
>>> CC: linux-arm-kernel@...ts.infradead.org
>>> Signed-off-by: Juerg Haefliger <juerg.haefliger@...onical.com>
>>> Signed-off-by: Tycho Andersen <tycho@...ker.com>
>>> Signed-off-by: Khalid Aziz <khalid.aziz@...cle.com>
>>> ---
>>>  arch/arm64/Kconfig     |  1 +
>>>  arch/arm64/mm/Makefile |  2 ++
>>>  arch/arm64/mm/xpfo.c   | 58 ++++++++++++++++++++++++++++++++++++++++++
>>>  3 files changed, 61 insertions(+)
>>>  create mode 100644 arch/arm64/mm/xpfo.c
>>>
>>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>>> index ea2ab0330e3a..f0a9c0007d23 100644
>>> --- a/arch/arm64/Kconfig
>>> +++ b/arch/arm64/Kconfig
>>> @@ -171,6 +171,7 @@ config ARM64
>>>  	select SWIOTLB
>>>  	select SYSCTL_EXCEPTION_TRACE
>>>  	select THREAD_INFO_IN_TASK
>>> +	select ARCH_SUPPORTS_XPFO
>>>  	help
>>>  	  ARM 64-bit (AArch64) Linux support.
>>>
>>> diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
>>> index 849c1df3d214..cca3808d9776 100644
>>> --- a/arch/arm64/mm/Makefile
>>> +++ b/arch/arm64/mm/Makefile
>>> @@ -12,3 +12,5 @@ KASAN_SANITIZE_physaddr.o	+= n
>>>
>>>  obj-$(CONFIG_KASAN)		+= kasan_init.o
>>>  KASAN_SANITIZE_kasan_init.o	:= n
>>> +
>>> +obj-$(CONFIG_XPFO)		+= xpfo.o
>>> diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
>>> new file mode 100644
>>> index 000000000000..678e2be848eb
>>> --- /dev/null
>>> +++ b/arch/arm64/mm/xpfo.c
>>> @@ -0,0 +1,58 @@
>>> +/*
>>> + * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
>>> + * Copyright (C) 2016 Brown University. All rights reserved.
>>> + *
>>> + * Authors:
>>> + *   Juerg Haefliger <juerg.haefliger@....com>
>>> + *   Vasileios P. Kemerlis <vpk@...brown.edu>
>>> + *
>>> + * This program is free software; you can redistribute it and/or modify it
>>> + * under the terms of the GNU General Public License version 2 as published by
>>> + * the Free Software Foundation.
>>> + */
>>> +
>>> +#include <linux/mm.h>
>>> +#include <linux/module.h>
>>> +
>>> +#include <asm/tlbflush.h>
>>> +
>>> +/*
>>> + * Lookup the page table entry for a virtual address and return a pointer to
>>> + * the entry. Based on x86 tree.
>>> + */
>>> +static pte_t *lookup_address(unsigned long addr)
>>> +{
>>> +	pgd_t *pgd;
>>> +	pud_t *pud;
>>> +	pmd_t *pmd;
>>> +
>>> +	pgd = pgd_offset_k(addr);
>>> +	if (pgd_none(*pgd))
>>> +		return NULL;
>>> +
>>> +	pud = pud_offset(pgd, addr);
>>> +	if (pud_none(*pud))
>>> +		return NULL;
>>> +
>>> +	pmd = pmd_offset(pud, addr);
>>> +	if (pmd_none(*pmd))
>>> +		return NULL;
>>> +
>>> +	return pte_offset_kernel(pmd, addr);
>>> +}
>>> +
>>> +/* Update a single kernel page table entry */
>>> +inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
>>> +{
>>> +	pte_t *pte = lookup_address((unsigned long)kaddr);
>>> +
>>> +	set_pte(pte, pfn_pte(page_to_pfn(page), prot));
>>
>> Though on the other hand.. what if the page is a PMD? Do you really want
>> to do this?
>>
>> What if 'pte' is NULL?
>>
>>> +}
>>> +
>>> +inline void xpfo_flush_kernel_tlb(struct page *page, int order)
>>> +{
>>> +	unsigned long kaddr = (unsigned long)page_address(page);
>>> +	unsigned long size = PAGE_SIZE;
>>> +
>>> +	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
>>
>> Ditto here. You are assuming it is a PTE, but it may be a PMD or such.
>> Or worse - the lookup_address could return NULL.
>>
>>> +}
>>> --
>>> 2.17.1
>>>
>
> Hi Konrad,
>
> This makes sense. The x86 version of set_kpte() checks pte for NULL and
> also checks whether the page is a PMD. What you said about adding a level
> parameter to lookup_address() for arm now makes more sense.
>
> Can someone with knowledge of the arm64 MMU make recommendations here?
>
> Thanks,
> Khalid

arm64 can't split larger pages and requires that everything be mapped as
pages (see [RFC PATCH v7 08/16] arm64/mm: disable section/contiguous
mappings if XPFO is enabled).
Any situation where we would get something other than a pte would be a bug.

Thanks,
Laura