Message-Id: <378ee1e7e4c17e3bf6e49e1fb6c7cd9abd18ccfe.1549927666.git.igor.stoppa@huawei.com>
Date: Tue, 12 Feb 2019 01:27:40 +0200
From: Igor Stoppa <igor.stoppa@...il.com>
To:
Cc: Igor Stoppa <igor.stoppa@...wei.com>,
	Andy Lutomirski <luto@...capital.net>,
	Nadav Amit <nadav.amit@...il.com>,
	Matthew Wilcox <willy@...radead.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Kees Cook <keescook@...omium.org>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	Mimi Zohar <zohar@...ux.vnet.ibm.com>,
	Thiago Jung Bauermann <bauerman@...ux.ibm.com>,
	Ahmed Soliman <ahmedsoliman@...a.vt.edu>,
	linux-integrity@...r.kernel.org,
	kernel-hardening@...ts.openwall.com,
	linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: [RFC PATCH v4 03/12] __wr_after_init: x86_64: randomize mapping offset

x86_64-specific way of defining the base address for the alternate
mapping used by write-rare.

Since the kernel address space spans 64TB and it is mapped into a user
address space of 128TB, the kernel address space can be shifted by a
random, page-aligned offset of up to 64TB.

This is accomplished by providing an arch-specific version of the
function __init_wr_base().

Signed-off-by: Igor Stoppa <igor.stoppa@...wei.com>

CC: Andy Lutomirski <luto@...capital.net>
CC: Nadav Amit <nadav.amit@...il.com>
CC: Matthew Wilcox <willy@...radead.org>
CC: Peter Zijlstra <peterz@...radead.org>
CC: Kees Cook <keescook@...omium.org>
CC: Dave Hansen <dave.hansen@...ux.intel.com>
CC: Mimi Zohar <zohar@...ux.vnet.ibm.com>
CC: Thiago Jung Bauermann <bauerman@...ux.ibm.com>
CC: Ahmed Soliman <ahmedsoliman@...a.vt.edu>
CC: linux-integrity@...r.kernel.org
CC: kernel-hardening@...ts.openwall.com
CC: linux-mm@...ck.org
CC: linux-kernel@...r.kernel.org
---
 arch/x86/mm/Makefile |  2 ++
 arch/x86/mm/prmem.c  | 20 ++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 4b101dd6e52f..66652de1e2c7 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -53,3 +53,5 @@ obj-$(CONFIG_PAGE_TABLE_ISOLATION)	+= pti.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_identity.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_boot.o
+
+obj-$(CONFIG_PRMEM)		+= prmem.o
diff --git a/arch/x86/mm/prmem.c b/arch/x86/mm/prmem.c
new file mode 100644
index 000000000000..b04fc03f92fb
--- /dev/null
+++ b/arch/x86/mm/prmem.c
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * prmem.c: Memory Protection Library - x86_64 backend
+ *
+ * (C) Copyright 2018-2019 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa <igor.stoppa@...wei.com>
+ */
+
+#include <linux/mm.h>
+#include <linux/mmu_context.h>
+
+unsigned long __init __init_wr_base(void)
+{
+	/*
+	 * Place 64TB of kernel address space within 128TB of user address
+	 * space, at a random page aligned offset.
+	 */
+	return (((unsigned long)kaslr_get_random_long("WR Poke")) &
+		PAGE_MASK) % (64 * _BITUL(40));
+}
-- 
2.19.1
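
[Editor's note: for readers following the offset arithmetic, here is a
minimal standalone userspace sketch of the same computation -- purely
illustrative demo code, not part of the patch. fake_kaslr_get_random_long()
is a hypothetical stand-in for the kernel's entropy source. It shows why
masking with PAGE_MASK before taking the modulo by 64TB keeps the result
page aligned: 64 * 2^40 is itself a multiple of PAGE_SIZE.]

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define _BITUL(x)	(1UL << (x))

/*
 * Stand-in for the kernel's kaslr_get_random_long(); a fixed value is
 * used here purely for illustration -- the kernel draws real entropy.
 */
static unsigned long fake_kaslr_get_random_long(void)
{
	return 0x123456789abcdefUL;
}

int main(void)
{
	unsigned long rnd = fake_kaslr_get_random_long();

	/*
	 * Same expression as __init_wr_base(): page-align the random
	 * value, then bound it to [0, 64TB). Since both operands of the
	 * modulo are multiples of PAGE_SIZE, the remainder is too.
	 */
	unsigned long base = (rnd & PAGE_MASK) % (64 * _BITUL(40));

	printf("wr base offset: %#lx, page aligned: %s\n",
	       base, (base & ~PAGE_MASK) ? "no" : "yes");
	return 0;
}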