Message-ID: <87lfiso3rk.fsf@dja-thinkpad.axtens.net>
Date: Thu, 06 Aug 2020 13:24:31 +1000
From: Daniel Axtens <dja@...ens.net>
To: "Christopher M. Riedl" <cmr@...ormatik.wtf>, linuxppc-dev@...ts.ozlabs.org
Cc: kernel-hardening@...ts.openwall.com
Subject: Re: [PATCH v2 2/5] powerpc/lib: Initialize a temporary mm for code patching

"Christopher M. Riedl" <cmr@...ormatik.wtf> writes:

> When code patching a STRICT_KERNEL_RWX kernel the page containing the
> address to be patched is temporarily mapped with permissive memory
> protections. Currently, a per-cpu vmalloc patch area is used for this
> purpose. While the patch area is per-cpu, the temporary page mapping is
> inserted into the kernel page tables for the duration of the patching.
> The mapping is exposed to CPUs other than the patching CPU - this is
> undesirable from a hardening perspective.
>
> Use the `poking_init` init hook to prepare a temporary mm and patching
> address. Initialize the temporary mm by copying the init mm. Choose a
> randomized patching address inside the temporary mm userspace address
> portion. The next patch uses the temporary mm and patching address for
> code patching.
>
> Based on x86 implementation:
>
> commit 4fc19708b165
> ("x86/alternatives: Initialize temporary mm for patching")
>
> Signed-off-by: Christopher M. Riedl <cmr@...ormatik.wtf>
> ---
>  arch/powerpc/lib/code-patching.c | 33 ++++++++++++++++++++++++++++++++
>  1 file changed, 33 insertions(+)
>
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index 0a051dfeb177..8ae1a9e5fe6e 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -11,6 +11,8 @@
>  #include <linux/cpuhotplug.h>
>  #include <linux/slab.h>
>  #include <linux/uaccess.h>
> +#include <linux/sched/task.h>
> +#include <linux/random.h>
>
>  #include <asm/tlbflush.h>
>  #include <asm/page.h>
> @@ -44,6 +46,37 @@ int raw_patch_instruction(struct ppc_inst *addr, struct ppc_inst instr)
>  }
>
>  #ifdef CONFIG_STRICT_KERNEL_RWX
> +
> +static struct mm_struct *patching_mm __ro_after_init;
> +static unsigned long patching_addr __ro_after_init;
> +
> +void __init poking_init(void)
> +{
> +	spinlock_t *ptl; /* for protecting pte table */
> +	pte_t *ptep;
> +
> +	/*
> +	 * Some parts of the kernel (static keys for example) depend on
> +	 * successful code patching. Code patching under STRICT_KERNEL_RWX
> +	 * requires this setup - otherwise we cannot patch at all. We use
> +	 * BUG_ON() here and later since an early failure is preferred to
> +	 * buggy behavior and/or strange crashes later.
> +	 */
> +	patching_mm = copy_init_mm();
> +	BUG_ON(!patching_mm);
> +
> +	/*
> +	 * In hash we cannot go above DEFAULT_MAP_WINDOW easily.
> +	 * XXX: Do we want additional bits of entropy for radix?
> +	 */
> +	patching_addr = (get_random_long() & PAGE_MASK) %
> +			(DEFAULT_MAP_WINDOW - PAGE_SIZE);

It took me a while to understand this calculation. I see that it's
calculating a base address for a page in which to do patching. It does
the following:

 - get a random long

 - mask with PAGE_MASK so as to get a page-aligned value

 - make sure that the base address is at least one PAGE_SIZE below
   DEFAULT_MAP_WINDOW, so we have a clear page between the base and
   DEFAULT_MAP_WINDOW.
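Those three steps can be checked outside the kernel. Here is a minimal
userspace sketch - not the kernel code, just an illustration with the
constants hard-coded to the 64K/Book3S values worked through next, and
rand() standing in for get_random_long():

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* assumed constants for Book3S with 64K pages (see below) */
    #define PAGE_SIZE          0x10000UL
    #define PAGE_MASK          (~(PAGE_SIZE - 1))
    #define DEFAULT_MAP_WINDOW 0x0000800000000000UL

    int main(void)
    {
            for (int i = 0; i < 1000000; i++) {
                    /* rand() stands in for get_random_long() */
                    unsigned long r = ((unsigned long)rand() << 32) ^
                                      (unsigned long)rand();
                    unsigned long addr = (r & PAGE_MASK) %
                                         (DEFAULT_MAP_WINDOW - PAGE_SIZE);

                    /* both operands of % are multiples of PAGE_SIZE,
                     * so the remainder is too: addr stays page aligned */
                    assert((addr & ~PAGE_MASK) == 0);
                    /* and at least one clear page remains between addr
                     * and DEFAULT_MAP_WINDOW */
                    assert(addr + PAGE_SIZE < DEFAULT_MAP_WINDOW);
            }
            puts("all candidates page aligned and below the window");
            return 0;
    }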
On 64-bit Book3S with 64K pages, that works out to be:

  PAGE_SIZE = 0x0000 0000 0001 0000
  PAGE_MASK = 0xFFFF FFFF FFFF 0000

  DEFAULT_MAP_WINDOW = DEFAULT_MAP_WINDOW_USER64 = TASK_SIZE_128TB
                     = 0x0000_8000_0000_0000

  DEFAULT_MAP_WINDOW - PAGE_SIZE = 0x0000 7FFF FFFF 0000

It took a while (and a conversation with my wife, who studied pure
maths!) but I am convinced that the modulo preserves the page-alignment
of the patching address.

One thing I did realise is that patching_addr can be zero at the end of
this process. That seems dubious and slightly error-prone to me - is
the patching process robust to that, or should we exclude it?

Anyway, if I have the maths right, there are 0x7fffffff or ~2 billion
possible locations for the patching page, which is just shy of 31 bits
of entropy. I think this compares pretty favourably to most (K)ASLR
implementations?

What's the range if built with 4K pages?

Kind regards,
Daniel

> +
> +	ptep = get_locked_pte(patching_mm, patching_addr, &ptl);
> +	BUG_ON(!ptep);
> +	pte_unmap_unlock(ptep, ptl);
> +}
> +
>  static DEFINE_PER_CPU(struct vm_struct *, text_poke_area);
>
>  static int text_area_cpu_up(unsigned int cpu)
> --
> 2.27.0
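P.S. The entropy figure above can be double-checked with a throwaway
userspace calculation, using the same assumed 64K/Book3S constants as
the earlier sketch (again just an illustration; note the count includes
the patching_addr == 0 case mentioned above):

    /* build with: cc entropy.c -lm */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned long page_size  = 0x10000UL;            /* 64K */
            unsigned long map_window = 0x0000800000000000UL; /* 128TB */
            /* candidate pages: 0, PAGE_SIZE, ..., one per multiple of
             * PAGE_SIZE strictly below map_window - page_size */
            unsigned long slots = (map_window - page_size) / page_size;

            /* 0x7fffffff slots -> just shy of 31 bits of entropy */
            printf("slots = %#lx (%lu), ~%.9f bits\n",
                   slots, slots, log2((double)slots));
            return 0;
    }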