Message-ID: <11ef88f7-e418-c48e-f96c-1256c1179bca@huawei.com>
Date: Thu, 29 Aug 2019 14:26:47 +0800
From: Jason Yan <yanaijie@...wei.com>
To: Christophe Leroy <christophe.leroy@....fr>, Scott Wood <oss@...error.net>
CC: <mpe@...erman.id.au>, <linuxppc-dev@...ts.ozlabs.org>, <diana.craciun@....com>,
	<benh@...nel.crashing.org>, <paulus@...ba.org>, <npiggin@...il.com>,
	<keescook@...omium.org>, <kernel-hardening@...ts.openwall.com>,
	<wangkefeng.wang@...wei.com>, <linux-kernel@...r.kernel.org>,
	<jingxiangfeng@...wei.com>, <zhaohongjiang@...wei.com>,
	<thunder.leizhen@...wei.com>, <fanchengyang@...wei.com>, <yebin10@...wei.com>
Subject: Re: [PATCH v6 06/12] powerpc/fsl_booke/32: implement KASLR infrastructure

On 2019/8/28 13:47, Christophe Leroy wrote:
>
>
> Le 28/08/2019 à 06:54, Scott Wood a écrit :
>> On Fri, Aug 09, 2019 at 06:07:54PM +0800, Jason Yan wrote:
>>> This patch adds support for booting the kernel from places other than
>>> KERNELBASE. Since CONFIG_RELOCATABLE is already supported, all we need
>>> to do is map or copy the kernel to a proper place and relocate it.
>>> Freescale Book-E parts expect lowmem to be mapped by fixed TLB entries
>>> (TLB1). The TLB1 entries are not suitable for mapping the kernel
>>> directly in a randomized region, so we choose to copy the kernel to a
>>> proper place and restart to relocate.
>>>
>>> The offset of the kernel is not randomized yet (a fixed 64M is used).
>>> We will randomize it in the next patch.
>>>
>>> Signed-off-by: Jason Yan <yanaijie@...wei.com>
>>> Cc: Diana Craciun <diana.craciun@....com>
>>> Cc: Michael Ellerman <mpe@...erman.id.au>
>>> Cc: Christophe Leroy <christophe.leroy@....fr>
>>> Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
>>> Cc: Paul Mackerras <paulus@...ba.org>
>>> Cc: Nicholas Piggin <npiggin@...il.com>
>>> Cc: Kees Cook <keescook@...omium.org>
>>> Tested-by: Diana Craciun <diana.craciun@....com>
>>> Reviewed-by: Christophe Leroy <christophe.leroy@....fr>
>>> ---
>>>  arch/powerpc/Kconfig                          | 11 ++++
>>>  arch/powerpc/kernel/Makefile                  |  1 +
>>>  arch/powerpc/kernel/early_32.c                |  2 +-
>>>  arch/powerpc/kernel/fsl_booke_entry_mapping.S | 17 +++-
>>>  arch/powerpc/kernel/head_fsl_booke.S          | 13 +++-
>>>  arch/powerpc/kernel/kaslr_booke.c             | 62 +++++++++++++++++++
>>>  arch/powerpc/mm/mmu_decl.h                    |  7 +++
>>>  arch/powerpc/mm/nohash/fsl_booke.c            |  7 ++-
>>>  8 files changed, 105 insertions(+), 15 deletions(-)
>>>  create mode 100644 arch/powerpc/kernel/kaslr_booke.c
>>>
>
> [...]
>
>>> diff --git a/arch/powerpc/kernel/kaslr_booke.c
>>> b/arch/powerpc/kernel/kaslr_booke.c
>>> new file mode 100644
>>> index 000000000000..f8dc60534ac1
>>> --- /dev/null
>>> +++ b/arch/powerpc/kernel/kaslr_booke.c
>>
>> Shouldn't this go under arch/powerpc/mm/nohash?
>>
>>> +/*
>>> + * To see if we need to relocate the kernel to a random offset
>>> + * void *dt_ptr - address of the device tree
>>> + * phys_addr_t size - size of the first memory block
>>> + */
>>> +notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
>>> +{
>>> +	unsigned long tlb_virt;
>>> +	phys_addr_t tlb_phys;
>>> +	unsigned long offset;
>>> +	unsigned long kernel_sz;
>>> +
>>> +	kernel_sz = (unsigned long)_end - KERNELBASE;
>>
>> Why KERNELBASE and not kernstart_addr?
>>
>>> +
>>> +	offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
>>> +
>>> +	if (offset == 0)
>>> +		return;
>>> +
>>> +	kernstart_virt_addr += offset;
>>> +	kernstart_addr += offset;
>>> +
>>> +	is_second_reloc = 1;
>>> +
>>> +	if (offset >= SZ_64M) {
>>> +		tlb_virt = round_down(kernstart_virt_addr, SZ_64M);
>>> +		tlb_phys = round_down(kernstart_addr, SZ_64M);
>>
>> If kernstart_addr wasn't 64M-aligned before adding offset, then
>> "offset >= SZ_64M" is not necessarily going to detect when you've
>> crossed a mapping boundary.
>>
>>> +
>>> +		/* Create kernel map to relocate in */
>>> +		create_tlb_entry(tlb_phys, tlb_virt, 1);
>>> +	}
>>> +
>>> +	/* Copy the kernel to its new location and run */
>>> +	memcpy((void *)kernstart_virt_addr, (void *)KERNELBASE, kernel_sz);
>>> +
>>> +	reloc_kernel_entry(dt_ptr, kernstart_virt_addr);
>>> +}
>>
>> After copying, call flush_icache_range() on the destination.
>
> Function copy_and_flush() does the copy and the flush. I think it should
> be used instead of memcpy() + flush_icache_range().
>

Hi Christophe,

Thanks for the suggestion. But I think copy_and_flush() is not included in
the fsl_booke code; maybe we should move this function to misc.S?

> Christophe
>
> .
>
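
For reference, a minimal untested sketch of the boundary check Scott seems to
be asking for, reusing the names from the quoted kaslr_early_init() hunk;
comparing the 64M-aligned base before and after adding the offset is my
reading of his comment, not something taken from the patch:

	/*
	 * Untested sketch: instead of testing the size of the offset,
	 * compare the 64M-aligned region before and after relocation, so
	 * the new TLB1 entry is created whenever the kernel leaves the
	 * currently mapped region, even if kernstart_addr was not
	 * 64M-aligned to begin with.
	 */
	if (round_down(kernstart_addr, SZ_64M) !=
	    round_down(kernstart_addr - offset, SZ_64M)) {
		tlb_virt = round_down(kernstart_virt_addr, SZ_64M);
		tlb_phys = round_down(kernstart_addr, SZ_64M);

		/* Create kernel map to relocate in */
		create_tlb_entry(tlb_phys, tlb_virt, 1);
	}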
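
And a sketch of the icache flush Scott asks for after the copy, as it could
look at the end of kaslr_early_init(); whether copy_and_flush() should be
used instead (and moved to misc.S) is the open question above:

	/* Copy the kernel to its new location and run */
	memcpy((void *)kernstart_virt_addr, (void *)KERNELBASE, kernel_sz);

	/*
	 * Flush the instruction cache over the copied range so the
	 * relocated kernel does not execute stale icache lines.
	 */
	flush_icache_range(kernstart_virt_addr,
			   kernstart_virt_addr + kernel_sz);

	reloc_kernel_entry(dt_ptr, kernstart_virt_addr);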