Message-Id: <1467843928-29351-10-git-send-email-keescook@chromium.org>
Date: Wed, 6 Jul 2016 15:25:28 -0700
From: Kees Cook <keescook@...omium.org>
To: linux-kernel@...r.kernel.org
Cc: Kees Cook <keescook@...omium.org>, Rik van Riel <riel@...hat.com>,
	Casey Schaufler <casey@...aufler-ca.com>, PaX Team <pageexec@...email.hu>,
	Brad Spengler <spender@...ecurity.net>, Russell King <linux@...linux.org.uk>,
	Catalin Marinas <catalin.marinas@....com>, Will Deacon <will.deacon@....com>,
	Ard Biesheuvel <ard.biesheuvel@...aro.org>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	Michael Ellerman <mpe@...erman.id.au>, Tony Luck <tony.luck@...el.com>,
	Fenghua Yu <fenghua.yu@...el.com>, "David S. Miller" <davem@...emloft.net>,
	x86@...nel.org, Christoph Lameter <cl@...ux.com>,
	Pekka Enberg <penberg@...nel.org>, David Rientjes <rientjes@...gle.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>, Andrew Morton <akpm@...ux-foundation.org>,
	Andy Lutomirski <luto@...nel.org>, Borislav Petkov <bp@...e.de>,
	Mathias Krause <minipli@...glemail.com>, Jan Kara <jack@...e.cz>,
	Vitaly Wool <vitalywool@...il.com>, Andrea Arcangeli <aarcange@...hat.com>,
	Dmitry Vyukov <dvyukov@...gle.com>, Laura Abbott <labbott@...oraproject.org>,
	linux-arm-kernel@...ts.infradead.org, linux-ia64@...r.kernel.org,
	linuxppc-dev@...ts.ozlabs.org, sparclinux@...r.kernel.org,
	linux-arch@...r.kernel.org, linux-mm@...ck.org,
	kernel-hardening@...ts.openwall.com
Subject: [PATCH 9/9] mm: SLUB hardened usercopy support

Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
SLUB allocator to catch any copies that may span objects. Based on code
from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@...omium.org>
---
 init/Kconfig |  1 +
 mm/slub.c    | 27 +++++++++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/init/Kconfig b/init/Kconfig
index 798c2020ee7c..1c4711819dfd 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1765,6 +1765,7 @@ config SLAB

 config SLUB
 	bool "SLUB (Unqueued Allocator)"
+	select HAVE_HARDENED_USERCOPY_ALLOCATOR
 	help
 	   SLUB is a slab allocator that minimizes cache line usage
 	   instead of managing queues of cached objects (SLAB approach).

diff --git a/mm/slub.c b/mm/slub.c
index 825ff4505336..0c8ace04f075 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3614,6 +3614,33 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 EXPORT_SYMBOL(__kmalloc_node);
 #endif

+#ifdef CONFIG_HARDENED_USERCOPY
+/*
+ * Rejects objects that are incorrectly sized.
+ *
+ * Returns NULL if check passes, otherwise const char * to name of cache
+ * to indicate an error.
+ */
+const char *__check_heap_object(const void *ptr, unsigned long n,
+				struct page *page)
+{
+	struct kmem_cache *s;
+	unsigned long offset;
+
+	/* Find object. */
+	s = page->slab_cache;
+
+	/* Find offset within object. */
+	offset = (ptr - page_address(page)) % s->size;
+
+	/* Allow address range falling entirely within object size. */
+	if (offset <= s->object_size && n <= s->object_size - offset)
+		return NULL;
+
+	return s->name;
+}
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 static size_t __ksize(const void *object)
 {
 	struct page *page;
--
2.7.4
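
For readers who want to see the bounds arithmetic in isolation, below is a
minimal stand-alone C sketch of the check the patch adds to mm/slub.c. The
struct fake_cache type, the check_heap_object() wrapper, and the demo cache
geometry (stride 128 bytes, usable object size 96 bytes) are hypothetical
stand-ins invented for this example; only the offset/length comparison
mirrors the patch, and none of this is kernel code.

/*
 * Stand-alone sketch of the SLUB usercopy bounds check: a copy of n bytes
 * starting at ptr is allowed only if it falls entirely within one object.
 */
#include <stdio.h>
#include <stdint.h>

/* Hypothetical stand-ins for the kmem_cache fields the patch reads. */
struct fake_cache {
	const char *name;
	unsigned long size;        /* per-object stride within the slab page */
	unsigned long object_size; /* usable bytes per object */
};

/* Mirrors the logic of __check_heap_object(): NULL means the copy is OK. */
static const char *check_heap_object(const struct fake_cache *s,
				     uintptr_t page_addr, uintptr_t ptr,
				     unsigned long n)
{
	/* Reduce the pointer to its offset inside a single object. */
	unsigned long offset = (ptr - page_addr) % s->size;

	if (offset <= s->object_size && n <= s->object_size - offset)
		return NULL;          /* copy stays inside one object */

	return s->name;               /* copy would run past the object */
}

int main(void)
{
	struct fake_cache s = { .name = "demo-cache", .size = 128,
				.object_size = 96 };
	uintptr_t page = 0x10000;

	/* 32 bytes at offset 64 of the second object: 64 + 32 <= 96, allowed. */
	printf("%s\n", check_heap_object(&s, page, page + 128 + 64, 32) ?
	       "rejected" : "allowed");

	/* 64 bytes at the same offset: 64 + 64 > 96, rejected. */
	printf("%s\n", check_heap_object(&s, page, page + 128 + 64, 64) ?
	       "rejected" : "allowed");

	return 0;
}

The point of computing offset modulo s->size (the full per-object stride,
including any metadata and padding) is that a pointer anywhere in the slab
page is first reduced to its position within a single object, and only then
is the requested length compared against the usable object size.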