Message-Id: <20191004132555.202973-1-glider@google.com>
Date: Fri,  4 Oct 2019 15:25:54 +0200
From: Alexander Potapenko <glider@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>, Christoph Lameter <cl@...ux.com>
Cc: Alexander Potapenko <glider@...gle.com>, Thibaut Sautereau <thibaut@...tereau.fr>, 
	Kees Cook <keescook@...omium.org>, Laura Abbott <labbott@...hat.com>, linux-mm@...ck.org, 
	kernel-hardening@...ts.openwall.com
Subject: [PATCH v1 1/2] mm: slub: init_on_free=1 should wipe freelist ptr for
 bulk allocations

slab_alloc_node() already zeroes out the freelist pointer when
init_on_free is on.
Thibaut Sautereau noticed that the same needs to be done for
kmem_cache_alloc_bulk(), which performs its allocations separately.
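
For illustration only, not part of the patch: a minimal userspace C
sketch of the failure mode. fake_cache and wipe_freelist_ptr() are
hypothetical stand-ins for SLUB's struct kmem_cache and
maybe_wipe_obj_freeptr(). With init_on_free the object is zeroed at
free time, but SLUB then stores the next-free pointer inside the
object at s->offset, so the allocation path has to wipe that word
again:

	#include <stdio.h>
	#include <string.h>

	struct fake_cache { size_t offset; };

	static void wipe_freelist_ptr(const struct fake_cache *s, void *obj)
	{
		/* Zero the one pointer-sized word holding the freelist link. */
		if (obj)
			memset((char *)obj + s->offset, 0, sizeof(void *));
	}

	int main(void)
	{
		struct fake_cache s = { .offset = 0 };
		unsigned char obj[32];
		void *next = (void *)0xdeadbeef;

		memset(obj, 0, sizeof(obj));                 /* init_on_free wipe at free time */
		memcpy(obj + s.offset, &next, sizeof(next)); /* freelist link stored in the object */

		wipe_freelist_ptr(&s, obj);                  /* what this patch adds at alloc time */
		printf("freelist word after alloc: %p\n", *(void **)obj);
		return 0;
	}

Without the final wipe, the caller would receive a stale pointer value
in an object that init_on_free promised to hand out zeroed.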

kmem_cache_alloc_bulk() is currently used in two places in the kernel,
so this change is unlikely to have a major performance impact.
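
For context, a hedged sketch of what a kmem_cache_alloc_bulk() caller
looks like; "cache", the array size, and the error handling here are
assumptions rather than code taken from either in-tree user:

	void *objs[16];
	int n;

	/* Returns the number of objects allocated, or 0 on failure. */
	n = kmem_cache_alloc_bulk(cache, GFP_KERNEL, ARRAY_SIZE(objs), objs);
	if (!n)
		return -ENOMEM;
	/* ... use objs[0..n-1] ... */
	kmem_cache_free_bulk(cache, n, objs);

Each object handed back by this path is what the hunks below now pass
through maybe_wipe_obj_freeptr().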

SLAB doesn't require a similar change, as auto-initialization makes the
allocator store the freelist pointers off-slab.

Reported-by: Thibaut Sautereau <thibaut@...tereau.fr>
Reported-by: Kees Cook <keescook@...omium.org>
Signed-off-by: Alexander Potapenko <glider@...gle.com>
Fixes: 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options")
To: Andrew Morton <akpm@...ux-foundation.org>
To: Christoph Lameter <cl@...ux.com>
Cc: Laura Abbott <labbott@...hat.com>
Cc: linux-mm@...ck.org
Cc: kernel-hardening@...ts.openwall.com
---
 mm/slub.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 8834563cdb4b..fe90bed40eb3 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2669,6 +2669,16 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	return p;
 }
 
+/*
+ * If the object has been wiped upon free, make sure it's fully initialized by
+ * zeroing out the freelist pointer.
+ */
+static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s, void *obj)
+{
+	if (unlikely(slab_want_init_on_free(s)) && obj)
+		memset((void *)((char *)obj + s->offset), 0, sizeof(void *));
+}
+
 /*
  * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
  * have the fastpath folded into their functions. So no function call
@@ -2757,12 +2767,8 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 		prefetch_freepointer(s, next_object);
 		stat(s, ALLOC_FASTPATH);
 	}
-	/*
-	 * If the object has been wiped upon free, make sure it's fully
-	 * initialized by zeroing out freelist pointer.
-	 */
-	if (unlikely(slab_want_init_on_free(s)) && object)
-		memset(object + s->offset, 0, sizeof(void *));
+
+	maybe_wipe_obj_freeptr(s, object);
 
 	if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
 		memset(object, 0, s->object_size);
@@ -3176,10 +3182,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 				goto error;
 
 			c = this_cpu_ptr(s->cpu_slab);
+			maybe_wipe_obj_freeptr(s, p[i]);
+
 			continue; /* goto for-loop */
 		}
 		c->freelist = get_freepointer(s, object);
 		p[i] = object;
+		maybe_wipe_obj_freeptr(s, p[i]);
 	}
 	c->tid = next_tid(c->tid);
 	local_irq_enable();
-- 
2.23.0.581.g78d2f28ef7-goog
