Message-ID: <alpine.DEB.2.00.1107181610350.31576@router.home>
Date: Mon, 18 Jul 2011 16:18:05 -0500 (CDT)
From: Christoph Lameter <cl@...ux.com>
To: Vasiliy Kulikov <segoon@...nwall.com>
cc: Linus Torvalds <torvalds@...ux-foundation.org>,
	kernel-hardening@...ts.openwall.com,
	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
	Arnd Bergmann <arnd@...db.de>, Pekka Enberg <penberg@...nel.org>,
	Matt Mackall <mpm@...enic.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
	linux-mm@...ck.org
Subject: Re: [RFC v2] implement SL*B and stack usercopy runtime checks

On Mon, 18 Jul 2011, Vasiliy Kulikov wrote:

> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -3844,6 +3844,40 @@ unsigned int kmem_cache_size(struct kmem_cache *cachep)
>  EXPORT_SYMBOL(kmem_cache_size);
>
>  /*
> + * Returns false if and only if [ptr; ptr+len) touches the slab,
> + * but breaks objects boundaries.  It doesn't check whether the
> + * accessed object is actually allocated.
> + */
> +bool slab_access_ok(const void *ptr, unsigned long len)
> +{
> +	struct page *page;
> +	struct kmem_cache *cachep = NULL;

Why = NULL?

> +	struct slab *slabp;
> +	unsigned int objnr;
> +	unsigned long offset;
> +
> +	if (!len)
> +		return true;
> +	if (!virt_addr_valid(ptr))
> +		return true;
> +	page = virt_to_head_page(ptr);
> +	if (!PageSlab(page))
> +		return true;
> +
> +	cachep = page_get_cache(page);
> +	slabp = page_get_slab(page);
> +	objnr = obj_to_index(cachep, slabp, (void *)ptr);
> +	BUG_ON(objnr >= cachep->num);
> +	offset = (const char *)ptr - obj_offset(cachep) -
> +		 (const char *)index_to_obj(cachep, slabp, objnr);
> +	if (offset <= obj_size(cachep) && len <= obj_size(cachep) - offset)
> +		return true;
> +
> +	return false;
> +}
> +EXPORT_SYMBOL(slab_access_ok);
> +
> +/*
>   * This initializes kmem_list3 or resizes various caches for all nodes.
>   */
>  static int alloc_kmemlist(struct kmem_cache *cachep, gfp_t gfp)

> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2623,6 +2623,34 @@ unsigned int kmem_cache_size(struct kmem_cache *s)
>  }
>  EXPORT_SYMBOL(kmem_cache_size);
>
> +/*
> + * Returns false if and only if [ptr; ptr+len) touches the slab,
> + * but breaks objects boundaries.  It doesn't check whether the
> + * accessed object is actually allocated.
> + */
> +bool slab_access_ok(const void *ptr, unsigned long len)
> +{
> +	struct page *page;
> +	struct kmem_cache *s = NULL;

No need to assign NULL.

> +	unsigned long offset;
> +
> +	if (len == 0)
> +		return true;
> +	if (!virt_addr_valid(ptr))
> +		return true;
> +	page = virt_to_head_page(ptr);
> +	if (!PageSlab(page))
> +		return true;
> +
> +	s = page->slab;
> +	offset = ((const char *)ptr - (const char *)page_address(page)) % s->size;

Are the casts necessary? Both are pointers to void *.

> +	if (offset <= s->objsize && len <= s->objsize - offset)

If offset == s->objsize then we access the first byte after the object.