Message-ID: <20110719065322.GA3228@albatros>
Date: Tue, 19 Jul 2011 10:53:22 +0400
From: Vasiliy Kulikov <segoon@...nwall.com>
To: Christoph Lameter <cl@...ux.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
	kernel-hardening@...ts.openwall.com, Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>, "H. Peter Anvin" <hpa@...or.com>,
	x86@...nel.org, Arnd Bergmann <arnd@...db.de>,
	Pekka Enberg <penberg@...nel.org>, Matt Mackall <mpm@...enic.com>,
	Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
	linux-arch@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC v2] implement SL*B and stack usercopy runtime checks

On Mon, Jul 18, 2011 at 16:18 -0500, Christoph Lameter wrote:
> On Mon, 18 Jul 2011, Vasiliy Kulikov wrote:
>
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -3844,6 +3844,40 @@ unsigned int kmem_cache_size(struct kmem_cache *cachep)
> >  EXPORT_SYMBOL(kmem_cache_size);
> >
> >  /*
> > + * Returns false if and only if [ptr; ptr+len) touches the slab,
> > + * but breaks object boundaries.  It doesn't check whether the
> > + * accessed object is actually allocated.
> > + */
> > +bool slab_access_ok(const void *ptr, unsigned long len)
> > +{
> > +	struct page *page;
> > +	struct kmem_cache *cachep = NULL;
>
> Why = NULL?

Indeed, redundant.

> > +	struct slab *slabp;
> > +	unsigned int objnr;
> > +	unsigned long offset;
> > +
> > +	if (!len)
> > +		return true;
> > +	if (!virt_addr_valid(ptr))
> > +		return true;
> > +	page = virt_to_head_page(ptr);
> > +	if (!PageSlab(page))
> > +		return true;
> > +
> > +	cachep = page_get_cache(page);
> > +	slabp = page_get_slab(page);
> > +	objnr = obj_to_index(cachep, slabp, (void *)ptr);
> > +	BUG_ON(objnr >= cachep->num);
> > +	offset = (const char *)ptr - obj_offset(cachep) -
> > +		 (const char *)index_to_obj(cachep, slabp, objnr);
> > +	if (offset <= obj_size(cachep) && len <= obj_size(cachep) - offset)
> > +		return true;
> > +
> > +	return false;
> > +}
> > +EXPORT_SYMBOL(slab_access_ok);
> > +
> > +/*
> >   * This initializes kmem_list3 or resizes various caches for all nodes.
> >   */
> >  static int alloc_kmemlist(struct kmem_cache *cachep, gfp_t gfp)

> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2623,6 +2623,34 @@ unsigned int kmem_cache_size(struct kmem_cache *s)
> >  }
> >  EXPORT_SYMBOL(kmem_cache_size);
> >
> > +/*
> > + * Returns false if and only if [ptr; ptr+len) touches the slab,
> > + * but breaks object boundaries.  It doesn't check whether the
> > + * accessed object is actually allocated.
> > + */
> > +bool slab_access_ok(const void *ptr, unsigned long len)
> > +{
> > +	struct page *page;
> > +	struct kmem_cache *s = NULL;
>
> No need to assign NULL.

Ditto.

> > +	unsigned long offset;
> > +
> > +	if (len == 0)
> > +		return true;
> > +	if (!virt_addr_valid(ptr))
> > +		return true;
> > +	page = virt_to_head_page(ptr);
> > +	if (!PageSlab(page))
> > +		return true;
> > +
> > +	s = page->slab;
> > +	offset = ((const char *)ptr - (const char *)page_address(page)) % s->size;
>
> Are the casts necessary? Both are pointers to void *

Is it normal kernel style to use void * pointer arithmetic?

> > +	if (offset <= s->objsize && len <= s->objsize - offset)
>
> If offset == s->objsize then we access the first byte after the object.

Well, then objsize - offset == 0, so len would have to be 0 to pass the
right-hand side of the && check, and the (len == 0) case is already
handled above.  But yes, for better readability it should be "<".
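I.e., a minimal sketch of the fixed SLUB check (untested; field names as
in the patch above):

	/*
	 * len == 0 has already returned true above, so "<" loses no
	 * cases -- it only makes explicit that ptr must point strictly
	 * inside the object.
	 */
	if (offset < s->objsize && len <= s->objsize - offset)
		return true;

	return false;

For context, the intended caller is the hardened usercopy path; a rough
illustrative sketch (not the exact hunk from the patch, the helper name
is made up):

	static inline unsigned long
	hardened_copy_to_user(void __user *to, const void *from,
			      unsigned long n)
	{
		/* Refuse copies that would cross a slab object boundary. */
		if (unlikely(!slab_access_ok(from, n)))
			return n;	/* report "nothing copied" */
		return copy_to_user(to, from, n);
	}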
Thank you,

--
Vasiliy Kulikov
http://www.openwall.com - bringing security into open computing environments