Message-ID: <20180416223918.GA3094@brightrain.aerifal.cx>
Date: Mon, 16 Apr 2018 18:39:18 -0400
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: [PATCH 2/2] ldso, malloc: implement reclaim_gaps via
 __malloc_donate

On Mon, Apr 16, 2018 at 08:54:36PM +0300, Alexander Monakov wrote:
> Split 'free' into unmap_chunk and bin_chunk, use the latter to introduce
> __malloc_donate and use it in reclaim_gaps instead of calling 'free'.
> ---
> v2:
> * drop attribute-cold annotation
> * adjust __malloc_donate:
> 	 * drop comparison against MMAP_THRESHOLD
> 	 * check donated chunk size just once
> 
>  ldso/dynlink.c      | 15 +++-----------
>  src/malloc/malloc.c | 60 +++++++++++++++++++++++++++++++++++++----------------
>  2 files changed, 45 insertions(+), 30 deletions(-)
> 
> [...]
> 
> +void __malloc_donate(char *start, char *end)
> +{
> +	ssize_t align_start_up = (SIZE_ALIGN - 1) & -(uintptr_t)start;
> +	ssize_t align_end_down = (SIZE_ALIGN - 1) &  (uintptr_t)end;
> +	ssize_t chunk_size = end - start - (OVERHEAD + align_start_up + align_end_down);
> +	if (chunk_size < OVERHEAD + SIZE_ALIGN) return;
> +	start += align_start_up;
> +	end   -= align_end_down;
> +
> +	struct chunk *c = MEM_TO_CHUNK(start + OVERHEAD), *n = MEM_TO_CHUNK(end);
> +	c->psize = n->csize = C_INUSE;
> +	c->csize = n->psize = C_INUSE | chunk_size;
> +	bin_chunk(c);
> +}

I think this version of the size logic is harder to read than the old
one, and inconsistent with how malloc does accounting internally. In
the notation used everywhere else, "chunk size" always includes
OVERHEAD plus the usable space; it's the distance between the chunk
header and the next chunk header.
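
For reference, the convention I mean looks roughly like this
(paraphrasing the definitions in src/malloc/malloc.c from memory;
check the tree for the exact forms):

	struct chunk {
		size_t psize, csize;
		struct chunk *next, *prev;
	};

	#define OVERHEAD (2*sizeof(size_t))
	#define SIZE_ALIGN (4*sizeof(size_t))
	#define CHUNK_SIZE(c) ((c)->csize & -2)
	#define MEM_TO_CHUNK(p) (struct chunk *)((char *)(p) - OVERHEAD)
	#define CHUNK_TO_MEM(c) (void *)((char *)(c) + OVERHEAD)

So CHUNK_SIZE(c) covers the OVERHEAD-byte header plus the usable
space; it's exactly the offset from c's header to the next chunk's
header, never the usable space alone.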

I also don't like the use of signed arithmetic with sizes. I thought
all
the values were nonnegative anyway, so I wasn't sure why you
introduced signed types, but apparently chunk_size can be negative and
the comparison against OVERHEAD+SIZE_ALIGN relies on the RHS being
signed (and on the unsigned result of the expression initializing
chunk_size getting coerced to signed) to work.

I think the above code may also be wrong. start is aligned mod
SIZE_ALIGN, so start+OVERHEAD is misaligned, and therefore not a valid
argument to MEM_TO_CHUNK. Continuing further along this line of
reasoning, aligning start up mod SIZE_ALIGN like you're doing is not
sufficient. There must also be space for a header below the aligned
point.
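
To put numbers on that (assuming a 64-bit target, so sizeof(size_t)
== 8, OVERHEAD == 16, and SIZE_ALIGN == 32):

	start % 32 == 0                 after align_start_up is applied
	(start + OVERHEAD) % 32 == 16

so the chunk produced by MEM_TO_CHUNK(start + OVERHEAD) has a mem
pointer that isn't SIZE_ALIGN-aligned, which is never something
malloc itself would hand out.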

What about the following version, based on your original?

	// Ensure that adjustment can't move start past end.
	if (end-start < SIZE_ALIGN + OVERHEAD) return;

	// Reserve space for header below start and then align.
	start += OVERHEAD + SIZE_ALIGN - 1;
	start -= (uintptr_t)start & (SIZE_ALIGN - 1);
	end -= (uintptr_t)end & (SIZE_ALIGN - 1);

	// At this point, end-start is a multiple of SIZE_ALIGN,
	// and is thus a valid size unless it's zero.
	if (start == end) return;

	// Set up chunk headers and bin the new free chunk.
	struct chunk *c = MEM_TO_CHUNK(start), *n = MEM_TO_CHUNK(end);
	c->psize = n->csize = C_INUSE;
	c->csize = n->psize = C_INUSE | (end - start);
	bin_chunk(c);

I don't see any clean way to write the first size check such that it
encompasses the second one, and waiting until the second one to do
any checking risks overflow: the adjustments could move start past
end, giving a wrong result when computing end-start. That could be
avoided by checking end-start < PTRDIFF_MAX explicitly, but then we
reintroduce two branches, and it seems just as expensive as, and less
obvious than, doing it the above way.
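
Spelled out, that rejected ordering would be something like this
(just a sketch to show why it doesn't come out ahead; the unsigned
subtraction stands in for the explicit end-start < PTRDIFF_MAX test):

	// Adjust first, with no up-front range check.
	start += OVERHEAD + SIZE_ALIGN - 1;
	start -= (uintptr_t)start & (SIZE_ALIGN - 1);
	end -= (uintptr_t)end & (SIZE_ALIGN - 1);

	// Still two branches: if start moved past end, the difference
	// computed in unsigned arithmetic wraps to a value above
	// PTRDIFF_MAX; and the empty gap still has to be rejected.
	if ((uintptr_t)end - (uintptr_t)start >= PTRDIFF_MAX) return;
	if (start == end) return;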

Rich
