Message-ID: <20191206050642.GD16318@brightrain.aerifal.cx>
Date: Fri, 6 Dec 2019 00:06:42 -0500
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: New malloc - first preview

On Sat, Nov 30, 2019 at 05:11:50PM -0500, Rich Felker wrote:
> On Thu, Nov 28, 2019 at 04:56:42PM -0500, Rich Felker wrote:
> > Work on the new malloc is well underway, and I have a draft version
> > now public at:
> > 
> > https://github.com/richfelker/mallocng-draft
> > 
> > Some highlights:
> 
> And some updates:
> 
> [...]
> 
> Strategy for creating new groups and how soon to reuse freed memory
> probably still has a lot of suboptimal properties, but I think the new
> allocator is usable/testable at this point.

This has been improved a lot, to the point that it's near what I'd
call finished now. Logic is now in place to limit early growth of the
heap, so that a small number of allocations in a new size class don't
cause eager allocation of large-ish groups. The main change I still
want to make here is allowing direct mmap of objects well below
MMAP_THRESHOLD (~128k), likely all the way down to ~8x PAGESIZE (where
waste from page alignment is bounded by 12.5%). With 4k pages, that's
the last 8 (of 48) size classes.
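
To make the 12.5% figure concrete, here's a quick back-of-the-envelope
sketch (not mallocng code; PAGESIZE is hard-coded to 4k just for
illustration): rounding a request up to whole pages wastes at most
PAGESIZE-1 bytes, so once the request is at least 8*PAGESIZE the
relative waste is at most 1/8.

#include <stdio.h>

#define PAGESIZE 4096	/* assumed 4k pages for illustration */

int main(void)
{
	size_t n = 8*PAGESIZE + 1;	/* smallest size needing a 9th page */
	size_t mapped = (n + PAGESIZE-1) & -(size_t)PAGESIZE;
	size_t waste = mapped - n;	/* at most PAGESIZE-1 bytes */
	printf("request %zu -> map %zu bytes, waste %zu (%.2f%%)\n",
	       n, mapped, waste, 100.0*waste/n);
	return 0;
}

This prints roughly 12.5% for the worst case at 8 pages; below that,
the bound gets worse, which is why ~8x PAGESIZE is the cutoff.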

In some sense, we could just have MMAP_THRESHOLD be smaller, but (1)
that breaks archs with large pages, and (2) it likely leads to bad
fragmentation when you have large numbers of such objects. Instead,
what I'd like to do is simply allow single-slot groups for these size
classes, mapping only as many pages as they need (as opposed to
rounding up to the size class), like for large mmap-serviced
allocations, but still accounting usage so that we can choose to
allocate large groups once usage is high.
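
A rough sketch of the idea (hypothetical code, not what's in
mallocng-draft; the usage counter, the cutoff value, and alloc_slot()
itself are made up here for illustration):

#include <stddef.h>
#include <sys/mman.h>

#define PAGESIZE 4096	/* assumed 4k pages */
#define NCLASSES 48

static size_t class_usage[NCLASSES];	/* assumed per-class usage accounting */
static const size_t usage_cutoff = 8;	/* assumed threshold, not from the post */

/* sc: size class index; need: bytes the object actually needs;
 * class_size: the full slot size for this class */
void *alloc_slot(int sc, size_t need, size_t class_size)
{
	size_t len;
	if (sc >= NCLASSES-8 && class_usage[sc] < usage_cutoff) {
		/* low usage: single-slot group, mapping only the pages
		 * this object needs rather than the full class size */
		len = (need + PAGESIZE-1) & -(size_t)PAGESIZE;
	} else {
		/* usage is high: stand-in for allocating a normal
		 * multi-slot group for this class */
		len = (class_size + PAGESIZE-1) & -(size_t)PAGESIZE;
	}
	void *p = mmap(0, len, PROT_READ|PROT_WRITE,
		MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) return 0;
	class_usage[sc]++;
	return p;
}

The point is just that the single-slot path pays only page-granularity
overhead while usage is low, but the accounting still lets the
allocator switch to normal groups once the class is clearly in use.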

Rich
