Message-ID: <20200410154848.GP11469@brightrain.aerifal.cx>
Date: Fri, 10 Apr 2020 11:48:48 -0400
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: "Expected behavior" for mallocng @ low usage

On Tue, Apr 07, 2020 at 10:32:29PM -0400, Rich Felker wrote:
> I figured as I tune and prepare to integrate the new malloc, it would
> be helpful to have a description of what users should see in programs
> that make only small use of malloc, since this is the "hard case" to
> get right and since any reports of unexpected behavior would be really
> useful. For simplicity I'm going to assume 4k pages.
>
> If the program makes at least one "very small" (<= 108 bytes)
> allocation, or at least one allocation of certain other sizes smaller
> than the page size, you should expect to see a single-page mmap that's
> divided up something like a buddy allocator. At the top level it
> consists of 2 2032-byte slots, one of which will be broken into two
> 1008-byte slots, one of which will be broken into 2 496-byte slots.
> For "very small" sizes, one of these will in turn be broken up into N
> equal-sized slots for the requested size class (N=30, 15, 10, 7, 6, 5,
> or 4).
>
> If the page is fully broken up into pairs of 496-byte slots, there are
> 8 such slots, and only 7 "very small" size classes, so under "very low
> usage", all such objects should fit in the single page, even if you
> use a multitude of different sizes.
>
> For the next 8 size classes (2 doublings) up to 492 bytes, and
> depending on divisibility, a group of 2, 3, 5, or 7 slots will be
> created in a slot of size 496, 1008, or 2032. These can use the same
> page as the above smaller sizes if there's room available.
>
> Above this size, coarse size classing is used at first (until usage
> reaches a threshold) to avoid allocating a large number of many-slot
> groups of slightly different sizes that might never be filled. The
> next doubling consists only of ranges [493,668] and [669,1004],
> allocated in slots of size 2032 in groups of 3 and 2, respectively;
> these can use any existing free slot of size 2032. (Once usage has
> reached a threshold such that adding a group of 5 or 7 slots doesn't
> cause a dramatic relative increase in total usage, finer-grained size
> classes will be used.)
>
> At higher sizes, groups of slots are not allocated inside a larger
> slot, but as mmaps consisting of a power-of-two number of pages, which
> will be split N ways, initially with N=7, 3, 5, or 2 depending on
> divisibility. As usage increases, so does the N (doubling the number
> of pages used), which reduces potential for vm space fragmentation and
> increases the number of slots that can be allocated/freed with fast
> paths manipulating free masks.
>
> At present, coarse size classing is used for all these at first, which
> can result in significant "waste" but avoids preallocating large (5 or
> 7) counts of slots that might not ever be used. This is what I'm
> presently working to improve by allowing direct individual mmaps in
> cases where they can be efficient.
>
> Changes in this area are likely coming soon. Main thing I'm trying to
> solve still is getting eager allocation down even further so that
> small programs don't grow significantly when switching to mallocng.
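
Before getting to the changes, a quick back-of-the-envelope sketch of the arithmetic behind the quoted "very small" slot counts. This is illustrative only, not mallocng source; the 16-byte stride granularity, 16-byte group header, and 4 bytes of in-band overhead per slot are assumptions chosen because they reproduce the numbers above:

#include <stdio.h>

int main(void)
{
	/* Smallest slot produced by the buddy-style split of the page:
	 * 4096 -> 2032 -> 1008 -> 496 (sizes from the quoted mail). */
	int slot = 496;
	int header = 16;            /* assumed per-group header */
	int avail = slot - header;  /* 480 bytes of slot storage */

	/* Assumed 16-byte stride granularity and 4 bytes of in-band
	 * overhead per allocation. */
	for (int stride = 16; stride <= 112; stride += 16) {
		printf("class <= %3d bytes: %2d slots in a 496-byte group\n",
			stride - 4, avail / stride);
	}
	return 0;
}

Run, this prints counts of 30, 15, 10, 7, 6, 5, and 4 for classes up to 108 bytes, matching the figures quoted above.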

I've made some changes that should improve this significantly:

https://github.com/richfelker/mallocng-draft/commits/2b676e6859ca241b62ed133fb1a8828509af29bf

For the "higher sizes" above, the minimums of 7, 3, 5, or 2 are reduced to 3 or 5, 1 or 2, 3, or 1, respectively, conditional on the size being sufficient that these lower minimum counts approximate an integral number of pages (the requirement that it be a power-of-two number of pages is relaxed). In particular, with coarse size classing, the initial/minimal count will never be more than 2, and will always be 1 if the slot size is at least 4 pages minus epsilon.

Also, individual mmap servicing of allocations in this range is now possible. This differs from just allocating a group with count 1 in that, if the requested size was significantly less than the slot size, only the requested part (rounded up to whole pages) is mapped. Such allocations can't be kept and reused, since they might not suffice for a future allocation in the same size class. For this reason, individual mmap shouldn't be used for sizes where "unmap and remap" is likely to dominate the time cost. It also shouldn't be used except in a small bounded number of instances, to avoid vm space fragmentation (and hitting the vma limit). Details of how these conditions are implemented can be seen in the source.
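
To sketch the shape of that individual-mmap path (again illustrative only, not the actual mallocng code; PGSZ, the thresholds, the counter, and the function names here are placeholders of mine, and the real conditions are in the linked commits):

#include <stddef.h>
#include <sys/mman.h>

#define PGSZ 4096

static int live_individual_maps; /* assumed small cap, e.g. 8 */

/* Crude stand-in for the decision described above: only service the
 * request with its own mapping if doing so saves at least a page
 * versus the class's slot size, the size is large enough that
 * unmap-and-remap cost won't dominate, and we haven't already created
 * too many standalone mappings. */
static int individual_mmap_ok(size_t req, size_t slotsize)
{
	size_t mapped = (req + PGSZ - 1) & -(size_t)PGSZ;
	return mapped + PGSZ <= slotsize
		&& req >= 4*PGSZ
		&& live_individual_maps < 8;
}

/* Map only the pages the request needs; such a mapping is unmapped on
 * free rather than cached, since it may be smaller than the nominal
 * slot size and thus unusable for a later allocation in the class. */
static void *individual_mmap(size_t req)
{
	size_t len = (req + PGSZ - 1) & -(size_t)PGSZ;
	void *p = mmap(0, len, PROT_READ|PROT_WRITE,
		MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) return 0;
	live_individual_maps++;
	return p;
}

The actual thresholds and bookkeeping differ; see the commits linked above for the real logic.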