Message-ID: <adc8516.46d8.18364764f17.Coremail.00107082@163.com>
Date: Thu, 22 Sep 2022 17:10:18 +0800 (CST)
From: 王志强 <00107082@....com>
To: musl@...ts.openwall.com
Cc: dalias@...c.org, "Quentin Rameau" <quinq@...th.space>,
"Florian Weimer" <fweimer@...hat.com>
Subject: Re: The heap memory performance (malloc/free/realloc) is significantly degraded in musl 1.2 (compared to 1.1)
Update:
I changed the test code to call malloc_usable_size, just to verify the impact of this function call:
#include <stdlib.h>
#include <malloc.h>   /* malloc_usable_size */

#define MAXF 4096
void *tobefree[MAXF];

int main() {
    long long i;
    int v, k, j;
    size_t s, c = 0;
    char *p;
    for (i = 0; i < 100000000L; i++) {
        v = rand();
        s = ((v % 126) + 1) * 1024;   /* random size, 1K..126K */
        p = (char *)malloc(s);
        for (j = 0; j + 1 < s; j += 1024) p[j + 1] = j;   /* <<-- poke pages */
        s = malloc_usable_size(p);    /* <<-- added here for this test */
        if (c >= MAXF) {              /* pool full: free a random earlier allocation */
            k = v % c;
            free(tobefree[k]);
            tobefree[k] = tobefree[--c];
        }
        tobefree[c++] = p;
    }
    return 0;
}
# time ./m.alpine
real 7m 19.07s
user 4m 53.61s
sys 2m 25.22s
It took about 440 seconds to finish the whole 100 million iterations (on average 4.4 microseconds per malloc&free; with glibc the same run took 96 seconds, about 1 microsecond on average). The profiling report is as follows:
_init?__libc_start_main (59.374% 202450/340976)
    main (79.104% 160147/202450)
    aligned_alloc?malloc_usable_size (20.098% 40689/202450)
madvise (33.323% 113623/340976)
aligned_alloc?malloc_usable_size (6.534% 22279/340976)
aligned_alloc?malloc_usable_size got picked by the profiler in about (40689+22279)/340976 = 18.4% of samples; should this raise any concern? (Still, madvise has the major impact, at 33.3%.)
(This profiling report differs from the last few because I added code to poke several memory addresses after each malloc.)
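(Side note on why madvise shows up so heavily: as Rich notes further down, the time goes into madvise(MADV_FREE), which lets the kernel reclaim freed pages lazily without unmapping them, but still costs one syscall per freed range. A standalone illustration of the call itself, my own sketch rather than musl source:)

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main() {
    size_t len = 64 * 1024;
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;
    memset(p, 1, len);                /* fault the pages in */
    if (madvise(p, len, MADV_FREE))   /* Linux-specific: mark pages lazily reclaimable */
        perror("madvise");
    munmap(p, len);
    return 0;
}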
Here is the profiler report for glibc running the same code; malloc_usable_size got picked in only about 1% of the total 75568 samples:
_dl_catch_error?? (62.487% 47220/75568)
    __libc_start_main (100.000% 47220/47220)
        main (80.064% 37806/47220)
        cfree (17.274% 8157/47220)
        malloc_usable_size (1.355% 640/47220)
pthread_attr_setschedparam?__libc_malloc (36.795% 27805/75568)
My profiler code is here: https://github.com/zq-david-wang/linux-tools/tree/main/perf/profiler. It takes a PID as its parameter and profiles all the PIDs within the same perf_event cgroup.
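(For anyone unfamiliar with the mechanism: the profiler is built on perf_event_open(2). A minimal standalone sketch of that syscall, just counting CPU cycles for the calling process rather than doing the cgroup-wide sampling the real tool does:)

#include <linux/perf_event.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main() {
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CPU_CYCLES;
    attr.disabled = 1;
    /* no libc wrapper for this syscall; pid=0, cpu=-1: this process, any CPU */
    int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }
    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    for (volatile long i = 0; i < 10000000L; i++) ;   /* workload to measure */
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    long long cycles = 0;
    read(fd, &cycles, sizeof(cycles));
    printf("cycles: %lld\n", cycles);
    close(fd);
    return 0;
}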
At 2022-09-22 11:34:59, "王志强" <00107082@....com> wrote:
Hi Rich,
Thanks for your time.
Totally agreed that in a real-world application it would take far more time to process such a huge bulk of memory, compared with the average 3 microseconds per malloc&free.
At 2022-09-22 01:58:17, "Rich Felker" <dalias@...c.org> wrote:
>
>
>> Your test case, with the completely random size distribution across
>> various large sizes, is likely a worst case. The mean size you're
>> allocating is 128k, which is the threshold for direct mmap/munmap of
>> each allocation, so at least half of the allocations you're making can
>> *never* be reused, and will always be immediately unmapped on free. It
>> might be interesting to change the scaling factor from 1k to 256 bytes
>> so that basically all of the allocation sizes are in the malloc-managed range.
>
>One observation if this change is made: it looks like at least 70% of
>the time is spent performing madvise(MADV_FREE), and that a large
>portion of the rest (just looking at strace) seems to be repeatedly
>mapping and freeing a 17-page (68k) block, probably because this size
>happens to be at the boundary of some threshold where bounce
>protection isn't happening. I think we should look at both of these in
>more detail, since they both suggest opportunities for large
>performance improvements at low cost.
>
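To spell out the change suggested above (my sketch of it, not code from the thread): the only line that moves is the size computation in the test loop, e.g.

    s = ((v%126)+1)*256;   /* 256B..~31.5K, all well below the mmap threshold */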
I have run the profiling several times; the reports indeed show that as the allocation size decreases, performance goes up significantly and madvise takes the major portion of the time, as you suggested. madvise's portion also decreases as the size decreases: by the time the average size reaches 2K, madvise is picked by the profiler less than 2% of the time. (The size line I believe corresponds to each range is sketched after run 6 below.)
1. average 64K (1K~128K) malloc/free
# time ./m.alpine
real 1m 50.12s
user 0m 39.80s
sys 1m 10.17s
madvise (61.945% 52926/85440)
__libc_start_main? (22.158% 18932/85440)
    malloc_usable_size? (82.870% 15689/18932)
        asm_exc_page_fault (2.766% 434/15689)
    main (16.781% 3177/18932)
        asm_exc_page_fault (2.487% 79/3177)
malloc_usable_size? (10.969% 9372/85440)
    asm_exc_page_fault (6.519% 611/9372)
munmap (2.449% 2092/85440)
exit? (1.540% 1316/85440)
2. average 32K (1K~64K) malloc/free
# time ./m.alpine
real 1m 12.89s
user 0m 30.62s
sys 0m 41.91s
madvise (60.835% 34282/56352)
__libc_start_main? (27.410% 15446/56352)
    malloc_usable_size? (78.558% 12134/15446)
    main (20.996% 3243/15446)
malloc_usable_size? (9.354% 5271/56352)
exit? (1.888% 1064/56352)
3. average 8K (1K~16K)
# time ./m.alpine
real 0m 42.35s
user 0m 22.94s
sys 0m 19.27s
madvise (49.338% 16169/32772)
__libc_start_main? (36.592% 11992/32772)
    malloc_usable_size? (79.244% 9503/11992)
    main (20.480% 2456/11992)
malloc_usable_size? (10.921% 3579/32772)
exit? (2.591% 849/32772)
4. average 4K (1K~8K)
# time ./m.debian
real 0m32.477s
user 0m31.829s
sys 0m0.596s
__libc_start_main? (44.474% 9279/20864)
    malloc_usable_size? (81.410% 7554/9279)
    main (17.987% 1669/9279)
madvise (37.720% 7870/20864)
malloc_usable_size? (13.986% 2918/20864)
exit? (3.350% 699/20864)
5. average 2K (128B~4096B) (madvise only about 1.7%)
# time ./m.alpine
real 0m 13.02s
user 0m 12.68s
sys 0m 0.26s
__libc_start_main? (69.538% 6974/10029)
    malloc_usable_size? (80.786% 5634/6974)
    main (18.569% 1295/6974)
malloc_usable_size? (21.538% 2160/10029)
exit? (7.060% 708/10029)
madvise (1.715% 172/10029)
6. average 1K (128B~2048B)
# time ./m.alpine
real 0m 10.75s
user 0m 10.68s
sys 0m 0.01s
__libc_start_main? (72.495% 6012/8293)
    malloc_usable_size? (76.630% 4607/6012)
    main (22.904% 1377/6012)
malloc_usable_size? (18.823% 1561/8293)
exit? (8.610% 714/8293)
David