Message-ID: <20210309150320.GU32655@brightrain.aerifal.cx>
Date: Tue, 9 Mar 2021 10:03:22 -0500
From: Rich Felker <dalias@...c.org>
To: Alexander Monakov <amonakov@...ras.ru>
Cc: musl@...ts.openwall.com, Érico Nogueira <ericonr@...root.org>
Subject: Re: [PATCH v2] add qsort_r

On Tue, Mar 09, 2021 at 05:13:39PM +0300, Alexander Monakov wrote:
> > On Tue, Mar 09, 2021 at 12:11:37PM +0300, Alexander Monakov wrote:
> > > On Tue, 9 Mar 2021, Érico Nogueira wrote:
> > > >
> > > > since most discussion around the addition of this function has centered
> > > > around the possible code duplication it requires, or that qsort would
> > > > become much slower if implemented as a wrapper around qsort_r
> > >
> > > How much is "much slower"? Did anyone provide figures to support this claim?
> > > The extra cost that a wrapper brings is either one indirect jump instruction,
> > > or one trivially predictable conditional branch per comparator invocation.
> >
> > Quite a bit, I'd expect. Each call to cmp would involve an extra level
> > of call wrapper. With full IPA/inlining it could be optimized out, but
> > only by making a non-_r copy of all the qsort code in the process at
> > optimization time.
> >
> > > The constant factor in musl's qsort is quite high; I'd be surprised if the
> > > extra overhead from one additional branch were even possible to measure.
> >
> > I don't think it's just a branch. It's a call layer: qsort_r internals
> > with cmp=wrapper_cmp, ctx=real_cmp -> wrapper_cmp(x, y, real_cmp) ->
> > real_cmp(x, y). But I'm not opposed to looking at some numbers if you
> > think it might not matter. Maybe because it's a tail call it does
> > collapse to essentially just a branch in terms of cost.
>
> First of all, it's not necessarily a "call layer".
> You could change the cmp call site such that a NULL comparator implies that
> the non-_r version was called and the original comparator address is in ctx:
>
> static inline int call_cmp(void *v1, void *v2, void *ctx, cmpfun cmp)
> {
>     if (cmp)
>         return cmp(v1, v2, ctx);
>     return ((cmpfun)ctx)(v1, v2);
> }
>
> This is just a conditional branch at the call site after trivial inlining.

This works, but it's not what I would call writing qsort as a wrapper
around qsort_r, because it depends on qsort_r having this additional
libc-internal contract to treat a null cmp specially, and it might be
undesirable because it then does something rather awful if the
application calls qsort_r with a null cmp pointer (rather than just
crashing with PC=0).

> Second, if you make a "conventional" wrapper, then on popular architectures
> it is a single instruction (the powerpc64 ABI demonstrates its insanity here):
>
> static int wrapper_cmp(void *v1, void *v2, void *ctx)
> {
>     return ((cmpfun)ctx)(v1, v2);
> }
>
> Some examples:
>
> amd64:   jmp *%rdx
> i386:    jmp *12(%esp)
> arm:     bx r2
> aarch64: br x2
>
> How is this not obvious?

Now that you mention it, it's obvious that the compiler should be able
to do this. gcc -Os alone does not, which looks like yet another reason
to nuke -Os, but I think in musl it would do the right thing already.
It turns out the problem is that gcc emits spurious frame-pointer setup
and teardown without -fomit-frame-pointer.

For some reason, though, it's gigantic on powerpc64. It fails to do a
tail call at all...

Rich