Message-ID: <87in049em2.fsf@oldenburg2.str.redhat.com>
Date: Sat, 08 Dec 2018 17:18:13 +0100
From: Florian Weimer <fweimer@...hat.com>
To: Rich Felker <dalias@...c.org>
Cc: musl@...ts.openwall.com
Subject: Re: aio_cancel segmentation fault for in progress write requests

* Rich Felker:

> On Fri, Dec 07, 2018 at 09:06:18PM +0100, Florian Weimer wrote:
>> * Rich Felker:
>>
>> > I don't think so. I'm concerned that it's a stack overflow, and that
>> > somehow the kernel folks have managed to break the MINSIGSTKSZ ABI.
>>
>> Probably:
>>
>> <https://sourceware.org/bugzilla/show_bug.cgi?id=20305>
>> <https://sourceware.org/bugzilla/show_bug.cgi?id=22636>
>>
>> It's a nasty CPU backwards compatibility problem.  Some of the
>> suggestions I made to work around this are simply wrong; don't take
>> them too seriously.
>>
>> Nowadays, the kernel has a way to disable the %zmm registers, but it
>> unfortunately does not reduce the save area size.
>
> How large is the saved context with the %zmm junk? I measured just
> ~768 bytes on normal x86_64 without it, and since 2048 is rounded up
> to a whole page (4096), overflow should not happen until the signal
> context is something like 3.5k (allowing ~512 bytes for TCB (~128) and
> 2 simple call frames).

I wrote a test to do some measurements:

  <https://sourceware.org/ml/libc-alpha/2018-12/msg00271.html>

The signal handler context is quite large on x86-64 with AVX-512F,
indeed around 3.5 KiB.  It is even larger on ppc64 and ppc64el
(~4.5 KiB), which I find somewhat surprising.

The cancellation test also includes stack usage from the libgcc
unwinder.  Its stack usage likely differs between versions, so I should
have included that in the reported results.

Thanks,
Florian
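For illustration, below is a minimal C sketch of one way to estimate the
signal frame size on the machine at hand.  It is not the test linked
above; the approach (fill a sigaltstack buffer with a sentinel byte,
deliver a signal onto it, and count how many bytes were clobbered) and
names such as measure_signal_frame() are assumptions made for this
example, not part of the original discussion.

/* Sketch: estimate how much alternate-stack space one signal delivery
   consumes.  Hypothetical example code; measure_signal_frame() is an
   invented name.  The result includes the handler's own (tiny) frame
   in addition to the kernel-saved context. */
#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define STACK_SIZE (64 * 1024)
static unsigned char altstack[STACK_SIZE];

static void handler(int sig)
{
        /* Empty: we only care about the frame the kernel pushed. */
        (void) sig;
}

static size_t measure_signal_frame(void)
{
        /* Fill the alternate stack with a sentinel pattern. */
        memset(altstack, 0xa5, sizeof altstack);

        stack_t ss = { .ss_sp = altstack, .ss_size = sizeof altstack };
        if (sigaltstack(&ss, NULL) != 0)
                abort();

        struct sigaction sa = { .sa_handler = handler,
                                .sa_flags = SA_ONSTACK };
        sigemptyset(&sa.sa_mask);
        if (sigaction(SIGUSR1, &sa, NULL) != 0)
                abort();

        raise(SIGUSR1);

        /* The stack grows down, so count untouched sentinel bytes from
           the bottom; everything above that was used by the signal
           frame. */
        size_t untouched = 0;
        while (untouched < sizeof altstack && altstack[untouched] == 0xa5)
                untouched++;
        return sizeof altstack - untouched;
}

int main(void)
{
        printf("approx. signal frame: %zu bytes\n", measure_signal_frame());
        return 0;
}

Running such a probe on plain x86-64, on an AVX-512F machine, and on
ppc64/ppc64el would be expected to show roughly the differences quoted
above (~768 bytes vs. ~3.5 KiB vs. ~4.5 KiB), though the exact numbers
depend on the kernel and CPU features.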