Message-ID: <20181108020445.GZ5150@brightrain.aerifal.cx>
Date: Wed, 7 Nov 2018 21:04:45 -0500
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: printf family handling of INT_MAX +1 tested on aarch64

On Wed, Nov 07, 2018 at 02:54:02PM -0600, CM Graff wrote:
> RIch,
> It just produces a segfault on debian aarch64 in my test case. Whereas
> INTMAX + 2 does not. So I thought it worth reporting.
>
> graff@...b-debian-arm:~/hlibc-test/tests-emperical/musl$ ../usr/bin/musl-gcc ../printf_overflow.c
> graff@...b-debian-arm:~/hlibc-test/tests-emperical/musl$ ../usr/bin/musl-gcc -static ../printf_overflow.c
> graff@...b-debian-arm:~/hlibc-test/tests-emperical/musl$ ./a.out > logfile
> Segmentation fault
> graff@...b-debian-arm:~/hlibc-test/tests-emperical/musl$ uname -a
> Linux hlib-debian-arm 4.9.0-8-arm64 #1 SMP Debian 4.9.110-3+deb9u6 (2018-10-08) aarch64 GNU/Linux
> graff@...b-debian-arm:~/hlibc-test/tests-emperical/musl$
>
> I can supply access to the 96 core 124 GB RAM aarch64 debian test box
> if it would help reproduce the segfault. Just email me a public key if
> you want access.

The failure has nothing to do with printf. You're calling malloc(i) then writing to s[i], which is one past the end of the allocated buffer. I failed to notice this because you're only writing i-1 A's to the buffer, and there already happens to be a nul byte at s[i-1] to terminate them.

Actually the crash has nothing to do with aarch64 vs x86_64, but rather with static vs dynamic linking. With dynamic linking, full malloc is used, and there happens to be padding space at the end of the allocation because there was a header at the beginning and the whole thing has to be rounded up to whole pages. But with static linking, simple_malloc (a bump allocator) is used, and there are exactly i bytes in the allocation.

Fix the s[i]=0 to be s[i-1]=0 instead and the test works as expected.
And please, when reporting crashes like this, at least try to identify where the crash is occurring (e.g. with gdb or even just some trivial printf debugging).

Rich