Message-ID: <20231101130033.GR4163@brightrain.aerifal.cx>
Date: Wed, 1 Nov 2023 09:00:33 -0400
From: Rich Felker <dalias@...c.org>
To: Markus Wichmann <nullplan@....net>
Cc: musl@...ts.openwall.com
Subject: Re: synccall patches

On Tue, Oct 31, 2023 at 05:21:01PM +0100, Markus Wichmann wrote:
> Hi all,
> 
> fittingly for Reformation Day, I chose to actually send in some patches
> today, rather than provide comments. And fittingly for Halloween, they
> concern a scary part of musl, namely __synccall().
> 
> First I noticed that the initial return value set in __setxid() is not a
> valid return value for the functions that call it. Those are only
> allowed to return 0 or -1, not 1. That is easily fixed.

That change completely breaks the logic. See the first line of
do_setxid: with the new initial value, it will always return failure
without ever attempting the operation. That check exists to prevent
any other threads from trying the operation once the first one has
failed.

On the other hand, the thing you're worried about (the original value
of c.ret being passed to __syscall_ret) can't happen. If c.ret is
still the initial positive value on entry to do_setxid, the syscall is
made and its return value is stored into c.ret.
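
For readers without the tree open, the shape of the logic is roughly
the following; this is a paraphrased sketch of the pattern under
discussion, not the verbatim source, and it omits the additional
handling for the case where some threads have already succeeded when
a later one fails:

    struct ctx { int nr, id, eid, sid, ret; };

    static void do_setxid(void *p)
    {
        struct ctx *c = p;
        /* First line: once any thread has recorded a failure
         * (negative ret), later threads return immediately without
         * attempting the operation at all. */
        if (c->ret < 0) return;
        /* Otherwise ret is still the positive sentinel set below, so
         * the syscall is made and its result (0 or -errno) replaces
         * the sentinel. */
        c->ret = __syscall(c->nr, c->id, c->eid, c->sid);
    }

    int __setxid(int nr, int id, int eid, int sid)
    {
        /* ret starts as a positive sentinel meaning "no thread has
         * run the callback yet"; it is overwritten as soon as
         * do_setxid runs once, so only 0 or a negative errno value
         * ever reaches __syscall_ret. */
        struct ctx c = { .nr = nr, .id = id, .eid = eid,
                         .sid = sid, .ret = 1 };
        __synccall(do_setxid, &c);
        return __syscall_ret(c.ret);
    }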

> Then I noticed that __synccall() has a logic bug explained in the commit
> message. It wouldn't be a bug if semaphores had any kind of preference
> for the thread that has waited longest or something, but musl does not
> implement something like this. And the kernel explicitly documents that
> FUTEX_WAKE will wake a random thread.

Indeed there is no such contract, and there really cannot be. Aside
from the possibility of interruption and restart, the event of
starting to wait on a semaphore is not observable. A thread that
hasn't been scheduled yet but is about to execute sem_wait is
indistinguishable (in the abstract machine) from a thread that has
already started waiting.
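
To make this concrete with plain POSIX semaphores (a toy example,
nothing musl-specific): from the posting side there is no way to tell
whether a post woke a thread that was already blocked or was merely
banked for one that calls sem_wait later, so a policy like "wake the
longest waiter" is not even expressible:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t s;

    static void *waiter(void *arg)
    {
        /* Delay so the post below happens before this thread ever
         * blocks; to the poster it is indistinguishable from a
         * thread that was already waiting. */
        sleep(1);
        sem_wait(&s);
        puts("waiter proceeded");
        return 0;
    }

    int main(void)
    {
        pthread_t t;
        sem_init(&s, 0, 0);
        pthread_create(&t, 0, waiter, 0);
        /* This post does not target any particular thread; it just
         * increments the count, and whichever thread performs
         * sem_wait next (now or later) consumes it. */
        sem_post(&s);
        pthread_join(&t, 0);
        return 0;
    }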

> At first I thought the best solution would be to forego the serialized
> execution of the callback; just release all threads in line 97 (and then
> fix the callbacks to cope with parallel execution). But BSD semaphores
> have no way to do that, and a mass-release loop like the one on line 110
> would have the same issue as given here. It would only be less likely to
> happen, with the likelihood rising with the number of threads. So a new
> semaphore it is.

Again, there's fundamentally no way to do that with a semaphore for
the above reason -- you can't know that all the threads are waiting
already, even if there were such an interface to perform the +=
operation atomically on the semaphore value.
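
As an illustration of what a dedicated semaphore buys (a generic
sketch with made-up names, not the actual __synccall code, and with
initialization and error handling omitted): if the threads waiting
for their turn to run the callback and the threads that have finished
and are waiting to be released share one semaphore, a post meant to
start the next callback can be consumed by a thread that is only
waiting to exit, because the wake goes to an arbitrary waiter. Giving
each phase its own semaphore keeps the two populations of waiters
apart:

    #include <semaphore.h>

    /* All three semaphores start at 0. */
    static sem_t run_sem;   /* controller -> one target: run the callback */
    static sem_t done_sem;  /* target -> controller: callback finished */
    static sem_t exit_sem;  /* controller -> targets: you may return now */

    /* Executed in each captured thread. */
    static void target(void (*cb)(void *), void *ctx)
    {
        sem_wait(&run_sem);    /* wait for this thread's turn */
        cb(ctx);
        sem_post(&done_sem);   /* hand control back */
        /* If this wait were on run_sem instead, it could consume the
         * post intended to start the next thread's callback. */
        sem_wait(&exit_sem);
    }

    /* Executed in the controlling thread once n targets are captured. */
    static void controller(int n, void (*cb)(void *), void *ctx)
    {
        for (int i = 0; i < n; i++) {
            sem_post(&run_sem);   /* release exactly one target */
            sem_wait(&done_sem);  /* wait until it has finished */
        }
        cb(ctx);                  /* run in this thread as well */
        for (int i = 0; i < n; i++)
            sem_post(&exit_sem);  /* now release everyone to return */
    }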

I think your assessment of the problem is correct, and I think the fix
works, but I need to look at it in a little more detail. Review from
others familiar with this code would be very welcome too.

Rich
