Message-ID: <20140827074310.GK12888@brightrain.aerifal.cx>
Date: Wed, 27 Aug 2014 03:43:10 -0400
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: sem_getvalue conformance considerations

On Wed, Aug 27, 2014 at 09:05:41AM +0200, Jens Gustedt wrote:
> On Tuesday, 2014-08-26 at 22:33 -0400, Rich Felker wrote:
> > What if we try to get fancy and subtract waiters from __val[0]?
> > Unfortunately we can't necessarily read __val[0] and waiters
> > (__val[1]) atomically together,
> 
> Doing the correct thing is always fancy :)
> Sure, this depends on the architecture, but where it is possible we
> should just do it; this is the semantically correct value.
> 
> On i386 and its successors, a 64-bit atomic read should always be
> possible, and if I remember correctly the ARM arch that I touched
> once had such a thing, too.

Yes, I'm aware that a 64-bit atomic read may exist on some archs
(note: this does not include i386; an 8-byte atomic read was not
possible until at least the i586 generation, and our "i386" baseline
is really "i486", the first model with cmpxchg, which is mandatory
for working pthread primitives). But since one of musl's big general
principles is providing uniform behavior across archs, I'd rather not
implement something whose behavior is going to differ like that based
on an arch feature.

> > so it's possible that one is outdated
> > by the time we read the other, such that the resulting difference is
> > not the correct formal semaphore value at any time during the
> > sem_getvalue call.
> 
> On archs where an atomic read of these two values together is not
> possible, this is the best approximation that you can get. On these
> archs there is simply no precise moment in time for such a value,
> because the sequence points are not synchronized between the
> different threads. Nobody can ask you to return an exact value for a
> concept that is not well defined.

I'm not entirely convinced there's not a solution. There may be
sufficient information to determine whether or not there are waiters
without a 64-bit atomic read.

Let V be the implementation semaphore value (__val[0]) and W the
waiter count (__val[1]).

After observing a nonzero V, W cannot increase without V first
reaching zero. So if we read V first, then W, the value of W we read
will be less than or equal to the value W had at the time V was read.
This seems sufficient for the semantics I thought were right.
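
Roughly, the ordered read would look like this (untested sketch; the
names are made up and C11 atomics stand in for musl's internal
atomics):

#include <stdatomic.h>

/* "val" and "waiters" stand in for __val[0] and __val[1]. */
static int formal_value(_Atomic int *val, _Atomic int *waiters)
{
        /* Acquire ordering keeps the waiters load from being
         * reordered before the value load. New waiters can only
         * accumulate while the value is zero, so after seeing a
         * nonzero value, the waiter count read here cannot exceed
         * the count that existed when the value was read. */
        int v = atomic_load_explicit(val, memory_order_acquire);
        int w = atomic_load_explicit(waiters, memory_order_relaxed);
        /* POSIX allows reporting waiters as a negative value. */
        return v - w;
}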

However, I'm doubtful of those semantics too. :-)

Even if we know the number of waiters exactly at the time the value is
read, that's not sufficient to assign a formal value to the semaphore,
because these waiters could race to return EINTR or ETIMEDOUT, or act
upon cancellation, before they consume the post. In this case,
sem_getvalue would have reported an observably incorrect value:

Example: initially there are 2 waiters. The posting thread posts 3
times, calls sem_getvalue and sees a value of 1, calls pthread_cancel
on both waiters, then calls sem_getvalue again and sees a value of 3,
despite no additional posts having happened.

The only easy way around this problem is the current behavior: having
sem_getvalue treat waiters as not-having-arrived-yet.
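
In code, that's essentially just clamping (sketch assuming musl's
sem_t layout, renamed so it doesn't clash with the real function):

#include <semaphore.h>

/* A negative __val[0] only flags the presence of waiters, so the
 * reported value is clamped at zero, treating those waiters as not
 * having arrived yet. */
static int getvalue_current(sem_t *sem, int *valp)
{
        int val = sem->__val[0];
        *valp = val < 0 ? 0 : val;
        return 0;
}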

The other solution I see, which would allow sem_getvalue to report
waiters, would be to ensure that waiters always do a final sem_trywait
after observing an error, and ignore the error if the trywait
succeeds. However, doing this with cancellation is not easy; it would
require a longjmp, which would require adding setjmp overhead to each
sem_wait. Of course if __timedwait could return ECANCELED rather than
invoking cancellation handlers, that would make things a lot nicer,
and it's something I've wanted to be able to do for a long time, so
perhaps we can revisit this issue once that's implemented... :)
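
For reference, the recovery pattern would be roughly the following,
with __timedwait_sketch as a hypothetical stand-in for an internal
wait that reports errors (including, one day, ECANCELED) as return
values instead of acting on them; waiter-count bookkeeping is
omitted:

#include <errno.h>
#include <semaphore.h>
#include <time.h>

int __timedwait_sketch(sem_t *sem, const struct timespec *at);

static int timedwait_recovering(sem_t *sem, const struct timespec *at)
{
        while (sem_trywait(sem)) {
                int e = __timedwait_sketch(sem, at);
                if (e) {
                        /* A post may have raced with the error; if a
                         * final trywait consumes it, suppress the
                         * error and report success. */
                        if (!sem_trywait(sem)) return 0;
                        errno = e;
                        return -1;
                }
        }
        return 0;
}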

Rich
