Message-ID: <20140826202643.GI12888@brightrain.aerifal.cx>
Date: Tue, 26 Aug 2014 16:26:43 -0400
From: Rich Felker <dalias@...c.org>
To: Jens Gustedt <jens.gustedt@...ia.fr>
Cc: musl@...ts.openwall.com
Subject: Re: Multi-threaded performance progress

On Tue, Aug 26, 2014 at 09:34:13PM +0200, Jens Gustedt wrote:
> Am Dienstag, den 26.08.2014, 15:05 -0400 schrieb Rich Felker:
> > On Tue, Aug 26, 2014 at 08:30:39PM +0200, Jens Gustedt wrote:
> > > Or do you mean that I should use an atomic store at the other end?
> >
> > Yes. With an atomic store at the other end, I think it could be
> > correct, but I'd need to review it further to be sure.
> 
> ok, it shouldn't be difficult to use atomic ops, then.
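
To be concrete, by "an atomic store at the other end" I mean something
roughly like the following (untested sketch, made-up names, C11
atomics standing in for musl's internal atomic primitives):

#include <stdatomic.h>

/* Illustrative only; not musl's real waiter structure. */
struct waiter {
	_Atomic int state;	/* e.g. WAITING / SIGNALED / LEAVING */
	/* ... */
};

/* Reader side: inspect the state without holding any lock. */
static int waiter_state(struct waiter *w)
{
	return atomic_load_explicit(&w->state, memory_order_acquire);
}

/* "The other end": publish the new state with a release store so the
 * lockless reader also observes everything written before it. */
static void waiter_set_state(struct waiter *w, int s)
{
	atomic_store_explicit(&w->state, s, memory_order_release);
}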

Based on what you've said below, though, I think there's still a big
problem..

> > > > Note that the performance of the code in which you're trying to avoid
> > > > the lock does not matter in the slightest except when a race happens
> > > > between a thread acting on cancellation or timeout and a signaler
> > > > (since that's the only time it runs). I expect this is extremely rare,
> > > > so unless we determine otherwise, I'd rather not add complexity here.
> > > 
> > > If we have a broadcaster working a long list of waiters, this might
> > > still happen sufficiently often. And the "complexity" is hidden in the
> > > execution pattern of the current version, where control and handling
> > > of the list alternates between different threads, potentially as many
> > > times as there are waiters in the list.
> > 
> > Does your code eliminate that all the time? If so it's more
> > interesting. I'll look at it more.
> 
> Yes, it will be the signaling or broadcasting thread that works on
> the integrity of the list while it holds the lock. At the end, those
> that it detected to be leaving will be isolated list items, those
> that are to be signaled form one consecutive list detached from the
> cv, and the ones that remain (in the signaling case) form a valid
> cv-with-list.
> 
> The only small gap that remains (and that annoys me) is the leaving
> thread that sneaks in
> 
>  - marks itself as leaving before the end of the CS
>  - only asks for _c_lock *after* the signaling thread has left its CS
> 
> This is all our problem of late access to the cv variable revisited,
> but here it is condensed in a very narrow time frame. Both threads
> must be active for this to happen, so my hope is that while both are
> spinning for some time (the waiter on the c_lock, the signaler on
> ref), neither of them will "ever" be forced into a futex wait.
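
If I follow, the partitioning you describe looks roughly like the
following (illustrative sketch only, with invented names and fields
rather than the actual musl structures, and assuming it runs with the
internal lock held):

#include <stdatomic.h>

/* Invented for illustration; not musl's real data structures. */
enum { WAITING, SIGNALED, LEAVING };

struct waiter {
	struct waiter *next;
	_Atomic int state;
	/* futex word, etc. */
};

struct cv {
	int lock;		/* stands in for the internal _c_lock */
	struct waiter *head;	/* waiters still attached to the cv */
};

/* Runs in the signaling/broadcasting thread with cv->lock held.
 * Unlinks up to n waiters to be woken and returns them as a single
 * chain; waiters already marked LEAVING are merely unlinked (isolated
 * items that clean up after themselves); whatever remains on cv->head
 * is still a valid cv-with-list. */
static struct waiter *detach_for_wake(struct cv *cv, int n)
{
	struct waiter *wake = 0, **waketail = &wake;
	struct waiter **p = &cv->head;
	while (*p && n) {
		struct waiter *w = *p;
		*p = w->next;		/* unlink from the cv either way */
		w->next = 0;
		if (atomic_load_explicit(&w->state,
		    memory_order_acquire) == LEAVING)
			continue;	/* isolated item; the leaver owns it */
		*waketail = w;		/* append to the wake chain */
		waketail = &w->next;
		n--;
	}
	return wake;
}

The gap you describe is then the window between a waiter storing
LEAVING and its later attempt to take the cv lock, after the signaler
has already dropped it.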

That's a bug that needs to be fixed to go forward with this, since
it's essentially a use-after-free. Now that you mention it, avoiding
use-after-free was one of my main motivations for having such waiters
synchronize with the signaling thread. That should have been
documented in a comment somewhere, but the point seems to have slipped
my mind sometime between the design phase and writing the code and
comments.

Do you see any solution whereby a waiter that's waking up can know
reliably, without accessing the cv, whether a signaling thread is
there to take responsibility for removing it from the list? I'm not
seeing any solution to that problem.

I'm also still skeptical that there's a problem to be solved here; for
it to matter, the incidence of such races needs to be pretty high, I
think. Perhaps, if you think it's going to matter, you could work on a
test case that shows performance problems under load with lots of
timedwait expirations (or cancellations, but I think worrying about
cancellation performance is somewhat silly to begin with). Or, if you
don't have time to spend on side projects like test cases, maybe
someone else could test it?
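
In case it's useful as a starting point, the kind of test I have in
mind is roughly the following (untested sketch, numbers arbitrary,
build with -pthread): many threads doing pthread_cond_timedwait with
very short timeouts while the main thread broadcasts in a loop, so
that expirations keep racing against wakeups.

#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NTHREADS 64
#define ITERS 10000

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t c = PTHREAD_COND_INITIALIZER;
static int nfinished;

static void *waiter(void *arg)
{
	(void)arg;
	for (int i = 0; i < ITERS; i++) {
		struct timespec ts;
		clock_gettime(CLOCK_REALTIME, &ts);
		/* ~100us deadline so a large share of the waits expire
		 * and race against the broadcaster */
		ts.tv_nsec += 100000;
		if (ts.tv_nsec >= 1000000000) {
			ts.tv_sec++;
			ts.tv_nsec -= 1000000000;
		}
		pthread_mutex_lock(&m);
		pthread_cond_timedwait(&c, &m, &ts);
		pthread_mutex_unlock(&m);
	}
	pthread_mutex_lock(&m);
	nfinished++;
	pthread_mutex_unlock(&m);
	return 0;
}

int main(void)
{
	pthread_t t[NTHREADS];
	struct timespec t0, t1;
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < NTHREADS; i++)
		pthread_create(t+i, 0, waiter, 0);
	/* main plays the broadcaster: hammer the cv until all waiters
	 * are done, so wakeups keep colliding with timeouts */
	for (;;) {
		pthread_mutex_lock(&m);
		int done = nfinished;
		pthread_mutex_unlock(&m);
		if (done == NTHREADS) break;
		pthread_cond_broadcast(&c);
	}
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(t[i], 0);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("%.3f s\n", t1.tv_sec - t0.tv_sec
		+ (t1.tv_nsec - t0.tv_nsec)/1e9);
	return 0;
}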

Rich
