Message-ID: <20171221233402.GZ1627@brightrain.aerifal.cx>
Date: Thu, 21 Dec 2017 18:34:02 -0500
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: [PATCH 1/8] (V2) a new lock algorithm with lock value and
 CS counts in the same atomic int

On Thu, Dec 21, 2017 at 12:06:55PM +0100, Jens Gustedt wrote:
> Hello Rich,
> 
> On Wed, 20 Dec 2017 16:58:27 -0500, Rich Felker wrote:
> 
> > On Fri, Jun 16, 2017 at 09:11:35AM +0200, Jens Gustedt wrote:
> > > A variant of this new lock algorithm has been presented at SAC'16,
> > > see https://hal.inria.fr/hal-01304108. A full version of that paper
> > > is available at https://hal.inria.fr/hal-01236734.  
> > 
> > Finally getting back to this, and trying to finish it in the current
> > (1.1.19) release cycle...
> 
> great, I appreciate it
> 
> since for me, too, this was half a year ago, I have to wrap my head
> around this and look into the details again.

Sure, no problem.

> > >  void __lock(volatile int *l)
> > >  {
> > > -	if (libc.threads_minus_1)
> > > -		while (a_swap(l, 1)) __wait(l, l+1, 1, 1);
> > > +	if (!libc.threads_minus_1) return;
> > > +        /* fast path: INT_MIN for holding the lock, +1 to count this
> > > +           thread in the critical section. */
> > 
> > There seems to be some mixing of tabs and spaces here and in a few
> > other lines. Tabs should be used for indentation levels, spaces (after
> > any initial indent) to column-align. Generally musl style also has
> > continuation comment lines begin with a * aligned with the * of
> > the opening /*.
> 
> Since I ran into similar comments before and I find it quite
> frustrating and disruptive:

By similar comments do you mean existing comments in the musl source
styled like you did it, or similar email comments before about styling
matters?

> musl style is just not what my editor is tuned to, and I wouldn't
> change that, just for musl.
> 
> Is there a tool (and command line arguments) that you could recommend,
> and that one could just run before submitting any patches?
> 
> In my own projects I usually use astyle, which does a pretty good
> job. Would you be able to release a list of astyle (or indent or
> whatever) arguments that format code in a style accepted for musl?

I'm not familiar with these tools, so I could look into it, but
someone else interested might be able to get to it before me. I'll
see. The way I deal with touching code in lots of different projects
with different styles is having my editor configured to copy exactly
the leading whitespace of the previous line when adding a new line,
and then adding or deleting tabs or spaces as appropriate. But I can
see how that's not agreeable to everyone.

Actually, now that I think about it, I'm pretty sure the official
Linux kernel style is nearly the same as the preferred style in musl,
and should usually produce formatting that's perfectly acceptable.
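
For the concrete snippet above, what I mean is a layout roughly like
this (just a reformatting of your own lines to show the intent: a tab
per indent level, and the continuation line's * aligned under the *
of the opening /*):

void __lock(volatile int *l)
{
	if (!libc.threads_minus_1) return;
	/* fast path: INT_MIN for holding the lock, +1 to count this
	 * thread in the critical section. */
	...
}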

> > As long as you're including comments, which I think makes sense given
> > that the algorithm is somewhat nontrivial, it might be nice to add a
> > quick exposition on the states.
> 
> yes, very good idea
> 
> > As I understand them:
> > 
> > - v==0 unlocked, no waiters
> > - v>0  unlocked, with v waiters contending
> > - v<0  locked, with v-INT_MIN-1 waiters
> 
> sort of, see below
> 
> > One thing I don't recall understanding is why the lock owner is
> > "included" in the waiter count, i.e. why v<0 indicates v-INT_MIN-1
> > waiters rather than v-INT_MIN waiters. Is there some race it's
> > avoiding?
> 
> yes.
> 
> The counter effectively is a congestion counter, not a wait counter;
> that is, it counts all threads that are inside the critical section.
> 
> This POV avoids a race around sending waiters to sleep: now that the
> 'int' of the futex contains all the state, if this were a wait
> count, waiters that want to go to sleep would have to change the
> very state to which the sleep request is atomically bound.

I see. I actually had in mind counting threads that were spinning but
not yet in the futex wait as "waiters", so in that case I think the
only difference between what you're doing and what I had in mind as
an alternative is that the thread which owns the lock still gets
counted even after acquiring it, rather than subtracting itself out
at the same time as it acquires the lock. But I don't think there's
any performance or safety/lock-properties difference either way; it
seems to just be a notational difference.
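
Just to be sure I'm reading the encoding right, this is how I'd
decode the lock word (hypothetical helpers purely for illustration,
not something to add to the patch):

#include <limits.h>

/* v is a snapshot of the lock word */
static int is_locked(int v)
{
	/* the lock bit is INT_MIN, so any negative value means held */
	return v < 0;
}

static int congestion(int v)
{
	/* threads currently inside the critical section: the holder
	 * (if any) plus everyone spinning or sleeping on the lock */
	return v < 0 ? v - INT_MIN : v;
}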

> As a consequence, under high congestion, many futex_wait calls could
> fail because the 'int' changes too often. There is even potential for
> some sort of ping-pong where threads hammer on the wait counter and
> are never able to fall asleep because everybody is shouting their
> "good night" - "good morning" messages in a loop.
> 
> Using the counter as a congestion counter avoids that. The fact that a
> thread on the slow path wants to sleep doesn't change the state, and
> thus the futex_wait calls have better chances to succeed. And at the
> other end the unlock function can operate on that effectively, because
> we can change the lock bit and the counter in one atomic operation.
> 
> If you are interested, in the paper that I mentioned is a proof that
> such a system eventually calms down.
> 
> In summary, with a congestion count n we have at most 3n changes to
> the atomic state in total. With a wait count we have at least as
> many, with no upper bound.

Yes, this is a very nice property. I think I actually read the paper
or something preliminary you'd written up on it previously (when you
first proposed the new lock) but it's been so long that I may be
misremembering.
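
For my own notes, the overall shape I take away from your description
is roughly the following (a sketch only, using musl's existing a_cas,
a_fetch_add and __wake plus your __futexwait; it leaves out the spin
phase and is not meant as the literal patch code):

#include <limits.h>

void lock(volatile int *l)
{
	/* uncongested fast path: set the lock bit (INT_MIN) and count
	 * this thread (+1) in a single CAS */
	if (!a_cas(l, 0, INT_MIN + 1)) return;
	/* slow path: enter the congestion count exactly once; from
	 * here on, going to sleep never touches the lock word */
	int v = a_fetch_add(l, 1) + 1;
	for (;;) {
		if (v < 0) {
			/* locked; sleep as long as the word still holds
			 * the value we last observed */
			__futexwait(l, v, 1);
			v = *l;
		} else {
			/* unlocked, and we are already counted; just
			 * try to set the lock bit on top of the count */
			int old = a_cas(l, v, INT_MIN + v);
			if (old == v) return;
			v = old;
		}
	}
}

void unlock(volatile int *l)
{
	/* clear the lock bit and drop our congestion count in one
	 * atomic operation; wake someone only if others remain */
	if (a_fetch_add(l, -(INT_MIN + 1)) != INT_MIN + 1)
		__wake(l, 1, 1);
}

If that's right, the 3n bound falls out directly: each thread makes
at most one add into the count, one successful CAS to take the lock,
and one combined subtract in unlock.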

> > Overall I think it looks pretty much good to go, aside from minor
> > style things. I'm open to your ideas on __futexwait, maybe just
> > writing it inline here since it's only used in one place; while I
> > don't like duplicating what __wait already does, __wait is mildly big
> > and using your __futexwait version instead will probably offset the
> > full size increase of the new lock (and maybe more) in programs that
> > have no reason other than __lock to pull in __wait.
> 
> I'll look into all of that, but I might need some days to do so.

The more I think about it, the more I'm leaning towards avoiding
musl's existing __wait function here, like you're doing, and maybe
even phasing out the current __wait and __timedwait, since they seem
to have bad worst-case performance properties (unbounded changes to
shared memory under contention). Moving the waiter counting out to
the calling lock function seems like the right way to take things. So
for now it's just a question of whether to keep __futexwait as a
separate function like you did; if we're going to use it more in the
future, the answer is probably yes.
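
In case it's unclear what I mean, the bare minimum such a helper has
to do is just the futex calls themselves, along the lines of what
__wait.c already does, but without touching any waiter count in
shared memory (sketch, not necessarily your exact version):

#include "pthread_impl.h" /* musl-internal: __syscall, SYS_futex,
                             FUTEX_WAIT, FUTEX_PRIVATE */

static inline void futexwait_sketch(volatile void *addr, int val, int priv)
{
	/* sleep while *addr still equals val; the caller is the one
	 * who decides how (and whether) to account for waiters */
	if (priv) priv = FUTEX_PRIVATE;
	if (__syscall(SYS_futex, addr, FUTEX_WAIT|priv, val, 0) == -ENOSYS)
		__syscall(SYS_futex, addr, FUTEX_WAIT, val, 0);
}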

Rich
