Message-ID: <20150522075110.GL17573@brightrain.aerifal.cx>
Date: Fri, 22 May 2015 03:51:10 -0400
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: New optimized normal-type mutex?

On Fri, May 22, 2015 at 09:30:48AM +0200, Jens Gustedt wrote:
> > Are these changes worthwhile and worth having additional custom code
> > for the normal-type mutex? I'm not sure. We should probably do an
> > out-of-tree test implementation to measure differences then consider
> > integrating in musl if it helps.
> 
> I think the gain in maintainability and readability would be very
> interesting, so I would be in favor of applying it even if it doesn't
> bring a performance gain. I would at least expect it not to cause a
> performance loss.  Even though this might be a medium-sized patch, I
> think it is worth a try. (But perhaps you could go for it one arch at
> a time, to have things go more smoothly?)

There is no per-arch code in the musl pthread implementation (only in
the atomics and syscalls and TLS ABI it relies on), and I don't intend
to add any. Changing it would affect all archs. There shouldn't be any
way it would break on some archs but work on others; if there is, it's
a bug in the atomics.

> But I personally expect it to be a win, in particular for mtx_t, where
> it probably covers 99% of the use cases.

I would expect it to be, but sometimes things that look like wins turn
out not to make any difference. Sharing code paths with the
owner-tracking mutex types, as we do now, is probably less code; on the
other hand, getting rid of the conditionals for type!=normal in those
code paths would make them simpler to understand and more streamlined,
so even from a simplicity standpoint it might be nice to have them
separate.
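
For a sense of how small a dedicated normal-type path can be once the
type!=normal conditionals are gone, here is a rough sketch of the
classic three-state futex lock (0 = unlocked, 1 = locked, 2 = locked
with possible waiters) written with GCC atomic builtins. The names and
the exact encoding are only illustrative, not musl's code:

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

static void normal_lock(volatile int *l)
{
	int c = 0;
	/* Fast path: 0 -> 1, uncontended acquire. */
	if (__atomic_compare_exchange_n(l, &c, 1, 0,
	    __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
		return;
	/* Contended: switch to the "waiters" state (2) and sleep until
	 * we manage to take the lock in that state. */
	if (c != 2)
		c = __atomic_exchange_n(l, 2, __ATOMIC_ACQUIRE);
	while (c != 0) {
		syscall(SYS_futex, l, FUTEX_WAIT|FUTEX_PRIVATE_FLAG, 2, 0);
		c = __atomic_exchange_n(l, 2, __ATOMIC_ACQUIRE);
	}
}

static void normal_unlock(volatile int *l)
{
	/* Only issue a wake if the contended state was ever set. */
	if (__atomic_exchange_n(l, 0, __ATOMIC_RELEASE) == 2)
		syscall(SYS_futex, l, FUTEX_WAKE|FUTEX_PRIVATE_FLAG, 1);
}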

> > If it does look helpful to musl, it may make sense to use this code as
> > the implementation for __lock/__unlock (implementation-internal locks)
> > and thereby shrink them all from int[2] to int[1], and then have the
> > pthread mutex code tail-call to these functions for the normal-mutex
> > case.
> 
> Actually, from a code-migration POV I would simply start from this
> end: implement internal locks that use that strategy, and use them
> internally.

Yes, I agree that's a good approach, especially if it's practical for
the pthread code to tail-call, which I think it is.
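
As a rough illustration of that shape (the struct layout and all names
below are hypothetical, not musl's actual definitions), the pthread
entry point would just dispatch the normal case to the internal lock in
tail position:

#include <pthread.h>

/* Hypothetical layout, only to show the dispatch shape. */
struct mutex {
	int type;            /* PTHREAD_MUTEX_NORMAL, _RECURSIVE, ... */
	volatile int lock;   /* single-int lock word */
};

/* Internal lock written to return 0 so the wrapper can tail-call it. */
int lock_normal(volatile int *l);
int lock_owner_tracked(struct mutex *m);

int mutex_lock(struct mutex *m)
{
	/* Normal-type mutexes never reach the owner-tracking code;
	 * since nothing happens after the call, the compiler can emit
	 * it as a tail call. */
	if (m->type == PTHREAD_MUTEX_NORMAL)
		return lock_normal(&m->lock);
	return lock_owner_tracked(m);
}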

> Then, in a second phase, start using this lock in data structures
> that are exposed to the user. There will be problems with dynamic
> linking between executables and a libc that differ in these
> strategies, so we'd have to be careful.

I'm not sure what you mean. All of the code is in libc.so and all of
the affected structures are opaque and only accessed within libc.so.

> > Even if it's not a performance help for musl, the above design may be
> > very nice as a third-party mutex library since it handles
> > self-synchronized destruction, minimizes spurious futex wakes, and
> > fits the whole lock in a single int.
> 
> "third-party" library is already all the internal stuff where we use
> lock features that are not exposed to the user.

Well, the internal locks in musl are mostly uninteresting for
performance, except for malloc. And they're mostly in static storage,
so they don't benefit from self-synchronized-destruction safety; the
exception is the locks in stdio FILEs, which are presently not SSD-safe
and rely on malloc behavior to make the possible read-after-fclose
non-dangerous. Fixing those would be nice, but they're owner-tracked,
so they can't use this.
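
For reference, one possible shape of the single-int design described
above: the sign bit means "locked" and the low bits count registered
contenders, so an uncontended unlock never issues a futex wake. This is
only a hedged sketch of the idea, not the actual code proposed in this
thread:

#include <limits.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

static void ssd_lock(volatile int *l)
{
	/* Fast path: 0 -> locked, with the holder counted. */
	int cur = 0;
	if (__atomic_compare_exchange_n(l, &cur, INT_MIN + 1, 0,
	    __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
		return;
	/* Register as a contender, then loop: wait while locked, and
	 * try to set the lock bit (keeping the count) once released. */
	cur = __atomic_add_fetch(l, 1, __ATOMIC_RELAXED);
	for (;;) {
		if (cur < 0) {
			syscall(SYS_futex, l, FUTEX_WAIT|FUTEX_PRIVATE_FLAG, cur, 0);
			cur = __atomic_load_n(l, __ATOMIC_RELAXED);
			continue;
		}
		if (__atomic_compare_exchange_n(l, &cur, INT_MIN + cur, 0,
		    __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
			return;
	}
}

static void ssd_unlock(volatile int *l)
{
	/* Drop the lock bit and our own count in one atomic op; wake
	 * one waiter only if other contenders remain registered. */
	if (__atomic_add_fetch(l, -(INT_MIN + 1), __ATOMIC_RELEASE))
		syscall(SYS_futex, l, FUTEX_WAKE|FUTEX_PRIVATE_FLAG, 1);
}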

Rich
