Message-ID: <1432279848.11462.45.camel@inria.fr>
Date: Fri, 22 May 2015 09:30:48 +0200
From: Jens Gustedt <jens.gustedt@...ia.fr>
To: musl@...ts.openwall.com
Subject: Re: New optimized normal-type mutex?
Hello,
On Thursday, 21.05.2015 at 19:44 -0400, Rich Felker wrote:
> I realized (thanks to conversations with wm4) that it's possible to
> optimize normal-type mutexes a lot to eliminate spurious futex wakes
> and reduce the number of atomic ops/barriers under contention.
>
> The concept is to store the full mutex state in a single int:
>
> bits 0-29: waiter count
> bit 30: waiter flag
> bit 31: lock bit
>
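Spelled out as constants, just for concreteness (the names are mine,
nothing musl actually defines), that layout is:

#include <limits.h>

#define NLOCK_LOCK_BIT    INT_MIN      /* bit 31: lock held                */
#define NLOCK_WAITER_FLAG 0x40000000   /* bit 30: unlock must issue a wake */
#define NLOCK_CNT_MASK    0x3fffffff   /* bits 0-29: waiter count          */

so any value < 0 means "locked", and the a_fetch_and(p, 0x3fffffff) in the
unlock just below clears the lock bit and the waiter flag in one atomic op.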
> Then unlock is:
>
> old = a_fetch_and(p, 0x3fffffff); // new op
> if (old & 0x40000000) wake(p,1);
>
> Trylock is:
>
> old = *p;
> if (old < 0) {
> a_barrier();
> old = *p;
> }
> if (old >= 0 && a_cas(p, old, old|INT_MIN)==old) return 0;
>
> Lock is:
>
> while (spins--) {
> if (!trylock()) return 0;
> a_spin();
> }
> while ((old = *p) < 0) {
> new = (old+1) | 0x40000000;
> if (a_cas(p, old, new)==old) break;
> }
> for(;;) {
> while ((old = *p) < 0) {
> futex_wait(p, old);
> }
> new = old-1;
> if (new) new |= 0x40000000; // set waiters flag if any other waiters
> new |= INT_MIN;
> if (a_cas(p, old, new)==old) return 0;
> }
>
> The waiter flag marks that unlocking needs to send a wake. It is set
> whenever a new waiter arrives, and when a waiter that consumed a wake
> observes that there are more waiters left to be woken. The flag is
> notably clear when the same thread which just released the mutex
> happens to obtain it again before an existing waiter does (when
> hammering the mutex); this is where we should see huge savings, since
> the subsequent unlock(s) will not produce additional futex wakes which
> (1) incur syscall overhead, and (2) cause additional waiters which
> cannot yet get the mutex to wake up, try to lock it, and go back to
> waiting again. Further:
>
> The code path for unlock has exactly one atomic.
>
> The code path for trylock has at least one barrier and at most (only
> on success) one atomic, but may have a spurious barrier if a stale
> value was initially read and the operation subsequently succeeds.
>
> The code path for lock on contention (trylock failed and thus produced
> no atomic changes, only barriers/spins) contains one atomic cas to
> set the waiter state (versus 2 in the current musl implementation),
> plus whatever the kernel does in futex, and one more atomic cas to
> finally take the lock (versus 2 in current musl).
>
> Are these changes worthwhile and worth having additional custom code
> for the normal-type mutex? I'm not sure. We should probably do an
> out-of-tree test implementation to measure the differences, and then
> consider integrating it in musl if it helps.
I think the gain in maintainability and readability would be very
interesting, so I would be in favor of applying it even if it doesn't
bring a performance gain. I would at least expect it not to cause a
performance loss. Even though this might be a medium-sized patch, I
think it is worth a try. (But perhaps you could go for it one arch at
a time, to have things go more smoothly?)
But I personally expect it to be a win, in particular for mtx_t, where
the normal type probably covers 99% of the use cases.
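As a starting point for the out-of-tree test you mention, here is
roughly how I would transcribe the scheme above into plain C11 atomics
plus raw futex syscalls, i.e. without musl's internal
a_cas()/a_fetch_and()/a_barrier(). All the nlock_* names and the spin
count are made up, the waiter-registration loop only moves on to the
sleeping loop once it has really bumped the count (the pseudocode above
falls through as soon as the lock looks free, which would leave the
count unbalanced), and none of it has been audited for the subtler
races, so take it purely as a sketch for measurements:

/* nlock.c -- experimental single-int normal-type lock following the
 * scheme quoted above, for benchmarking only. */
#define _GNU_SOURCE
#include <errno.h>
#include <limits.h>
#include <stdatomic.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

#define NLOCK_LOCK_BIT    INT_MIN      /* bit 31: lock held                */
#define NLOCK_WAITER_FLAG 0x40000000   /* bit 30: unlock must issue a wake */
#define NLOCK_CNT_MASK    0x3fffffff   /* bits 0-29: waiter count          */

/* assumes atomic_int has the layout of a plain int */
static void nlock_futex_wait(atomic_int *p, int val)
{
    syscall(SYS_futex, (int *)p, FUTEX_WAIT_PRIVATE, val, 0, 0, 0);
}

static void nlock_futex_wake(atomic_int *p)
{
    syscall(SYS_futex, (int *)p, FUTEX_WAKE_PRIVATE, 1, 0, 0, 0);
}

/* 0 on success, EBUSY if the lock is held */
static int nlock_trylock(atomic_int *p)
{
    int old = atomic_load_explicit(p, memory_order_relaxed);
    if (old < 0) {
        /* re-read after a full barrier to catch a very recent unlock
         * without paying for a CAS */
        atomic_thread_fence(memory_order_seq_cst);
        old = atomic_load_explicit(p, memory_order_relaxed);
    }
    if (old >= 0 &&
        atomic_compare_exchange_strong(p, &old, old | NLOCK_LOCK_BIT))
        return 0;
    return EBUSY;
}

static void nlock_lock(atomic_int *p)
{
    int old, new, spins;

    /* short spin phase: try to get the lock without touching the futex */
    for (spins = 100; spins; spins--)
        if (!nlock_trylock(p)) return;

    /* become a waiter: bump the count and set the waiter flag, but only
     * while the lock is actually held, so the decrement further down is
     * always balanced by this increment */
    for (;;) {
        old = atomic_load_explicit(p, memory_order_relaxed);
        if (old >= 0) {
            /* lock is momentarily free: take it directly */
            if (atomic_compare_exchange_strong(p, &old,
                                               old | NLOCK_LOCK_BIT))
                return;
            continue;
        }
        new = (old + 1) | NLOCK_WAITER_FLAG;
        if (atomic_compare_exchange_strong(p, &old, new))
            break;
    }

    /* sleep until the lock is free, then take it, drop our count, and
     * keep the waiter flag set if anyone else is still waiting */
    for (;;) {
        while ((old = atomic_load_explicit(p, memory_order_relaxed)) < 0)
            nlock_futex_wait(p, old);
        new = old - 1;
        if (new) new |= NLOCK_WAITER_FLAG;
        new |= NLOCK_LOCK_BIT;
        if (atomic_compare_exchange_strong(p, &old, new))
            return;
    }
}

static void nlock_unlock(atomic_int *p)
{
    /* one atomic op: clear the lock bit and the waiter flag together,
     * then wake a single waiter only if the flag was set */
    int old = atomic_fetch_and(p, NLOCK_CNT_MASK);
    if (old & NLOCK_WAITER_FLAG) nlock_futex_wake(p);
}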
> If it does look helpful to musl, it may make sense to use this code as
> the implementation for __lock/__unlock (implementation-internal locks)
> and thereby shrink them all from int[2] to int[1], and then have the
> pthread mutex code tail-call to these functions for the normal-mutex
> case.
Actually, from a code-migration POV, I would simply start from this
end: implement internal locks that use that strategy, and use them
internally.
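Just to illustrate what I mean, and only on top of the nlock_* sketch
above (none of these names are musl's real internal API), such an
internal lock would then literally be a single int:

/* one int instead of today's int[2]; static zero-initialization is the
 * unlocked state */
static atomic_int some_internal_lock;

static void touch_internal_state(void)
{
    nlock_lock(&some_internal_lock);
    /* ... operate on whatever some_internal_lock protects ... */
    nlock_unlock(&some_internal_lock);
}

The normal-type pthread mutex code could then tail-call the same two
functions, as suggested in the quoted paragraph above.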
Then, in a second phase, start using this lock in data structures that
are exposed to the user. There will be problems with dynamic linking
between executables and a libc that differ on these strategies, so we'd
have to be careful.
> Even if it's not a performance help for musl, the above design may be
> very nice as a third-party mutex library since it handles
> self-synchronized destruction, minimizes spurious futex wakes, and
> fits the whole lock in a single int.
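To make the self-synchronized-destruction point concrete, the usage
pattern it is meant to allow is something like the following (obj and
obj_put are made up, and the final free() is only safe because of the
self-synchronized-destruction property claimed for the scheme above):

#include <stdatomic.h>
#include <stdlib.h>

struct obj {
    atomic_int lock;   /* the single-int lock from the sketch above */
    int refcnt;
};

static void obj_put(struct obj *o)
{
    nlock_lock(&o->lock);
    int last = --o->refcnt == 0;
    nlock_unlock(&o->lock);
    /* we held the last reference, so no other thread will touch o again;
     * being able to free right here, even though another thread may only
     * just have unlocked o->lock, is what "self-synchronized destruction"
     * means */
    if (last) free(o);
}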
"third-party" library is already all the internal stuff where we use
lock features that are not exposed to the user.
Jens
--
:: INRIA Nancy Grand Est ::: Camus ::::::: ICube/ICPS :::
:: ::::::::::::::: office Strasbourg : +33 368854536 ::
:: :::::::::::::::::::::: gsm France : +33 651400183 ::
:: ::::::::::::::: gsm international : +49 15737185122 ::
:: http://icube-icps.unistra.fr/index.php/Jens_Gustedt ::