Message-ID: <20150730134649.GC16376@brightrain.aerifal.cx>
Date: Thu, 30 Jul 2015 09:46:49 -0400
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: New optimized normal-type mutex?

On Thu, Jul 30, 2015 at 02:37:13PM +0300, Alexander Monakov wrote:
> On Thu, 30 Jul 2015, Jens Gustedt wrote:
> > On Thursday, 2015-07-30 at 12:36 +0300, Alexander Monakov wrote:
> > > That sounds like your testcase simulates a load where you'd be better off with
> > > a spinlock in the first place, no?
> > 
> > Hm, this is not a "testcase" in the sense that this is the real code
> > that I'd like to use for the generic lock-full atomics. My test
> > just exercises this lock-full atomic path with a lot of threads that
> > all use the same head of a "lock-free" FIFO implementation. There
> > the inner part of the critical section is just a memcpy of some
> > bytes; for reasonable uses of atomics that should be about 16 to 32
> > bytes.
> > 
> > So this is really a use case that I consider important, and that I
> > would like to see implemented with similar performance.
> 
> I acknowledge that that seems like an important case, but you have not
> addressed my main point.  With so little work in the critical section, it does
> not make sense to me that you would use something like a normal-type futex-y
> mutex.  Even a call/return to grab it gives you some overhead.  I'd expect you
> would use a fully inlined spinlock acquisition/release around the memory copy.
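
For concreteness, the kind of fully inlined spinlock around a small copy
suggested above might look roughly like the following. This is a sketch
only, not code from the thread; fifo_lock, fifo_copy and the use of a
C11 atomic_flag are illustrative choices.

    #include <stdatomic.h>
    #include <string.h>

    static atomic_flag fifo_lock = ATOMIC_FLAG_INIT;

    /* Copy a small payload (on the order of 16-32 bytes) under a bare,
     * fully inlined spinlock: no call/return, no futex, just spinning. */
    static inline void fifo_copy(void *dst, const void *src, size_t len)
    {
            while (atomic_flag_test_and_set_explicit(&fifo_lock,
                                                     memory_order_acquire))
                    ;       /* busy-wait until the holder clears the flag */
            memcpy(dst, src, len);          /* the entire critical section */
            atomic_flag_clear_explicit(&fifo_lock, memory_order_release);
    }

Note that the only blocking mechanism here is the busy-wait itself,
which is exactly what the reply below objects to.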

No, spinlocks are completely unusable in a POSIX libc that implements
realtime thread priorities. They will deadlock whenever a lower-priority
thread holding the lock is preempted by a higher-priority thread that
wants the same lock: the higher-priority thread spins forever waiting
for a lock that can only be released by the thread it is keeping off
the CPU.

Rich
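
For comparison, the normal-type mutex under discussion avoids that
failure mode because its contended path sleeps in the kernel instead of
spinning. Below is a minimal sketch of such a futex-backed lock, not
musl's actual implementation; lk, lock and unlock are illustrative
names, and error handling is omitted.

    #include <stdatomic.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    static atomic_int lk;   /* 0 = unlocked, 1 = locked, 2 = locked w/ waiters */

    static void lock(void)
    {
            int expect = 0;
            if (atomic_compare_exchange_strong(&lk, &expect, 1))
                    return;                 /* uncontended fast path */
            /* Contended: mark the lock as having waiters and sleep in the
             * kernel, so a preempted lower-priority holder can still run. */
            while (atomic_exchange(&lk, 2) != 0)
                    syscall(SYS_futex, &lk, FUTEX_WAIT, 2, 0, 0, 0);
    }

    static void unlock(void)
    {
            if (atomic_exchange(&lk, 0) == 2)
                    syscall(SYS_futex, &lk, FUTEX_WAKE, 1, 0, 0, 0);
    }

Because the contended waiter blocks in FUTEX_WAIT rather than burning
its CPU slice, a preempted lower-priority holder can still be scheduled
and eventually release the lock.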
