|
Message-ID: <1407912210.4951.15.camel@eris.loria.fr>
Date: Wed, 13 Aug 2014 08:43:30 +0200
From: Jens Gustedt <jens.gustedt@...ia.fr>
To: musl@...ts.openwall.com
Subject: Re: Explaining cond var destroy [Re: C threads, v3.0]
On Tuesday, 12.08.2014, at 17:18 -0400, Rich Felker wrote:
> On Tue, Aug 12, 2014 at 09:09:21PM +0200, Jens Gustedt wrote:
> > Rich,
> > thanks a lot for looking into the code.
> >
> > On Tuesday, 12.08.2014, at 12:01 -0400, Rich Felker wrote:
> > > As far as I can tell, the only thing that's saving you from sending
> > > futex wakes after free is that you're just using spinlocks.
> >
> > No, I don't think so. These protect critical sections at the beginning
> > of the cnd_t calls. The cnd_*wait calls hold the mutex at that time
> > anyhow, so even if they were implemented with mutexes (an extra one
> > per cnd_t to protect the critical section), this wouldn't cause late
> > wakes, I think.
>
> I was talking about the unref-and-free code that's using spinlocks.
I don't understand: the unref-and-free code doesn't use spinlocks. It
just uses an atomic_fetch_sub of 1 to determine whether the caller is
the last user. If it is not, it does nothing and returns.
(This does suppose that atomic_fetch_sub is "lock-free" in the sense
of the C standard, which basically means that a signal handler can
never observe the operation in an intermediate state. All
architectures I know of have that property, but my knowledge is
limited.)
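To make that concrete, here is a minimal sketch of the pattern I mean;
the type and function names are made up for this mail, it is not the
code from the patch:

#include <stdatomic.h>
#include <stdlib.h>

struct cnd_impl {
        atomic_uint ref;        /* one count per thread inside a cnd_* call */
        /* ... futex word, waiter bookkeeping, ... */
};

static void cnd_unref(struct cnd_impl *c)
{
        /* atomic_fetch_sub returns the previous value, so exactly one
           thread observes 1 and knows it was the last user */
        if (atomic_fetch_sub_explicit(&c->ref, 1, memory_order_acq_rel) == 1)
                free(c);
        /* every other thread returns without touching *c again, and in
           particular without sending a futex wake on it */
}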
> If
> it were using mutexes that don't protect against making futex wake
> calls after the atomic unlock, a previous unref could send the wake
> after the final one freed the object. So in effect, if you use a mutex
> here, I think the wake-after-free issue has just been moved to a
> different object, not solved.
>
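(The window you describe would be the gap between releasing the lock
word and issuing the wake; roughly like this, with made-up names, not
musl's actual unlock path:

#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

struct obj {
        atomic_int lock;        /* 0 unlocked, 1 locked, 2 locked with waiters */
        /* ... reference count, user data, ... */
};

static void obj_unlock(struct obj *o)
{
        if (atomic_exchange_explicit(&o->lock, 0, memory_order_release) == 2)
                /* between the exchange above and the syscall below, another
                   thread may take the lock, drop the last reference and
                   free(o); the wake then targets already freed memory */
                syscall(SYS_futex, &o->lock, FUTEX_WAKE, 1, 0, 0, 0);
}
)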
> > > This is an
> > > extremely expensive solution: While contention is rare, as soon as you
> > > do hit contention, if there are many threads they all pile on and
> > > start spinning, and the time to obtain a lock (and cpu time/energy
> > > spent waiting) grows extremely high. And of course it becomes infinite
> > > if you have any threads of differing priorities and the low-priority
> > > thread has the lock...
> >
> > I think you dramatize a bit :)
>
> Perhaps. :)
>
> > It is very unlikely that a thread that reaches the critical section is
> > descheduled *during* that critical section. If it is, you are right
> > that the wait can be long. But that event is very unlikely, so the
> > average time inside the critical section is still short, with a
> > probability distribution that is a bit skewed because of the
> > outliers.
>
> Yes. The general pathology of spinlocks is that they give extremely
> high latency and cpu load in an extremely low probability worst-case.
>
> > (And then there is no concept of different scheduling priorities for
> > C threads; all of them are equal.)
>
> Indeed, but there's no reason these functions couldn't end up getting
> called from a POSIX program using a C11 library. This is the normal
> expected usage for mutexes (i.e. you're writing a library that needs
> to be thread-safe but you don't want to depend on POSIX -- in practice
> the calling application is unlikely to be using C11 thrd_create
> because it sucks :) and perhaps less likely but definitely not
> impossible for cond vars.
Hm, C threads are meant primarily as a portable and simple user-space
tool, and as a means of providing a model of parallelism as far as the
C standard is concerned. And that model is flat and has no hierarchy
among threads.
Jens
--
:: INRIA Nancy Grand Est ::: AlGorille ::: ICube/ICPS :::
:: ::::::::::::::: office Strasbourg : +33 368854536 ::
:: :::::::::::::::::::::: gsm France : +33 651400183 ::
:: ::::::::::::::: gsm international : +49 15737185122 ::
:: http://icube-icps.unistra.fr/index.php/Jens_Gustedt ::