Message-ID: <1409077839.8054.54.camel@eris.loria.fr>
Date: Tue, 26 Aug 2014 20:30:39 +0200
From: Jens Gustedt <jens.gustedt@...ia.fr>
To: musl@...ts.openwall.com
Subject: Re: Multi-threaded performance progress

On Tuesday, 26.08.2014, at 13:53 -0400, Rich Felker wrote:
> I should have bothered to check and see that there was a patch before
> responding...
> 
> On Tue, Aug 26, 2014 at 06:35:19PM +0200, Jens Gustedt wrote:
> > diff --git a/src/thread/pthread_cond_timedwait.c b/src/thread/pthread_cond_timedwait.c
> > index 2d192b0..136fa6a 100644
> > --- a/src/thread/pthread_cond_timedwait.c
> > +++ b/src/thread/pthread_cond_timedwait.c
> > @@ -42,12 +42,32 @@ static inline void lock(volatile int *l)
> >  	}
> >  }
> >  
> > +/* Avoid taking the lock if we know it isn't necessary. */
> > +static inline int lockRace(volatile int *l, int*volatile* notifier)
> > +{
> > +	int ret = 1;
> > +	if (!*notifier && (ret = a_cas(l, 0, 1))) {
> > +		a_cas(l, 1, 2);
> > +		do __wait(l, 0, 2, 1);
> > +		while (!*notifier && (ret = a_cas(l, 0, 2)));
> > +	}
> > +	return ret;
> > +}
> 
> This code was confusing at first, and technically *notifier is
> accessed without synchronization -- it may be written by one thread
> and read by another without any intervening barrier. I suspect it
> works, but I'd like to avoid it. But it does answer my question about
> how you were detecting the case of "don't need to lock".

Hm, I don't see the point of your remark; musl does such "atomic"
reads all over, no? There is no macro for an atomic load, only for an
atomic store, otherwise I would have used it. And note that *notifier
is volatile, so the load can't be optimized out.

Or do you mean that I should use an atomic store at the other end?
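
Just to make concrete what such a pairing would look like (this is
only an illustration in C11 terms, not musl code, whose atomics are
per-arch macros; notify_ptr here stands in for the waiter's ->notify
field):

#include <stdatomic.h>

/* Illustration only: a plain volatile read keeps the compiler from
 * caching the value, but by itself gives no ordering; an acquire load
 * paired with a release store on the signaler side does. */
static _Atomic(int *) notify_ptr;

static void signaler_publish(int *ref)
{
	/* the "atomic store at the other end" */
	atomic_store_explicit(&notify_ptr, ref, memory_order_release);
}

static int waiter_sees_notify(void)
{
	/* acquire load instead of a bare *notifier dereference */
	return atomic_load_explicit(&notify_ptr, memory_order_acquire) != 0;
}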

> Note that the performance of the code in which you're trying to avoid
> the lock does not matter in the slightest except when a race happens
> between a thread acting on cancellation or timeout and a signaler
> (since that's the only time it runs). I expect this is extremely rare,
> so unless we determine otherwise, I'd rather not add complexity here.

If we have a broadcaster working through a long list of waiters, this
might still happen often enough. And the "complexity" is hidden in the
execution pattern of the current version, where control and handling
of the list alternate between different threads, potentially as many
times as there are waiters in the list.
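
For readers skimming the patch, here is a rough standalone rendering
of the lockRace idea, using C11 atomics and a yield loop in place of
a_cas/__wait so that it compiles outside musl; the structure mirrors
the quoted function, but the waiting strategy is simplified and the
names are placeholders:

#include <stdatomic.h>
#include <sched.h>

/* 0 = unlocked, 1 = locked, 2 = locked with waiters (musl's usual
 * lock convention).  Returns 0 if the lock was actually taken,
 * nonzero if *notifier became set so the lock can be skipped. */
static int lock_race_sketch(atomic_int *l, _Atomic(int *) *notifier)
{
	int old = 0;

	if (atomic_load_explicit(notifier, memory_order_acquire))
		return 1;                /* signaler already owns the list */
	if (atomic_compare_exchange_strong(l, &old, 1))
		return 0;                /* got the lock uncontended */

	old = 1;
	atomic_compare_exchange_strong(l, &old, 2);  /* mark it contended */
	for (;;) {
		if (atomic_load_explicit(notifier, memory_order_acquire))
			return 1;        /* no need for the lock any more */
		old = 0;
		if (atomic_compare_exchange_strong(l, &old, 2))
			return 0;        /* acquired, in the contended state */
		sched_yield();           /* stands in for __wait()/futex */
	}
}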

> >  int pthread_cond_timedwait(pthread_cond_t *restrict c, pthread_mutex_t *restrict m, const struct timespec *restrict ts)
> >  {
> > -	struct waiter node = { .cond = c, .mutex = m };
> > +	struct waiter node = { .cond = c, .mutex = m, .state = WAITING, .barrier = 2 };
> 
> This may slightly pessimize process-shared cond vars, but seems ok.
> 
> > +static inline int cond_signal (struct waiter * p, int* ref)
> > +{
> > +	int ret = a_cas(&p->state, WAITING, SIGNALED);
> > +	if (ret != WAITING) {
> > +		ref[0]++;
> > +		p->notify = ref;
> > +		if (p->prev) p->prev->next = p->next;
> > +		if (p->next) p->next->prev = p->prev;
> > +		p->next = 0;
> > +		p->prev = 0;
> > +	}
> > +	return ret;
> > +}
> > +
> >  int __private_cond_signal(pthread_cond_t *c, int n)
> >  {
> > -	struct waiter *p, *first=0;
> > -	int ref = 0, cur;
> > +	struct waiter *p, *prev, *first=0;
> > +	int ref[2] = { 0 }, cur;
> >  
> > -	lock(&c->_c_lock);
> > -	for (p=c->_c_tail; n && p; p=p->prev) {
> > -		if (a_cas(&p->state, WAITING, SIGNALED) != WAITING) {
> > -			ref++;
> > -			p->notify = &ref;
> > +	if (n == 1) {
> > +		lock(&c->_c_lock);
> > +		for (p=c->_c_tail; p; p=prev) {
> > +			prev = p->prev;
> > +			if (!cond_signal(p, ref)) {
> > +				first=p;
> > +				p=prev;
> > +				first->prev = 0;
> > +				break;
> > +			}
> > +		}
> > +		/* Split the list, leaving any remainder on the cv. */
> > +		if (p) {
> > +			p->next = 0;
> >  		} else {
> > -			n--;
> > -			if (!first) first=p;
> > +			c->_c_head = 0;
> >  		}
> > -	}
> > -	/* Split the list, leaving any remainder on the cv. */
> > -	if (p) {
> > -		if (p->next) p->next->prev = 0;
> > -		p->next = 0;
> > +		c->_c_tail = p;
> > +		unlockRace(&c->_c_lock, ref[0]);
> >  	} else {
> > -		c->_c_head = 0;
> > +		lock(&c->_c_lock);
> > +                struct waiter * head = c->_c_head;
> > +		if (head) {
> > +			/* Signal head and tail first to reduce possible
> > +			 * races for the cv to the beginning of the
> > +			 * processing. */
> > +			int headrace = cond_signal(head, ref);
> > +			struct waiter * tail = c->_c_tail;
> > +			p=tail->prev;
> > +			if (tail != head) {
> > +				if (!cond_signal(tail, ref)) first=tail;
> > +				else while (p != head) {
> > +					prev = p->prev;
> > +					if (!cond_signal(p, ref)) {
> > +						first=p;
> > +						p=prev;
> > +						break;
> > +					}
> > +					p=prev;
> > +				}
> > +			}
> > +			if (!first && !headrace) first = head;
> > +			c->_c_head = 0;
> > +			c->_c_tail = 0;
> > +			/* Now process the inner part of the list. */
> > +			if (p) {
> > +				while (p != head) {
> > +					prev = p->prev;
> > +					cond_signal(p, ref);
> > +					p=prev;
> > +				}
> > +			}
> > +		}
> > +		unlockRace(&c->_c_lock, ref[0]);
> >  	}
> > -	c->_c_tail = p;
> > -	unlock(&c->_c_lock);
> >  
> >  	/* Wait for any waiters in the LEAVING state to remove
> >  	 * themselves from the list before returning or allowing
> >  	 * signaled threads to proceed. */
> > -	while ((cur = ref)) __wait(&ref, 0, cur, 1);
> > +	while ((cur = ref[0])) __wait(&ref[0], &ref[1], cur, 1);
> >  
> >  	/* Allow first signaled waiter, if any, to proceed. */
> >  	if (first) unlock(&first->barrier);
> 
> This is sufficiently more complex that I think we'd need evidence that
> the races actually adversely affect performance for it to be
> justified.

I can imagine that, seen as it is here, this looks invasive, but it
isn't really. It is just the merge of about 10 patches that more or
less rewrite the code into something functionally equivalent (well,
with some obvious exceptions). I found posting all those patches too
much for the list; was I wrong?

> And if it is justified, the code should probably be moved
> to pthread_cond_signal and pthread_cond_broadcast rather than 2 cases
> in one function here. The only reason both cases are covered in one
> function now is that they're naturally trivial special cases of the
> exact same code.

Sure, that could and perhaps should be done, although working on just
one file and being able to keep a bunch of helper functions as static
inline was quite convenient.
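
For what it's worth, one way to keep that convenience while splitting
the files would be a small shared internal header. The sketch below
only mirrors the names from my patch; the placeholder declarations
stand in for the real struct waiter, state constants and a_cas from
the cond implementation and musl's atomic.h, so take it as an
illustration of the layout, not as actual musl code:

/* cond_impl.h -- hypothetical shared header, trimmed to what the
 * helper needs. */
enum { WAITING, SIGNALED, LEAVING };   /* waiter states, as in the patch */

struct waiter {
	struct waiter *prev, *next;
	volatile int state;
	int *notify;
	/* ... remaining fields from pthread_cond_timedwait.c ... */
};

int a_cas(volatile int *, int, int);   /* stand-in for musl's atomic.h */

static inline int cond_signal_one(struct waiter *p, int *ref)
{
	int ret = a_cas(&p->state, WAITING, SIGNALED);
	if (ret != WAITING) {
		/* waiter is already leaving: count it and unlink it */
		ref[0]++;
		p->notify = ref;
		if (p->prev) p->prev->next = p->next;
		if (p->next) p->next->prev = p->prev;
		p->next = p->prev = 0;
	}
	return ret;
}

/* pthread_cond_signal.c would then keep only the n==1 walk from the
 * tail, and pthread_cond_broadcast.c the head/tail-first walk, both
 * including this header instead of sharing one
 * __private_cond_signal(c, n). */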

Jens

-- 
:: INRIA Nancy Grand Est ::: AlGorille ::: ICube/ICPS :::
:: ::::::::::::::: office Strasbourg : +33 368854536   ::
:: :::::::::::::::::::::: gsm France : +33 651400183   ::
:: ::::::::::::::: gsm international : +49 15737185122 ::
:: http://icube-icps.unistra.fr/index.php/Jens_Gustedt ::


