Message-ID: <20150830170239.GM7833@brightrain.aerifal.cx>
Date: Sun, 30 Aug 2015 13:02:39 -0400
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: [PATCH 1/2] let them spin

On Sun, Aug 30, 2015 at 10:41:53AM +0200, Jens Gustedt wrote:
> > > So the difference isn't dramatic, just one order of magnitude and
> > > everybody gets his chance. These chances are not equal, sure, but
> > > NEVER in capitals is certainly a big word.
> > 
> > Try this: on a machine with at least 3 physical cores, 3 threads
> > hammer on the same lock, counting the number of times they succeed in
> > taking it. Once any one thread has taken it at least 10 million times
> > or so, stop and print the counts. With your spin strategy I would
> > expect to see 2 threads with counts near 10 million and one thread
> > with a count in the hundreds or less, maybe even a single-digit count.
> > With the current behavior (never spinning if there's a waiter) I would
> > expect all 3 counts to be similar.
> 
> The setting that you describe is really a pathological one, where the
> threads don't do any work between taking the lock and releasing it. Do
> I understand that correctly?

If you're using locks to implement fake greater-than-wordsize atomics,
then it's the normal case, not a pathological one. You effectively
have things like:

	_Atomic long double x;  /* wider than a native atomic word */

	__lock(global_lock);
	x++;                    /* the critical section is just this RMW */
	__unlock(global_lock);

For a more realistic example, consider atomic CAS on a linked-list
prev/next pointer pair.
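
To make that concrete, here's a rough sketch of what such an emulated
double-word CAS could look like (node_cas is a hypothetical helper,
not anything in the tree; __lock/__unlock and global_lock as above):

	struct node { struct node *prev, *next; };

	/* sketch: emulate a 2-pointer-wide CAS with the global lock */
	static int node_cas(struct node *n,
	                    struct node *oldprev, struct node *oldnext,
	                    struct node *newprev, struct node *newnext)
	{
		int ok = 0;
		__lock(global_lock);
		if (n->prev == oldprev && n->next == oldnext) {
			n->prev = newprev;
			n->next = newnext;
			ok = 1;
		}
		__unlock(global_lock);
		return ok;
	}

Again the lock is held only for a few loads and stores, so
back-to-back callers contend on it exactly like the x++ case.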

Rich
