Message-ID: <20140804145050.GV1674@brightrain.aerifal.cx>
Date: Mon, 4 Aug 2014 10:50:50 -0400
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: C threads, v3.0

On Mon, Aug 04, 2014 at 11:30:03AM +0200, Jens Gustedt wrote:
> I'd like to discuss two issues before going further. The first is of
> minor concern. In many places of that code _m_lock uses a flag, namely
> 0x40000000. I only found places where this bit is read, but not where
> it is set. Did I overlook something or is this a leftover?

It's set by the kernel when a thread dies while owning a robust mutex.
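For reference, that flag matches the kernel's FUTEX_OWNER_DIED bit from <linux/futex.h>. A minimal sketch of decoding a robust-futex lock word follows; the constants are copied from the kernel header, but the helper names are invented for illustration:

```c
#include <stdint.h>

/* Constants from <linux/futex.h> (copied here for illustration). The
   kernel ORs FUTEX_OWNER_DIED into the lock word of a robust mutex
   when its owner exits without unlocking it. */
#define FUTEX_TID_MASK   0x3fffffff
#define FUTEX_OWNER_DIED 0x40000000
#define FUTEX_WAITERS    0x80000000

/* Hypothetical helpers for decoding a lock word. */
static int owner_died(uint32_t word)     { return !!(word & FUTEX_OWNER_DIED); }
static uint32_t owner_tid(uint32_t word) { return word & FUTEX_TID_MASK; }
```

A lock implementation that sees the 0x40000000 bit set knows the previous owner died and can hand the mutex to the next locker in an "owner died" state.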

> The second issue concerns tsd. C threads can't be fine tuned with
> attributes, in particular they receive exactly as much stack space as
> we give them. They are supposed to implement an economic thread model
> that uses up as few resources as possible. In the current
> implementation tsd takes up a medium sized part of the stack
> (__pthread_tsd_size) if any of the application or linked libraries use
> pthread_key_create somewhere.

512 bytes on 32-bit archs, and 1k on 64-bit archs, is certainly
nonzero but I wouldn't call it "a medium sized part of the stack" when
the default stack is something like 80k.

> My impression is that pthread_[gs]etspecific is already rarely used
> and that tss_[gs]et will be even less. C threads also introduce
> _Thread_local, a much more convenient interface as long as you don't
> have destructors.
> 
> I don't think that this policy is ideal for C threads, but it could
> perhaps be revised for pthreads, too. My idea is the
> following. (version for C threads with minimal impact on pthreads)
> 
>  - don't assign a tsd in thrd_create
> 
>  - change pthread_getspecific, pthread_setspecific, tss_get, and
>    __pthread_tsd_run_dtors such that they check whether tsd is
>    null. This is a trivial and inexpensive change.
>    pthread_setspecific may return ENOMEM or EINVAL in such cases. The
>    getters should just return 0. __pthread_tsd_run_dtors would then
>    obviously just do nothing.

EINVAL is definitely not permitted here since ENOMEM is required by
POSIX for this case, if it happens.
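A null-tolerant getter/setter along the lines proposed, with the POSIX-mandated ENOMEM error, might look like the sketch below (hypothetical names and a stand-in for PTHREAD_KEYS_MAX; not musl's actual implementation):

```c
#include <errno.h>
#include <stddef.h>

#define KEYS_MAX 128  /* stand-in for PTHREAD_KEYS_MAX */

/* Hypothetical per-thread slot table; NULL until first allocated. */
static _Thread_local void **cur_tsd;

static void *my_getspecific(unsigned k)
{
    if (!cur_tsd) return 0;       /* no table: every value is implicitly null */
    return cur_tsd[k];
}

static int my_setspecific(unsigned k, const void *v)
{
    if (!cur_tsd) return ENOMEM;  /* POSIX reserves ENOMEM here, not EINVAL */
    cur_tsd[k] = (void *)v;
    return 0;
}
```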

>  - change tss_set to mmap the table if it is absent and the function
>    is called with a non-null pointer argument (i.e. if it really has
>    to store something). C11 allows this function to fail, so we could
>    return thrd_error if mmap fails.
> 
>  - change thrd_exit to check for tsd and to unmap it at the end if
>    necessary
> 
> For thrd_create, one consequence would be that the main thread no
> longer has to be treated specially in this respect. The additional
> mmap and munmap in tss_set would only involve whole pages. This
> should add minimal runtime overhead, and only for threads that call
> tss_set with a non-null pointer to store.
> 
> Please let me know what you think.

This is not an acceptable implementation (at least from the standpoint
of musl's design principles); it sacrifices fail-safe behavior
(guaranteeing non-failure of an interface that's usually impossible to
deal with failure from, and for which there's no fundamental reason it
should be able to fail) to save a tiny amount of memory.

For static linking, I think it's completely acceptable to assume that,
if tsd functions are linked, they're going to be used. Unfortunately
this isn't possible for dynamic linking, and I had considered an
alternate implementation for dynamic linking, based on having the
first call to pthread_key_create simulate loading of a shared library
containing a TLS array of PTHREAD_KEYS_MAX pointers. This would make
success or failure detectable early, at a point where there is
inherently a possibility of failure (too many keys), rather than where
failure is just a consequence of a bad implementation. But the
complexity of doing this, and the added unnecessary failure case
(failure before PTHREAD_KEYS_MAX is hit, which really shouldn't
happen), seemed unjustified for a 0.5-1k savings.
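The design point here -- confining failure to pthread_key_create, where key exhaustion is already a legitimate error -- can be illustrated with a toy key allocator (the names and the limit are invented for illustration):

```c
#include <errno.h>

#define KEYS_MAX 8   /* small stand-in for PTHREAD_KEYS_MAX */

static unsigned keys_used;

/* Toy allocator: the only failure mode is inherent (running out of
   keys); the get/set paths never need to report allocation failure. */
static int my_key_create(unsigned *k)
{
    if (keys_used >= KEYS_MAX) return EAGAIN; /* POSIX key-exhaustion error */
    *k = keys_used++;
    return 0;
}
```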

Rich
