Message-ID: <20190306155421.GR23599@brightrain.aerifal.cx>
Date: Wed, 6 Mar 2019 10:54:21 -0500
From: Rich Felker <dalias@...c.org>
To: Florian Weimer <fweimer@...hat.com>
Cc: musl@...ts.openwall.com
Subject: Re: sigaltstack for implementation-internal signals?

On Wed, Mar 06, 2019 at 01:56:02PM +0100, Florian Weimer wrote:
> * Rich Felker:
> 
> > If we add unblockable, implementation-internal signals that are
> > flagged SA_ONSTACK, however, this breaks; now even if an application
> > has taken the "proper precautions", they can be delivered in a state
> > where the alt stack is nonempty but the stack pointer doesn't point
> > into it, thereby causing it to get clobbered.
> 
> There's also the matter of applications which do not use signal handlers
> at all (and thus never invoke sigaltstack) and have really small stacks,
> or use the stack pointer register for something else.  Is either of
> those supported?

In practice, mostly. Applications doing that need to be aware that a
few operations may use signals internally, and must avoid those
operations. Also, most nommu platforms require that the stack always
be valid, since interrupts are delivered on the stack they interrupt,
so code that could potentially be used on such platforms clearly
can't do this.

Formally, I'd say no, it shouldn't be supported, but I'm open to
reasons why it should.

> I think it is not clear whether a libc implementation may generate
> sporadic signals *at all* to support implementation needs.

It certainly can if it defines, as part of its ABI, a contract that
the stack pointer must always be valid and have at least X bytes of
space available. Whether this is a good idea, or compatible with
existing informal expectations, is the question.

> Does musl use asynchronous implementation signals?  For glibc, we would
> prefer synchronous delivery, that is, the handler runs before the
> signal-generating system call returns.

The __synccall mechanism, used for multithreaded set*id, setrlimit,
etc., is fully synchronous: it proceeds in multiple steps, with only
one thread doing anything at a time.
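
Very roughly, the shape is like this (just a sketch, not musl's
actual code: SIGSYNCCALL, broadcast_setuid, and the idea that the
caller already has a list of thread ids are all made up for
illustration):

#include <signal.h>
#include <semaphore.h>
#include <unistd.h>
#include <sys/syscall.h>

#define SIGSYNCCALL SIGRTMIN	/* made-up internal signal */

static sem_t step_done;
static uid_t target_uid;

static void synccall_handler(int sig)
{
	(void)sig;
	/* Each thread applies the id change to itself with the
	 * thread-scoped kernel syscall... */
	syscall(SYS_setuid, target_uid);
	/* ...then lets the caller move on. sem_post is
	 * async-signal-safe. */
	sem_post(&step_done);
}

static void broadcast_setuid(pid_t *tids, int ntids, uid_t uid)
{
	struct sigaction sa = { .sa_handler = synccall_handler };
	sigfillset(&sa.sa_mask);
	sigaction(SIGSYNCCALL, &sa, 0);

	target_uid = uid;
	sem_init(&step_done, 0, 0);

	/* Fully synchronous: signal one thread, wait for it to
	 * finish, and only then signal the next. */
	for (int i = 0; i < ntids; i++) {
		syscall(SYS_tgkill, getpid(), tids[i], SIGSYNCCALL);
		while (sem_wait(&step_done));	/* restart on EINTR */
	}
}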

If the same mechanism needs to be used as a fallback implementation
of SYS_membarrier, it's less tightly synchronized, so as not to have
such awful performance: all the threads can run concurrently, but
the caller waits for them all to post a semaphore before moving on.
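
The looser variant would look something like this (again only a
hypothetical sketch, reusing the includes and SIGSYNCCALL from the
sketch above; the point is just that all the handlers may run at
once and the caller only counts posts):

static sem_t seen;

static void membarrier_handler(int sig)
{
	(void)sig;
	/* Nothing to do: entering and leaving the handler is enough
	 * to order memory on the target thread. */
	sem_post(&seen);
}

static void membarrier_fallback(pid_t *tids, int ntids)
{
	struct sigaction sa = { .sa_handler = membarrier_handler };
	sigfillset(&sa.sa_mask);
	sigaction(SIGSYNCCALL, &sa, 0);

	sem_init(&seen, 0, 0);

	/* Kick every thread up front; the handlers may all run
	 * concurrently. */
	for (int i = 0; i < ntids; i++)
		syscall(SYS_tgkill, getpid(), tids[i], SIGSYNCCALL);

	/* Then wait for all of them, in any order. */
	for (int i = 0; i < ntids; i++)
		while (sem_wait(&seen));
}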

For pthread_cancel, it's asynchronous. The signal can end up blocked
if the cancellation signal handler has already run and determined
that it should not act on cancellation (because cancellation is
disabled, or the thread is not at a cancellation point). In that
case, the handler re-raises the signal and leaves it pending, so
that, if the determination was made while the thread was executing a
signal handler, the decision can be re-evaluated when that handler
returns (possibly into restarting a blocking syscall that's a
cancellation point). I don't see any easy way to make this
synchronous with respect to the thread calling pthread_cancel, but
maybe there is one, by breaking it down into cases and using more
than one signal.
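
The non-acting path in the handler looks roughly like this (a sketch
only; cancel_disabled and at_cancel_point are stand-ins for the real
per-thread state and the PC-range check against the cancellable
syscall region, not musl's actual internals):

#include <signal.h>
#include <ucontext.h>
#include <unistd.h>
#include <sys/syscall.h>

static _Atomic int cancel_disabled;	/* stand-in for per-thread state */

/* Stand-in for the real check, which compares the interrupted
 * program counter against the cancellable-syscall code region. */
static int at_cancel_point(ucontext_t *uc)
{
	(void)uc;
	return 0;
}

static void cancel_handler(int sig, siginfo_t *si, void *ctx)
{
	ucontext_t *uc = ctx;
	(void)si;

	if (!cancel_disabled && at_cancel_point(uc)) {
		/* act on cancellation here */
		return;
	}

	/* Not acting now: arrange for the signal to stay blocked
	 * when this handler returns, and re-raise it so it is left
	 * pending. If we interrupted another signal handler, the
	 * pending signal is delivered again once that handler's old
	 * mask is restored, and the decision is re-evaluated then. */
	sigaddset(&uc->uc_sigmask, sig);
	syscall(SYS_tkill, (pid_t)syscall(SYS_gettid), sig);
}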

> This makes me wonder if we
> should try to get the kernel to provide us a system call which allows us
> to run code on a different thread, with signals disabled, but with the
> caller's stack (from the original thread).  I think this would address
> issues caused by strange stack pointer values in the target thread.

Yes, this would be a very nice feature. Even nicer would be new set*id
and setrlimit syscalls that honor POSIX semantics and affect the whole
process, so that none of this hackery is necessary.

> The Boehm-Demers-Weiser garbage collector would probably benefit from
> that as well.

Yes, I think so.

Rich
