Message-ID: <20201029142823.GN534@brightrain.aerifal.cx>
Date: Thu, 29 Oct 2020 10:28:23 -0400
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: More thoughts on wrapping signal handling

On Thu, Oct 29, 2020 at 05:18:20PM +0300, Alexander Monakov wrote:
> On Thu, 29 Oct 2020, Florian Weimer wrote:
> 
> > * Alexander Monakov:
> > 
> > > On Thu, 29 Oct 2020, Alexander Monakov wrote:
> > >
> > >> On Thu, 29 Oct 2020, Rich Felker wrote:
> > >> 
> > >> > Yes, I kinda hand-waved over this with the word "call", which I
> > >> > thought about annotating with (*). In the case of SA_ONSTACK you need
> > >> > a primitive to "call on new stack", and while the ucontext is mostly
> > >> > not meaningful/inspectable to the signal handler (because it's
> > >> > interrupting libc code), the saved signal mask is. You can have the
> > >> > caller restore it (in place of SYS_[rt_]sigreturn), but the natural
> > >> > common solution to all of these needs is having a sort of makecontext.
> > >> 
> > >> Alternatively you could re-raise the signal to have the kernel re-deliver
> > >> it with the correctly regenerated ucontext (and on the right stack)?
> > >> Would that be undesirable for some reason?
> > >
> > > Ah, because there's no way to propagate siginfo struct. Sorry :)
> > 
> > Yes, and that's why I think copying it into TLS space will not work,
> > either.
> 
> Actually I regret rushing that email: clearly as we are talking about wrapped
> signal handlers, re-raising would call the wrapper, which would be perfectly
> capable of substituting saved siginfo.
> 
> So I don't think propagating siginfo is more complicated with this re-raising
> approach.
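
For illustration only, a minimal sketch of the wrapper-based re-raise
idea might look like the following. This is not musl code: the names
(wrapper, app_handler) and the single per-thread save slot are
assumptions, and a real implementation would need a slot per signal
number plus care around nesting and realtime signals. The point is
just that the siginfo is the only thing that has to be carried across
the re-raise by hand; the ucontext and the stack come back from the
kernel on the second delivery.

#include <signal.h>

/* Hypothetical sketch, not musl's implementation. One per-thread save
 * slot for brevity; a real wrapper would need one per signal number. */
static _Thread_local siginfo_t saved_si;
static _Thread_local int have_saved_si;

/* The application's handler as registered through the wrapping layer
 * (hypothetical). */
static void (*app_handler)(int, siginfo_t *, void *);

static void wrapper(int sig, siginfo_t *si, void *uc)
{
    if (!have_saved_si) {
        /* First delivery: save the siginfo and re-raise. sig is
         * blocked here, so it becomes pending and is delivered again
         * when this handler returns and the old mask is restored,
         * with a kernel-generated ucontext for the originally
         * interrupted code (and on the right stack). */
        saved_si = *si;
        have_saved_si = 1;
        raise(sig);
        return;
    }

    /* Re-delivery: substitute the siginfo saved the first time. */
    have_saved_si = 0;
    app_handler(sig, &saved_si, uc);
}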

The re-raising is problematic because of how it interacts with signal
queueing and additional pending signals, I think. You might be able to
make that transparent, but it's at least slightly nontrivial, even just
figuring out how to get the signals handled in the right order if
another one is already pending when you want to re-raise. If you just
unblock and try to handle both from the same kernel-invoked signal
handler, you'll miss the second if the first one doesn't return
normally. And if you try to re-raise to get the second one, you just
push the issue back again, possibly arbitrarily many times. Maybe this
works, but it seems messy...
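
To make the ordering hazard concrete, here is a speculative sketch of
the "unblock and handle both from the same kernel-invoked handler"
variant described above, reusing the hypothetical wrapper/app_handler
names from the earlier sketch and assuming other signals were kept
blocked so that a second one sits pending. The comment marks where the
second signal gets lost if the first handler does not return normally.

#include <signal.h>
#include <time.h>

/* Same hypothetical registration slot as in the sketch above. */
static void (*app_handler)(int, siginfo_t *, void *);

static void wrapper(int sig, siginfo_t *si, void *uc)
{
    /* Try to dequeue any other signal that is already pending so both
     * can be handled from this one kernel-invoked handler. */
    sigset_t all;
    sigfillset(&all);
    siginfo_t second;
    struct timespec zero = { 0, 0 };
    int second_sig = sigtimedwait(&all, &second, &zero);

    /* If this call exits non-locally (siglongjmp, etc.) instead of
     * returning, the signal already dequeued into `second` is simply
     * lost; that is the problem described above. */
    app_handler(sig, si, uc);

    if (second_sig > 0)
        app_handler(second_sig, &second, uc);
}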

Rich
