Message-ID: <YZjnREFGhEO9pX6O@elver.google.com>
Date: Sat, 20 Nov 2021 13:17:08 +0100
From: Marco Elver <elver@...gle.com>
To: Kees Cook <keescook@...omium.org>
Cc: Steven Rostedt <rostedt@...dmis.org>,
	Lukas Bulwahn <lukas.bulwahn@...il.com>,
	Alexander Popov <alex.popov@...ux.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Jonathan Corbet <corbet@....net>,
	Paul McKenney <paulmck@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <peterz@...radead.org>,
	Joerg Roedel <jroedel@...e.de>, Maciej Rozycki <macro@...am.me.uk>,
	Muchun Song <songmuchun@...edance.com>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	Robin Murphy <robin.murphy@....com>,
	Randy Dunlap <rdunlap@...radead.org>,
	Lu Baolu <baolu.lu@...ux.intel.com>, Petr Mladek <pmladek@...e.com>,
	Luis Chamberlain <mcgrof@...nel.org>, Wei Liu <wl@....org>,
	John Ogness <john.ogness@...utronix.de>,
	Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
	Alexey Kardashevskiy <aik@...abs.ru>,
	Christophe Leroy <christophe.leroy@...roup.eu>,
	Jann Horn <jannh@...gle.com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Mark Rutland <mark.rutland@....com>,
	Andy Lutomirski <luto@...nel.org>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	Will Deacon <will@...nel.org>, Ard Biesheuvel <ardb@...nel.org>,
	Laura Abbott <labbott@...nel.org>,
	David S Miller <davem@...emloft.net>,
	Borislav Petkov <bp@...en8.de>, Arnd Bergmann <arnd@...db.de>,
	Andrew Scull <ascull@...gle.com>, Marc Zyngier <maz@...nel.org>,
	Jessica Yu <jeyu@...nel.org>, Iurii Zaikin <yzaikin@...gle.com>,
	Rasmus Villemoes <linux@...musvillemoes.dk>,
	Wang Qing <wangqing@...o.com>, Mel Gorman <mgorman@...e.de>,
	Mauro Carvalho Chehab <mchehab+huawei@...nel.org>,
	Andrew Klychkov <andrew.a.klychkov@...il.com>,
	Mathieu Chouquet-Stringer <me@...hieu.digital>,
	Daniel Borkmann <daniel@...earbox.net>,
	Stephen Kitt <steve@....org>, Stephen Boyd <sboyd@...nel.org>,
	Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
	Mike Rapoport <rppt@...nel.org>,
	Bjorn Andersson <bjorn.andersson@...aro.org>,
	Kernel Hardening <kernel-hardening@...ts.openwall.com>,
	linux-hardening@...r.kernel.org,
	"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
	linux-arch <linux-arch@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>, notify@...nel.org,
	main@...ts.elisa.tech, safety-architecture@...ts.elisa.tech,
	devel@...ts.elisa.tech, Shuah Khan <shuah@...nel.org>
Subject: Re: [PATCH v2 0/2] Introduce the pkill_on_warn parameter

On Mon, Nov 15, 2021 at 02:06PM -0800, Kees Cook wrote:
[...]
> However, that's a lot to implement when Marco's tracing suggestion might
> be sufficient and policy could be entirely implemented in userspace. It
> could be as simple as this (totally untested):
[...]
> 
> Marco, is this the full version of monitoring this from the userspace
> side?

Sorry, I completely missed this email (I somehow wasn't Cc'd... I just
saw it by chance while re-reading this thread).

I've sent a patch to add tracing of WARN:

	https://lkml.kernel.org/r/20211115085630.1756817-1-elver@google.com

Not sure how useful tracing BUG is, but I have no objection to it also
being traced if you think it's worthwhile.

(I added it to kernel/panic.c, because lib/bug.c requires
CONFIG_GENERIC_BUG.)

> 	perf record -e error_report:error_report_end

I think userspace would want something other than the perf tool to handle
it, of course.  There are several options:

	1. Open the trace pipe to be notified (/sys/kernel/tracing/trace_pipe).
	   The trace output already includes the pid (see the sketch
	   after this list).

	2. As you suggest, use perf events globally (but the handling
	   would be done by some system process).

	3. As of 5.13 there's a new perf feature to synchronously
	   SIGTRAP the exact task where an event occurred (see
	   perf_event_attr::sigtrap). This would very closely mimic
	   pkill_on_warn (because the SIGTRAP is synchronous), but lets
	   the process being SIGTRAP'd decide what to do. Not sure how to
	   deploy this though, because a) only the root user can create
	   this perf event (because exclude_kernel=0), and b) sigtrap perf
	   events deliberately won't propagate beyond an exec
	   (remove_on_exec=1 is required if sigtrap=1), because there is
	   no guarantee the exec'd process has a suitable SIGTRAP handler.
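
For #1, a minimal sketch of such a watcher is below. It assumes tracefs
is mounted at /sys/kernel/tracing, that the error_report:error_report_end
tracepoint has been enabled (e.g. by writing '1' to
events/error_report/error_report_end/enable), that the watcher has
permission to read tracefs, and the pid parsing of the default trace
line format is illustrative only:

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	int main(void)
	{
		FILE *pipe = fopen("/sys/kernel/tracing/trace_pipe", "r");
		char line[4096];

		if (!pipe) {
			perror("fopen trace_pipe");
			return 1;
		}
		/* fgets() blocks until the kernel emits trace events. */
		while (fgets(line, sizeof(line), pipe)) {
			char *event = strstr(line, "error_report_end:");
			char *dash;
			long pid;

			if (!event)
				continue;
			/* First field is "<comm>-<pid>"; find its last '-'. */
			*event = '\0';
			dash = strrchr(line, '-');
			if (!dash)
				continue;
			pid = strtol(dash + 1, NULL, 10);
			/* Apply policy here, e.g. kill the task. */
			printf("error_report_end from pid %ld\n", pid);
		}
		fclose(pipe);
		return 0;
	}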

I think #3 is hard to deploy right, but below is an example program I
played with.

Thanks,
-- Marco

------ >8 ------

#define _GNU_SOURCE
#include <assert.h>
#include <stdio.h>
#include <linux/perf_event.h>
#include <signal.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static void sigtrap_handler(int signum, siginfo_t *info, void *ucontext)
{
	// FIXME: check event is error_report_end
	printf("Kernel error in this task!\n");
}
static void generate_warning(void)
{
	/* ... do something to generate a warning here, e.g. via a debug
	 * interface such as LKDTM's WARNING crashtype, if available ... */
}
int main()
{
	struct perf_event_attr attr = {
		.type		= PERF_TYPE_TRACEPOINT,
		.size		= sizeof(attr),
		.config		= 189, // FIXME: error_report_end
		.sample_period	= 1,
		.inherit	= 1, /* Children inherit events ... */
		.remove_on_exec = 1, /* Required by sigtrap. */
		.sigtrap	= 1, /* Request synchronous SIGTRAP on event. */
		.sig_data	= 189, /* FIXME: use to identify error_report_end */
	};
	struct sigaction action = {};
	struct sigaction oldact;
	int fd;
	action.sa_flags = SA_SIGINFO | SA_NODEFER;
	action.sa_sigaction = sigtrap_handler;
	sigemptyset(&action.sa_mask);
	assert(sigaction(SIGTRAP, &action, &oldact) == 0);
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, PERF_FLAG_FD_CLOEXEC);
	assert(fd != -1);
	sleep(5); /* Try to generate a warning from elsewhere, nothing will be printed. */
	generate_warning(); /* Warning from this process. */
	sigaction(SIGTRAP, &oldact, NULL);
	close(fd);
	return 0;
}
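
(To run the example: the numeric tracepoint id used for .config and
.sig_data above is system-specific; assuming tracefs is mounted at
/sys/kernel/tracing, it can be read from
events/error_report/error_report_end/id.)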
