Message-ID: <20161028095301.GB5806@leverpostej>
Date: Fri, 28 Oct 2016 10:53:01 +0100
From: Mark Rutland <mark.rutland@....com>
To: kernel-hardening@...ts.openwall.com
Cc: Vegard Nossum <vegard.nossum@...il.com>,
	Peter Zijlstra <peterz@...radead.org>, Pavel Machek <pavel@....cz>,
	Kees Cook <keescook@...omium.org>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	kernel list <linux-kernel@...r.kernel.org>,
	Ingo Molnar <mingo@...hat.com>,
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Subject: Re: Re: rowhammer protection [was Re: Getting
 interrupt every million cache misses]

On Fri, Oct 28, 2016 at 11:35:47AM +0200, Ingo Molnar wrote:
> 
> * Vegard Nossum <vegard.nossum@...il.com> wrote:
> 
> > Would it make sense to sample the counter on context switch, do some
> > accounting on a per-task cache miss counter, and slow down just the
> > single task(s) with a too high cache miss rate? That way there's no
> > global slowdown (which I assume would be the case here). The task's
> > slice of CPU would have to be taken into account because otherwise you
> > could have multiple cooperating tasks that each escape the limit but
> > taken together go above it.
> 
> Attackers could work around this by splitting the rowhammer workload between 
> multiple threads/processes.

With the proposed approach, they could split across multiple CPUs
instead, no?

... or was that covered in a prior thread?
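For reference, here is a userspace sketch of the per-task accounting idea
(not the kernel-side context-switch hook being discussed): it counts cache
misses for a single task via perf_event_open() and compares the observed
rate against a purely illustrative threshold. The threshold value and the
"slow down" action are placeholders; an actual mitigation would need to
live in the scheduler / perf paths and would need real tuning.

/*
 * Sketch only: per-task cache-miss rate check from userspace.
 * The 1M misses/sec budget below is illustrative, not a proposal.
 */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(int argc, char **argv)
{
	struct perf_event_attr attr;
	pid_t pid = argc > 1 ? atoi(argv[1]) : 0;	/* 0 = this task */
	uint64_t misses;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HARDWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_HW_CACHE_MISSES;
	attr.disabled = 1;
	attr.exclude_kernel = 1;

	/* Follow this one task; cpu == -1 means "whichever CPU it runs on". */
	fd = perf_event_open(&attr, pid, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	sleep(1);				/* sample over one second */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	if (read(fd, &misses, sizeof(misses)) != sizeof(misses))
		return 1;

	printf("cache misses/sec: %llu\n", (unsigned long long)misses);

	/* Illustrative per-task budget only. */
	if (misses > 1000000ULL)
		printf("task %d exceeds the per-task miss budget\n", pid);

	return 0;
}

Note that because the counter follows one task, cooperating tasks (or the
same workload split across CPUs) each stay under the budget, which is
exactly the gap raised above.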

Thanks,
Mark.
