Message-ID: <57F52B03.8070300@intel.com>
Date: Wed, 5 Oct 2016 09:32:03 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: kernel-hardening@...ts.openwall.com
Cc: keescook@...omium.org, Elena Reshetova <elena.reshetova@...el.com>,
 Hans Liljestrand <ishkamiel@...il.com>, David Windsor <dwindsor@...il.com>
Subject: Re: [RFC PATCH 12/13] x86: x86 implementation for
 HARDENED_ATOMIC

On 10/05/2016 09:18 AM, Jann Horn wrote:
> 1. Pipeline flushes because of branch prediction failures caused by
>    more-or-less random cmpxchg retries? Pipeline flushes are pretty
>    expensive, right?
> 2. Repeated back-and-forth bouncing of the cacheline because an increment
>    via cmpxchg needs at least two accesses instead of one, and the
>    cacheline could be "stolen" by the other thread between the READ_ONCE
>    and the cmpxchg.
> 3. Simply the cost of retrying if the value has changed in the meantime.
> 4. Maybe if two CPUs try increments at the same time, with exactly the
>    same timing, they get stuck in a tiny livelock where every cmpxchg
>    fails because the value was just updated by the other core? And then
>    something slightly disturbs the timing (interrupt / clock speed
>    change / ...), allowing one task to win the race?
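
For reference, the pattern in question looks roughly like the sketch
below (illustrative only: it assumes a cmpxchg-based increment with a
saturation check, and the function name and the exact check are made
up rather than taken from the patch):

	#include <linux/atomic.h>
	#include <linux/compiler.h>
	#include <linux/kernel.h>	/* INT_MAX */

	/*
	 * A plain atomic increment on x86 is a single locked RMW
	 * (lock incl, or lock xadd when the result is needed).  The
	 * checked variant below needs a separate load plus a cmpxchg,
	 * and loops whenever another CPU updated the counter in
	 * between, which is where points 2-4 above come from.  The
	 * retry branch is also hard to predict, which feeds point 1.
	 */
	static inline void checked_atomic_inc(atomic_t *v)
	{
		int old, new;

		do {
			old = READ_ONCE(v->counter);	/* access #1 */
			if (unlikely(old == INT_MAX))	/* refuse to wrap */
				return;
			new = old + 1;
			/*
			 * access #2: fails and retries if the cacheline
			 * was stolen after the read above.
			 */
		} while (cmpxchg(&v->counter, old, new) != old);
	}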

I can speculate about it, but I don't know for sure.  The topdown tool
from pmu-tools is usually a great way to figure out what's causing these
kinds of bottlenecks in the CPU:

	https://github.com/andikleen/pmu-tools
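
For the specific questions above, the level-1 topdown breakdown should
already be telling: branch-misprediction flushes show up under Bad
Speculation, cacheline bouncing under Backend Bound, and the cost of
simply re-executing the loop body as extra Retiring work.  The driver
script in that tree is toplev.py; an invocation along these lines
should do (the benchmark name here is just a placeholder):

	toplev.py -l1 ./atomic_bench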

