Message-ID: <CAGXu5jLYkvaFJ=m5j=r5UVajhKY5sbavJ9kJLnt-e=ObwzWshA@mail.gmail.com>
Date: Thu, 17 Dec 2015 12:56:54 -0800
From: Kees Cook <keescook@...omium.org>
To: "kernel-hardening@...ts.openwall.com" <kernel-hardening@...ts.openwall.com>
Cc: David Windsor <dave@...gbits.org>
Subject: Re: [RFC PATCH v2 02/12] percpu_ref: decrease
 per-CPU refcount bias

On Thu, Dec 17, 2015 at 6:57 AM, David Windsor <dave@...gbits.org> wrote:
> This change is necessary to prevent overflows from occurring in
> percpu_switch_to_atomic_rcu during init on x86.

I'm curious to find out why this overflows under "normal" operation.
Is there any downside to this change?

-Kees

>
> Signed-off-by: David Windsor <dave@...gbits.org>
> ---
>  lib/percpu-refcount.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
> index 6111bcb..02e816b 100644
> --- a/lib/percpu-refcount.c
> +++ b/lib/percpu-refcount.c
> @@ -31,7 +31,7 @@
>   * atomic_long_t can't hit 0 before we've added up all the percpu refs.
>   */
>
> -#define PERCPU_COUNT_BIAS      (1LU << (BITS_PER_LONG - 1))
> +#define PERCPU_COUNT_BIAS      (1LU << (BITS_PER_LONG - 2))
>
>  static DECLARE_WAIT_QUEUE_HEAD(percpu_ref_switch_waitq);
>
> --
> 2.5.0
>
>



-- 
Kees Cook
Chrome OS & Brillo Security
