Message-ID: <20160915003451.GC15995@brightrain.aerifal.cx>
Date: Wed, 14 Sep 2016 20:34:51 -0400
From: Rich Felker <dalias@...c.org>
To: Rob Landley <rob@...dley.net>
Cc: "j-core@...ore.org" <j-core@...ore.org>, musl@...ts.openwall.com
Subject: Re: [J-core] Aligned copies and cacheline conflicts?

On Tue, Sep 13, 2016 at 07:21:58PM -0500, Rob Landley wrote:
> There was a discussion at one point about how reading from and writing
> to an aliased cache line (anything at an 8k offset) would cause horrible
> performance (the write cacheline evicting the read cacheline and vice
> versa), how this was a more common problem than 1 in 256 because things
> are often page aligned, and how a workaround was to have memcpy read
> data into 8 registers and then write it out again from those 8 registers
> to avoid the ping-pong.
> 
> Question: did that memcpy change actually go into musl and the kernel?
> (Seems like both would need it...) If so, what do I have to make sure
> I've pulled to get them?

This has not gone upstream yet mainly because:

1) I'm not sure if it's a good idea for other archs that use the
generic C memcpy.

2) Handling the misaligned-mod-4 cases this way as well would take a
lot of extra code, and it's unlikely to help much anyway since those
cases don't arise from page alignment, so it's not clear whether I
should cover them too.

I could put a fork of memcpy.c in sh/memcpy.c, work on it there, and
only merge it back into the shared version once others test it on
other archs and find it beneficial (or at least not harmful).

Rich
