Message-ID: <20130711033754.GL29800@brightrain.aerifal.cx>
Date: Wed, 10 Jul 2013 23:37:55 -0400
From: Rich Felker <dalias@...ifal.cx>
To: musl@...ts.openwall.com
Cc: Andre Renaud <andre@...ewatersys.com>
Subject: Re: Thinking about release

On Thu, Jul 11, 2013 at 10:44:16AM +1200, Andre Renaud wrote:
> > This results in 95MB/s on my platform (up from 65MB/s for the existing
> > memcpy.c, and down from 105MB/s with the asm-optimised version). It is
> > essentially as readable as the existing memcpy.c. I'm not really
> > familiar with any other CPU architectures, so I'm not sure whether
> > this would improve, or hurt, performance on other platforms.
> 
> Reviewing the assembler that is produced, it appears that GCC will
> never generate an ldm/stm (load/store multiple) instruction that reads
> into more than 4 registers, whereas the optimised assembler uses ones
> that read 8 (i.e. 8 * 32-bit reads in a single instruction). I've tried

For the asm, could we make it more than 8 registers per ldm/stm? 10
seems easy, 12 seems doubtful. I don't see a fundamental reason the
count needs to be a power of two, unless the cache line alignment
really helps and isn't just cargo-culting. (This is something I'd
still like to know about the asm: whether it's doing unnecessary
stuff that does not help performance.)
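
Just to illustrate the idea (a completely untested sketch, not the asm
we've been discussing, with an arbitrary register choice): the ldm/stm
register list is a bitmask, so nothing forces a power of two, and a
10-word (40-byte) burst could look roughly like this, with d and s
being the running dest/src pointers and r7/r11 skipped in case they're
reserved as frame pointers:

	/* d and s are char pointers already brought to word alignment */
	__asm__ volatile(
		"ldmia %1!, {r2-r6, r8-r10, r12, lr}\n\t"
		"stmia %0!, {r2-r6, r8-r10, r12, lr}"
		: "+r"(d), "+r"(s)
		:
		: "r2", "r3", "r4", "r5", "r6", "r8", "r9", "r10",
		  "r12", "lr", "memory");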

> various tricks/optimisations with the C code, and can't convince GCC
> to do more than 4. I assume that this is probably where the remaining
> 10MB/s difference between these two variants comes from.

Yes, I suspect so. One slightly crazy idea I had was to write the
function in C with just inline asm for the inner ldm/stm loop. The
build system does not yet have support for .c files in the arch dirs
instead of .s files, but it could be added.
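
Roughly what I have in mind (again an untested sketch with made-up
names; the misaligned-source case is just punted to byte copies, and
the register set is arbitrary):

	#include <stddef.h>
	#include <stdint.h>

	/* sketch of the "C wrapper, asm inner loop" idea */
	void *memcpy_arm_sketch(void *restrict dest,
		const void *restrict src, size_t n)
	{
		unsigned char *d = dest;
		const unsigned char *s = src;

		/* head: byte copies until the destination is word-aligned */
		while (n && ((uintptr_t)d & 3)) {
			*d++ = *s++;
			n--;
		}

		if (!((uintptr_t)s & 3)) {
			/* main loop: 32 bytes per iteration via one ldm/stm
			 * pair; r7/r11 avoided in case they're frame pointers */
			while (n >= 32) {
				__asm__ volatile(
					"ldmia %1!, {r2-r6, r8, r9, r12}\n\t"
					"stmia %0!, {r2-r6, r8, r9, r12}"
					: "+r"(d), "+r"(s)
					:
					: "r2", "r3", "r4", "r5", "r6",
					  "r8", "r9", "r12", "memory");
				n -= 32;
			}
		}

		/* tail (and the misaligned-source case): byte copies */
		while (n--) *d++ = *s++;
		return dest;
	}

The nice part is that only the ldm/stm pair stays opaque; the compiler
handles the prologue/epilogue, alignment checks, and everything around
the burst.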

> Rich - do you have any comments on whether either the C or assembler
> variants of memcpy might be suitable for inclusion in musl?

I would say either might be, but it looks like some asm (either inline
or full) will be needed if we want competitive performance. My leaning
would be to go for something simpler than the asm you've been
experimenting with, but with the same or better performance, if that's
possible. I realize the code is not that big as-is in terms of binary
size, but it's big from an "understanding it" perspective, and I don't
like big asm blobs that are hard for somebody to look at and say "oh
yeah, this is clearly right".

Anyway, the big question I'd still like to get answered before moving
forward is whether the cache line alignment has any benefit.

Rich
