Message-ID: <7a1c4974e8fbc3b82ead0bfb18224d5b.squirrel@webmail.greenhost.nl>
Date: Sat, 17 Mar 2012 21:14:55 +1100
From: "Indan Zupancic" <indan@....nu>
To: "Eric Dumazet" <eric.dumazet@...il.com>
Cc: "Will Drewry" <wad@...omium.org>,
 linux-kernel@...r.kernel.org,
 linux-arch@...r.kernel.org,
 linux-doc@...r.kernel.org,
 kernel-hardening@...ts.openwall.com,
 netdev@...r.kernel.org,
 x86@...nel.org,
 arnd@...db.de,
 davem@...emloft.net,
 hpa@...or.com,
 mingo@...hat.com,
 oleg@...hat.com,
 peterz@...radead.org,
 rdunlap@...otime.net,
 mcgrathr@...omium.org,
 tglx@...utronix.de,
 luto@....edu,
 eparis@...hat.com,
 serge.hallyn@...onical.com,
 djm@...drot.org,
 scarybeasts@...il.com,
 pmoore@...hat.com,
 akpm@...ux-foundation.org,
 corbet@....net,
 markus@...omium.org,
 coreyb@...ux.vnet.ibm.com,
 keescook@...omium.org
Subject: Re: [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W

On Wed, March 14, 2012 19:05, Eric Dumazet wrote:
> On Wednesday, March 14, 2012 at 08:59 +0100, Indan Zupancic wrote:
>
>> The only remaining question is, is it worth the extra code to release
>> up to 32kB of unused memory? It seems a waste to not free it, but if
>> people think it's not worth it then let's just leave it around.
>
> Quite frankly it's not an issue, given JIT BPF is not yet enabled by
> default.

And what if JIT BPF were enabled by default?

> I am not sure all bugs were found and fixed. I would warn users before
> considering using it in production.
>
> If you have time, I would appreciate it if you could double-check and
> find the last bugs in it.

I'll do my best.

The current JIT doesn't handle negative offsets: the cases that are handled
by __load_pointer(). The easiest solution would be to make that function
non-static and call it instead of jumping to bpf_error. I guess
__load_pointer() was added later and the JIT code never got updated.

But gcc refuses to inline load_pointer(); instead it inlines
__load_pointer() and does the important checks first. Considering the
current assembly code does a call anyway, it might as well call
load_pointer() directly. That would save a lot of assembly code, handle
all the negative cases too, and be pretty much the same speed. The only
question is whether this would slow down archs other than x86. What do
you think?
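To make the split concrete, here is a minimal user-space sketch of the
dispatch __load_pointer() does for the interpreter: positive offsets are
bounds-checked loads from the linear data, negative offsets are translated
relative to a special region. The demo_skb struct and the DEMO_* offsets
are made-up stand-ins for this sketch, not the kernel's definitions (the
real special offsets are SKF_NET_OFF and SKF_LL_OFF):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel's special negative offsets. */
#define DEMO_NET_OFF	(-0x100000)
#define DEMO_LL_OFF	(-0x200000)

struct demo_skb {
	const unsigned char *data;	/* start of linear packet data */
	size_t len;			/* linear length */
	size_t net_header;		/* offset of network header in data */
};

/*
 * Positive k: direct, bounds-checked load from the linear data.
 * Negative k: translate relative to the network header, as the
 * interpreter's __load_pointer() does. The current JIT only emits the
 * positive-k path and bails out to an error for negative offsets.
 */
static const unsigned char *demo_load_pointer(const struct demo_skb *skb,
					      int k, unsigned int size)
{
	if (k >= 0) {
		if ((size_t)k + size <= skb->len)
			return skb->data + k;
		return NULL;
	}
	if (k >= DEMO_NET_OFF) {	/* offset relative to network header */
		size_t off = skb->net_header + (size_t)(k - DEMO_NET_OFF);
		if (off + size <= skb->len)
			return skb->data + off;
	}
	return NULL;
}
```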

The EMIT_COND_JMP(f_op, f_offset); should be in an else branch, otherwise
it's emitted superfluously. It's a harmless bug though. I haven't spotted
anything else yet.

You can get rid of all the "if (is_imm8(offsetof(struct sk_buff, len)))"
code by making sure every offset is near: somewhere at the start, just
add 127 to %rdi and add a BUILD_BUG_ON(sizeof(struct sk_buff) > 255).
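For illustration, a small sketch of why the bias works; is_imm8() mirrors
the JIT's helper, and the arithmetic is the point, not the emitter. After
adding 127 to the base register, an access at struct offset k uses
displacement k - 127, so every offset in 0..254 lands in the signed 8-bit
range and no disp32 fallback is needed:

```c
#include <assert.h>

/* Same check the x86 BPF JIT uses: does the value fit a sign-extended
 * 8-bit displacement? */
static int is_imm8(int value)
{
	return value <= 127 && value >= -128;
}
```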

I'll write some more patches tomorrow.

Greetings,

Indan

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 7c1b765..7e0f575 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -581,8 +581,9 @@ cond_branch:			f_offset = addrs[i + filter[i].jf] - addrs[i];
 					if (filter[i].jf)
 						EMIT_JMP(f_offset);
 					break;
+				} else {
+					EMIT_COND_JMP(f_op, f_offset);
 				}
-				EMIT_COND_JMP(f_op, f_offset);
 				break;
 			default:
 				/* hmm, too complex filter, give up with jit compiler */


