Message-ID: <alpine.DEB.2.21.1803281020390.3247@nanos.tec.linutronix.de>
Date: Wed, 28 Mar 2018 10:22:34 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Kees Cook <keescook@...omium.org>
cc: Andrew Morton <akpm@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Segher Boessenkool <segher@...nel.crashing.org>,
	Kernel Hardening <kernel-hardening@...ts.openwall.com>
Subject: Re: [PATCH v2] rslib: Remove VLAs by setting upper bound on nroots

On Tue, 27 Mar 2018, Kees Cook wrote:
> On Tue, Mar 27, 2018 at 4:45 PM, Andrew Morton
> <akpm@...ux-foundation.org> wrote:
> > On Mon, 26 Mar 2018 16:17:57 -0700 Kees Cook <keescook@...omium.org> wrote:
> >
> >> On Fri, Mar 16, 2018 at 11:25 PM, Kees Cook <keescook@...omium.org> wrote:
> >> > On Fri, Mar 16, 2018 at 3:59 PM, Andrew Morton
> >> > <akpm@...ux-foundation.org> wrote:
> >> >> On Thu, 15 Mar 2018 15:59:19 -0700 Kees Cook <keescook@...omium.org> wrote:
> >> >>
> >> >>> Avoid stack VLAs[1] by always allocating the upper bound of stack space
> >> >>> needed. The existing users of rslib appear to max out at 24 roots[2],
> >> >>> so use that as the upper bound until we have a reason to change it.
> >> >>>
> >> >>> Alternative considered: make init_rs() a true caller-instance and
> >> >>> pre-allocate the workspaces. This would possibly need locking and
> >> >>> a refactoring of the returned structure.
> >> >>>
> >> >>> Using kmalloc in this path doesn't look great, especially since at
> >> >>> least one caller (pstore) is sensitive to allocations during rslib
> >> >>> usage (it expects to run it during an Oops, for example).
> >> >>
> >> >> Oh.
> >> >>
> >> >> Could we allocate the storage during init_rs(), attach it to `struct
> >> >> rs_control'?
> >> >
> >> > No, because they're modified during decode, and struct rs_control is
> >> > shared between users. :(
> >> >
> >> > Doing those changes is possible, but it requires a rather extensive
> >> > analysis of callers, etc.
> >> >
> >> > Hence, the 24 ultimately.
> >>
> >> Can this land in -mm, or does this need further discussion?
> >
> > Grumble. That share-the-rs_control-if-there's-already-a-matching-one
> > thing looks like premature optimization to me :(

That was done back then in the days when the first NAND chips required
Reed-Solomon error correction and we were still running on rather small
devices.

> > I guess if we put this storage into the rs_control (rather than on the
> > stack) then we'd have to worry about concurrent uses of it. It looks
> > like all the other fields are immutable once it's set up so there might
> > be such users. In fact, I suspect there are...
>
> Exactly. :( This is the same conclusion tglx and I came to.

I think we can lift that and just let all users set up a new rs_control
from scratch. Takes some time to init the tables, but ....

Thanks,

	tglx
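[Editorial note: the VLA-removal pattern the patch describes, replacing a stack array sized by the per-call `nroots` with one sized by a compile-time upper bound of 24, can be sketched as follows. The function and macro names (`rs_decode_sketch`, `RS_MAX_ROOTS`) are illustrative, not the actual rslib code.]

```c
#include <stdint.h>

/* Illustrative upper bound, mirroring the patch's choice of 24 roots
 * (the maximum any in-tree rslib user was observed to request). */
#define RS_MAX_ROOTS 24

/* Before the patch, decode used a variable-length array on the stack:
 *
 *     uint16_t lambda[nroots + 1];    // VLA: size unknown at compile time
 *
 * After: a fixed-size array using the compile-time bound, plus a range
 * check so callers cannot exceed it. */
static int rs_decode_sketch(int nroots)
{
	uint16_t lambda[RS_MAX_ROOTS + 1];	/* fixed bound, no VLA */
	int i;

	if (nroots < 0 || nroots > RS_MAX_ROOTS)
		return -1;	/* reject requests above the bound */

	for (i = 0; i <= nroots; i++)
		lambda[i] = 0;	/* only the first nroots+1 slots are used */

	return 0;
}
```

The stack cost is now a constant 50 bytes per such array regardless of the caller's `nroots`, which is why the thread treats 24 as an acceptable waste for small users.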
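[Editorial note: tglx's closing suggestion, dropping the shared-`rs_control` optimization so every user gets its own instance with the decode workspace embedded in it, could look roughly like the sketch below. The structure and function names (`my_rs_control`, `my_init_rs`) are hypothetical stand-ins, not the rslib API; kernel code would use `kzalloc()` where this uses `calloc()`.]

```c
#include <stdint.h>
#include <stdlib.h>

#define RS_MAX_ROOTS 24

/* Hypothetical per-user control structure: the mutable decode scratch
 * buffers live inside the instance, so concurrent users never share
 * state and no locking is needed around decode. */
struct my_rs_control {
	int nroots;
	uint16_t lambda[RS_MAX_ROOTS + 1];	/* per-instance workspace */
};

static struct my_rs_control *my_init_rs(int nroots)
{
	struct my_rs_control *rs;

	if (nroots < 1 || nroots > RS_MAX_ROOTS)
		return NULL;

	rs = calloc(1, sizeof(*rs));	/* kernel: kzalloc(..., GFP_KERNEL) */
	if (!rs)
		return NULL;
	rs->nroots = nroots;
	/* Galois-field table setup would happen here; as tglx notes,
	 * repeating it per user costs some init time. */
	return rs;
}
```

The trade-off named in the thread is exactly this: per-instance init redoes the table setup each time, but decode no longer needs stack VLAs, shared mutable state, or allocations in the Oops path.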