Message-ID: <20140904155546.GS12888@brightrain.aerifal.cx>
Date: Thu, 4 Sep 2014 11:55:46 -0400
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: libhybris and musl?

On Thu, Sep 04, 2014 at 08:17:09AM -0700, Isaac Dunham wrote:
> On Wed, Sep 03, 2014 at 06:59:17PM -0400, Rich Felker wrote:
> > Basically, my view, as expressed many times on #musl, is that all of
> > the existing GL drivers, but especially the non-free ones, are full of
> > way too much bad code to be safe to load into your program's address
> > space. Any process that's loaded them should be treated as potentially
> > crashing or aborting at any time, and possibly also has serious
> > namespace pollution from random libs getting pulled in.
> > 
> > The way I'd like to see this solved for our "new platform vision" is
> > to move the actual GL implementation out of the address space of the
> > application using it, and instead provide a universal libGL for
> > applications to link (even statically, if desired) that marshals all
> > GL operations over shared-memory-based IPC to a separate process which
> > has loaded the actual driver for the target hardware you want to
> > render to. As long as the IPC tools used don't depend on a particular
> > libc's ABI at all, this should make it trivial to solve the problem
> > libhybris aimed to solve at the same time: you simply use Bionic in
> > the GL driver process, and your preferred libc with the application
> > side libGL.
> 
> I saw an implementation of GL based on this design or something very
> similar recently.
> The point the developer had was to make a GL that could be statically
> linked and handle remote rendering.
> 
> Ah yes, there it is:
> https://github.com/msharov/gleri
> "Network protocol, service, and API for using OpenGL remotely."

While interesting, adapting it into something practical could take a
lot of work. My intent is for performance to come as close as possible
to that of the buggy, insecure design in use today, because the closer
it is, the better chance the new design has of displacing the utterly
idiotic system people are using now.
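
To make the sort of thing I have in mind a bit more concrete, here is a
very rough sketch of a shared-memory command ring the application-side
libGL could use to hand serialized GL commands to the driver process.
Everything here (names, sizes, layout) is invented for illustration, not
taken from any existing implementation:

/*
 * Hypothetical sketch only: single-producer/single-consumer command
 * ring in shared memory, written by the application-side libGL stub
 * and drained by the driver process.
 */
#include <stdatomic.h>
#include <stdint.h>

#define RING_SIZE 4096  /* arbitrary power of two, for this sketch only */

struct cmd_ring {
	_Atomic uint32_t head;          /* advanced by the application (producer) */
	_Atomic uint32_t tail;          /* advanced by the driver process (consumer) */
	unsigned char data[RING_SIZE];  /* serialized GL commands */
};

/* Enqueue one serialized command; returns 0 on success, -1 if the ring
 * is full.  A real design would also need a wakeup mechanism (e.g. a
 * futex) so the driver process does not have to spin. */
static int ring_put(struct cmd_ring *r, const void *cmd, uint32_t len)
{
	uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
	uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

	if (RING_SIZE - (head - tail) < len) return -1;

	for (uint32_t i = 0; i < len; i++)
		r->data[(head + i) & (RING_SIZE - 1)] =
			((const unsigned char *)cmd)[i];

	/* Publish the data before moving head forward. */
	atomic_store_explicit(&r->head, head + len, memory_order_release);
	return 0;
}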

For example, IPC via shared memory should be used as the primary mechanism
(rather than sockets) for all large or low-latency transfers, and I
also want to be able to pass the fd used for mapping GPU buffers
across a socket to the application to allow it to directly map this
buffer, assuming my information is correct that nothing there needs to
be validated (my understanding is that it contains data to be
processed by shaders on the GPU which run without privileges to do any
harm). Of course compiling shaders should take place on the driver
process side, so that applications cannot bypass the shader compiler
and submit their own potentially malicious compiled code which would
be difficult to validate.
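
As a sketch of the fd-passing side (again purely illustrative; the
message framing and names are made up), the application could receive
the GPU buffer fd over a unix socket with SCM_RIGHTS and map it
directly:

/*
 * Hypothetical sketch only: receive one GPU buffer fd plus its size
 * from the driver process and map it into the application.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <sys/mman.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

static void *recv_gpu_buffer(int sock, size_t *sizep)
{
	uint64_t size;
	struct iovec iov = { .iov_base = &size, .iov_len = sizeof size };
	union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } ctl;
	struct msghdr mh = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = ctl.buf, .msg_controllen = sizeof ctl.buf,
	};

	if (recvmsg(sock, &mh, 0) < (ssize_t)sizeof size) return MAP_FAILED;

	struct cmsghdr *c = CMSG_FIRSTHDR(&mh);
	if (!c || c->cmsg_level != SOL_SOCKET || c->cmsg_type != SCM_RIGHTS)
		return MAP_FAILED;

	int fd;
	memcpy(&fd, CMSG_DATA(c), sizeof fd);

	void *p = mmap(0, size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
	close(fd);  /* the mapping stays valid after the fd is closed */
	if (p != MAP_FAILED) *sizep = size;
	return p;
}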

These issues were discussed a lot more on IRC. I admit freely to not
being an expert on current graphics technology, so I may have
misconceptions on some details. But independent of this, it's obvious
that the current architecture of loading drivers into applications is
an utter disaster from a security and robustness standpoint. My hope
is that it can be fixed at a cost that's not noticeable to most users,
but it really needs to be fixed at any cost.

Rich
