Message-ID: <20120609211137.GZ163@brightrain.aerifal.cx>
Date: Sat, 9 Jun 2012 17:11:37 -0400
From: Rich Felker <dalias@...ifal.cx>
To: musl@...ts.openwall.com
Subject: Re: printf POSIX compliance

On Sat, Jun 09, 2012 at 02:58:55PM +0200, Szabolcs Nagy wrote:
> i don't like the gnulib approach to portability
> 
> 1) it implements broken functions which simply must not
> be used in an application with portability requirements
> (eg alloca is implemented using assumptions about the
> stack mechanism and invokes undefined behaviour)

This is definitely a problem, perhaps the biggest problem with gnulib.
Since most programs that use gnulib all but require GCC anyway, I'm not
sure alloca is the best example, but it surely applies to things like
freadahead. Anyway, this sort of junk is at best orthogonal to the goal
of writing multi-platform applications.

> 2) instead of sharing code between platforms they use ifdefs
> and depend on internals of the implementation
> (eg. broken gnu stdio functions: gnulib will never work on
> a new platform out of the box, you have to port it whenever
> there is a new libc or even a new version of a libc)

Actually the legacy unices shared a surprising amount of stdio
internals, largely because of broken code like gnulib that poked at
the internals...

> 3) they duplicate a lot of the libc writing effort

Are you sure? I think most of the libc-like code is copied from (and
shared with) glibc...

> and provide bad quality code
> (eg. math functions: a mathematical software will only be
> portable if libm is high quality so if portability matters
> you need a libm written in portable c code (and maybe even
> softfloat emulation)

I haven't looked at their math functions, but if they really are that
broken, it brings up a major point you missed:

The gnulib configure tests are not just extremely pedantic in judging
a system as non-conformant, but _selectively_ pedantic. That is, they
test every single corner case that glibc went to the trouble to get
right, but ignore the (equally many) cases glibc got horribly wrong.
And sometimes they even test for non-conformant, glibc-specific
behavior...

How does this relate to math? Well, it seems they check for all sorts
of obscure breakage in the host libm, but then replace it with their
own broken math code...

> in my opinion the more correct approaches to portability are
> 
> 1) have a high quality libc (like musl) and port it
> to all the required targets (eg by emulating syscalls)

I don't think enforcing implementation homogeneity is a viable
solution for applications.

> 2) write the application with portability in mind: instead
> of using posix api use a 'libportability' api that covers
> enough areas so the application can work and simple enough
> so it can be implemented on any target (so eg. it has no
> alloca and freadahead in it, and only has limited form of
> networking, process or file management etc)

This approach is not just wrong but extremely harmful. It's the
approach of APR, NSPR, and basically any library whose name ends in
"PR" (portable runtime). I've not read the Apache code, but I've read
SVN's code and its use of APR, and the level of idiocy it involves is
astounding. The equivalent of a simple printf call ends up as this
giant chain of allocations and concatenations building strings to send
to the output system. I seriously doubt there's any way a program that
uses APR (Apache itself included) could avoid crashing under OOM
conditions.

Surely there's no fundamental reason it _has_ to be this way. After
all, someone making a portable system interface library could end up
making it just like POSIX. The problem is that the task attracts
people with the wrong traits: in particular, an obsession with
designing something absolutely general, and no intuition for the
actual use cases that need supporting.

More fundamentally (i.e. even if it didn't attract the wrong people)
the approach is wrong for the following two reasons:

1. It penalizes correct systems by putting an added layer of bloat
   between the application and the underlying correct POSIX
   implementation.

2. It makes it impossible to reuse code that was written to C99/POSIX
   in your application without "porting" the code to your new
   portability library, and conversely, impossible to reuse code
   written to your portability library in projects that want to be
   light-weight and pure-C-only or POSIX-only (or else forces them to
   pull in an ugly library they don't want and that might have
   portability issues by virtue of not being ported to some systems
   they want/need to support).

Ultimately, gnulib does actually have the right basic approach; they
just botched a lot of the details. The correct approach from an
application side (where you can't just fix the whole broken OS) is to
write your code to standard C/POSIX assuming it all works (possibly
avoiding making unnecessary assumptions for interfaces that are "hard
to fix"), and include patch-up code for a known finite set of broken
systems you want to support. Today, that set would probably be just
"Windows, Android, and OSX/iOS".

This minimizes (to zero) the cost of portability on clean, non-broken
systems and avoids doing anything that could cause your program to
break on such systems. It also keeps the cost reasonably low (much
lower than the *PR solutions) even on broken systems. And it works
well in practice. This is not something I just made up; it's a policy
I pushed during the entire time I was working on MPlayer, which
usually took the form of "Keep the hacks to fix broken stuff as close
to the broken component as possible, and avoid polluting code that has
nothing to do with the broken stuff."

The main differences from gnulib are:

- Don't try to "fix" any system except the known-broken ones.
- Avoid unnecessary fixes whose costs are disproportionately large
  compared to their benefits.
- Don't add all sorts of unnecessary functions that have nothing to do
  with portability.

I don't expect to "revolutionize" gnulib, but since the right (in my
opinion) approach is reasonably similar to gnulib's basic idea, I
think we could influence the evolution of gnulib in a more positive
direction. Some things that might be nice to see...

- Clear distinction in gnulib between "broken libc replacement" stuff
  and "convenience utility" stuff where the latter is implemented
  portably without any #ifdeffery.
- Automatically-generated option to configure that inhibits all gnulib
  tests and replacement functions and forcibly assumes the underlying
  system is C and POSIX conformant.
- And of course cleanup/fixing of existing known bugs and portability
  issues (and thread-safety issues).

Rich
