Message-ID: <Z9rYxCTJR2ekFJod@voyager>
Date: Wed, 19 Mar 2025 15:46:28 +0100
From: Markus Wichmann <nullplan@....net>
To: musl@...ts.openwall.com
Cc: Alba Mendez <me@...a.sh>
Subject: Re: Making user libraries Y2038 compatible?

On Tue, Mar 18, 2025 at 11:49:43PM +0100, Alba Mendez wrote:
> Hi,
>
> First of all, sorry if I'm asking in the wrong list or if this has been
> asked already! User libraries understandably want to stay ABI and API
> compatible with the Y2038 proofness and LFS design, just like libc
> does, meaning:
>
> - on systems where time_t (and other types) are 32-bit by default,
>   offer an additional "<name>64" symbol in addition to "<name>".
> - also on these systems, "#define <name> <name>64" in the header if
>   _TIME_BITS (or the relevant app-defined macro for the type) is 64.
>
> This allows the library to begin providing 64-bit interfaces without
> breaking ABI, and allows it to link with a user application without
> forcing the application to use the same feature test macros as the
> library.
>
> To do this, I think libraries need at a minimum:
>
> - a C type that refers to the original default for that platform (i.e.
>   if no app-defined macros were present). glibc offers __ prefixed
>   types for this purpose (__time_t, __off_t, __ino_t...)
> - a preprocessor define to check if a platform has a 32-bit default
>   (glibc offers __TIMESIZE for this purpose; this decides the size of
>   all affected types, time or size related)
>

This is what I like to call the "head through the wall" approach. You
attempt to explicitly opt into the only sensible choice for these types,
which massively overcomplicates the problem across the entire codebase.

The more sensible thing would be to find a way for the C library to
declare a 64-bit time_t and off_t and whatever else, and use that
exclusively. Ideally, with glibc you should have been using
_FILE_OFFSET_BITS=64 and _TIME_BITS=64 since their inception; then you
wouldn't have these issues today. _FILE_OFFSET_BITS, after all, dates
back to 1997, if glibc's git log is to be believed, and _TIME_BITS
followed in glibc 2.34.
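
Concretely, "using the macros" is nothing more than this (a minimal
sketch; only the two macros themselves are glibc-specific):

#define _FILE_OFFSET_BITS 64
#define _TIME_BITS 64          /* must come before any system header */

#include <sys/types.h>
#include <time.h>
#include <stdio.h>

int main(void)
{
    /* On 32-bit glibc 2.34 or later, both of these should print 8
     * thanks to the macros above; on current musl they print 8
     * regardless of the macros. */
    printf("sizeof(off_t)  = %zu\n", sizeof(off_t));
    printf("sizeof(time_t) = %zu\n", sizeof(time_t));
    return 0;
}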

So what is there to do? For applications: on glibc, simply always set
the above two macros; then you get 64-bit types for time_t and off_t
and all the others. On musl, you don't need to do anything. How do you
tell which case you are in? Configure tests. It is pretty easy to test
the size of a type statically: check whether you get the right
definitions by default, and if not, try again with these macros
defined.
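
A probe along these lines is all the configure script has to compile (a
sketch; the exact plumbing depends on your build system): first with no
extra flags, and if that fails, once more with -D_FILE_OFFSET_BITS=64
-D_TIME_BITS=64, recording whichever set of flags worked.

#include <sys/types.h>
#include <time.h>

/* C11 _Static_assert; on older compilers the classic negative-array-size
 * trick does the same job. */
_Static_assert(sizeof(time_t) == 8, "time_t is not 64-bit");
_Static_assert(sizeof(off_t) == 8, "off_t is not 64-bit");

int main(void) { return 0; }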

For libraries, you must require this configuration at the outset. You
can put a static assert into your header files, for example. For
ABI compatibility, you may need to use symbol versioning and compat
definitions. Where the types become relevant ABI surface, you can resort
to the stdint.h types.
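
For illustration only, such a guard could look like this in a made-up
library header (the library name and the struct below are invented, not
taken from any real project):

/* examplelib.h -- hypothetical header, for illustration */
#ifndef EXAMPLELIB_H
#define EXAMPLELIB_H

#include <sys/types.h>
#include <time.h>
#include <stdint.h>

/* Refuse to compile against a 32-bit time_t/off_t configuration. */
_Static_assert(sizeof(time_t) == 8,
    "examplelib needs 64-bit time_t; build with -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64");
_Static_assert(sizeof(off_t) == 8,
    "examplelib needs 64-bit off_t; build with -D_FILE_OFFSET_BITS=64");

/* Where a type is ABI surface, spelling the width out with stdint.h
 * types keeps the struct layout independent of feature test macros. */
struct examplelib_entry {
    int64_t mtime;   /* modification time, seconds since the epoch */
    int64_t size;    /* size in bytes */
};

int examplelib_stat(const char *path, struct examplelib_entry *out);

#endif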

> In the absence of these mechanisms libraries resort to implementation-
> specific and often incorrect[1] heuristics, like testing for
> __BITS_PER_LONG==32 to see if the <name>64 symbol is needed (which is
> incorrect on modern 32-bit platforms that were defined after Y2038
> support was in place, like x32 or RV32).
>

_FILE_OFFSET_BITS and _TIME_BITS both *are* implementation-specific
heuristics.

Ciao,
Markus
