Message-ID: <5181C3B4.4040801@eservices.virginia.edu>
Date: Wed, 1 May 2013 21:39:00 -0400
From: "Z. Gilboa" <zg7s@...rvices.virginia.edu>
To: <musl@...ts.openwall.com>
Subject: Re: sign (in)consistency between architectures
On 01.05.2013 18:41, Rich Felker wrote:
> On Wed, May 01, 2013 at 04:00:07PM -0400, Rich Felker wrote:
>> On Wed, May 01, 2013 at 08:00:15PM +0200, Szabolcs Nagy wrote:
>>> * Z. Gilboa <zg7s@...rvices.virginia.edu> [2013-05-01 13:05:03 -0400]:
>>>> The current architecture-specific type definitions
>>>> (arch/*/bits/alltypes.h) seem to entail the following inconsistent
>>>> signed/unsigned types:
>>>>
>>>> type      x86_64    i386
>>>> ------------------------------
>>>> uid_t     unsigned  signed
>>>> gid_t     unsigned  signed
>>>> dev_t     unsigned  signed
>>>> clock_t   signed    unsigned
>>>
>>> i can verify that glibc uses unsigned
>>> uid_t,gid_t,dev_t and signed clock_t
>>>
>>> of course applications should not depend on
>>> the signedness, but if they appear in a c++
>>> api then the difference can cause problems
>>>
>>> and clock_t may be used in arithmetic where
>>> signedness matters
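
For concreteness, the C++ problem is visible directly in the mangled
names. A minimal sketch, using a hypothetical set_owner() function and
the Itanium C++ mangling that both i386 and x86_64 use:

  // signedness.cpp -- hypothetical API; shows why uid_t's sign is ABI-visible in C++
  #if SIGNED_IDS
  typedef int uid_t;        /* i386 bits/alltypes.h today */
  #else
  typedef unsigned uid_t;   /* x86_64 bits/alltypes.h today */
  #endif

  void set_owner(uid_t u) { (void)u; }

  // g++ -c -DSIGNED_IDS=1 signedness.cpp && nm signedness.o  ->  _Z9set_owneri
  // g++ -c -DSIGNED_IDS=0 signedness.cpp && nm signedness.o  ->  _Z9set_ownerj
  // A library exporting one symbol cannot satisfy a caller built against
  // the other, even though nothing changes at the C level.
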
>> uid_t, gid_t, and dev_t we can consider changing; I don't think it
>> matters a whole lot and like you said they affect C++ ABI. clock_t
>> cannot be changed without making the clock() function unusable. See
>> glibc bug #13080 (WONTFIX):
>>
>> http://sourceware.org/bugzilla/show_bug.cgi?id=13080
> I just posted a followup on this bug: from what I can tell, it's
> questionable whether having the return value of clock() wrap is
> conforming even if clock_t is an unsigned type, and definitely
> non-conforming if it's a signed type. As such, I see three possible
> solutions:
>
> 1. Leave things alone and do it the way musl does it now, where
> subtracting (unsigned) results works. We should probably add a check
> to see if the return value would be equal to (clock_t)-1, and if so,
> either add or subtract 1, so that the caller does not interpret the
> return value as an error.
>
> 2. Change clock_t to a signed type, and have clock() check for
> overflow and permanently return -1 once the process has used more than
> 2147 seconds of cpu time. This seems undesirable for applications.
>
> 3. Change clock_t to long long on 32-bit targets. This would be
> formally incompatible with the glibc/LSB ABI, but in practice the
> worst that would happen is that the register containing the upper bits
> would get ignored.
>
> Any opinions on the issue?
>
> Rich
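
To illustrate option 1, here is a minimal sketch of the modular
arithmetic, with plain unsigned int standing in for a 32-bit unsigned
clock_t (this is not musl's actual clock() code, just the idea):

  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      /* Pretend the process counter wrapped between two samples:
         start near the top of the 32-bit range, end just past zero. */
      unsigned int start = 0xfffffff0u;   /* 4294967280 */
      unsigned int end   = 0x00000010u;   /* 16, after the wrap */

      /* Modular subtraction still yields the elapsed ticks (32). */
      unsigned int elapsed = end - start;
      printf("elapsed ticks: %u (%.6f s)\n",
             elapsed, elapsed / (double)CLOCKS_PER_SEC);

      /* The catch in option 1: a wrapped value can collide with the
         error return that callers test against, i.e. (clock_t)-1. */
      if (end == (unsigned int)-1)
          puts("would be misread as clock() failure");

      return 0;
  }

The subtraction is well defined and correct across a single wrap; the
only hazard is the accidental collision with the (clock_t)-1 error
value, which is what the proposed adjust-by-1 check addresses.
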
I consider the difference in sign to be of much greater significance,
and therefore would prefer option #3. Besides, with enough patience and
perseverance (/the long march through the institutions.../), this might
actually become the glibc solution as well. :)
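
As for what #3 buys, a rough back-of-the-envelope sketch (illustrative
numbers only, assuming the XSI-mandated CLOCKS_PER_SEC of 1000000):

  #include <stdio.h>

  int main(void)
  {
      const double ticks_per_sec = 1000000.0;       /* XSI CLOCKS_PER_SEC */
      const double secs_per_year = 365.25 * 86400.0;

      /* 32-bit range: signed wraps after ~2147 s, unsigned after ~4295 s. */
      printf("signed 32-bit:   %.0f s\n", 2147483647.0 / ticks_per_sec);
      printf("unsigned 32-bit: %.0f s\n", 4294967295.0 / ticks_per_sec);

      /* 64-bit signed range: roughly 292,000 years of cpu time. */
      printf("signed 64-bit:   %.0f years\n",
             9223372036854775807.0 / ticks_per_sec / secs_per_year);

      return 0;
  }

In other words, a 64-bit clock_t never wraps in practice, which is what
makes the conformance question moot.
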