Message-ID: <20230704032918.GB4163@brightrain.aerifal.cx>
Date: Mon, 3 Jul 2023 23:29:19 -0400
From: Rich Felker <dalias@...c.org>
To: Hamish Forbes <hamish.forbes@...il.com>
Cc: musl@...ts.openwall.com
Subject: Re: DNS answer buffer is too small

Thanks for sending an email where we can document this.

On Tue, Jul 04, 2023 at 12:17:05PM +1200, Hamish Forbes wrote:
> In lookup_name.c the DNS answer buffer (ABUF_SIZE) is currently
> hardcoded to 768 bytes, increased from 512 after TCP DNS was
> implemented.
> 
> The DNS RFC is, unfortunately, not very explicit about the size of TCP
> responses. However, it does have this little line:
> 
> > RFC 1035 4.2.2.
> > The message is prefixed with a two byte length field which gives the message length, excluding the two byte length field.
> 
> Which you could interpret as meaning the TCP response length is a
> 16-bit unsigned int, so 64K. But that's also the raw (potentially
> compressed) DNS response; the uncompressed response could be larger
> still.
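
(For the framing half of that: yes, the prefix is a 16-bit big-endian
length, so a single TCP reply is at most 64K of wire-format message.
A minimal sketch of reading one such reply -- illustrative only, not
musl's actual code, and read_tcp_answer is just a made-up name:

    #include <sys/socket.h>
    #include <sys/types.h>

    /* Sketch: read one length-prefixed DNS reply from a connected TCP
     * socket into buf.  Returns the message length, or -1 on error or
     * if the caller's buffer is too small for the advertised length. */
    static ssize_t read_tcp_answer(int fd, unsigned char *buf, size_t bufsize)
    {
        unsigned char len[2];
        if (recv(fd, len, 2, MSG_WAITALL) != 2) return -1;
        size_t msglen = (size_t)len[0]<<8 | len[1]; /* at most 65535 */
        if (msglen > bufsize) return -1;
        if (recv(fd, buf, msglen, MSG_WAITALL) != (ssize_t)msglen) return -1;
        return (ssize_t)msglen;
    }

)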

"Compression" is not relevant; there is no "decompressed state" the
message is converted into. Everything is processed in the original DNS
protocol form, with "compression" (backpointers).
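
To be concrete about what "compression" means on the wire: a pointer is
just a label byte with the top two bits set, whose low 14 bits give an
offset back into the same message, so a parser follows it in place
rather than expanding anything. Rough sketch of that walk -- not musl's
actual parser, and walk_name is an invented name:

    #include <stddef.h>

    /* Sketch: walk one possibly-"compressed" name inside a DNS message.
     * msg/msglen delimit the whole message; p points at the name.
     * Returns 0 if the name is well-formed, -1 otherwise. */
    static int walk_name(const unsigned char *msg, size_t msglen,
                         const unsigned char *p)
    {
        size_t off = p - msg;
        int hops = 0;
        while (off < msglen) {
            unsigned c = msg[off];
            if (!c) return 0;                     /* root label: end of name */
            if ((c & 0xc0) == 0xc0) {             /* compression backpointer */
                if (off+1 >= msglen || ++hops > 128) return -1;
                off = (c & 0x3f)<<8 | msg[off+1]; /* jump within same buffer */
            } else {
                off += 1 + c;                     /* ordinary label, length c */
            }
        }
        return -1;
    }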

> As best I can tell (I am not a C programmer!) glibc is using a scratch
> buffer which grows when the parsing function returns an ERANGE error.
> 
> If that's more complex than you want, maybe a good compromise would be
> allocating 768 (or 512?) for UDP queries and expanding to 64k when a
> query falls back to TCP?
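
(For reference, the grow-and-retry idiom being described looks roughly
like this when used with the *_r lookup interfaces -- a sketch of the
general pattern, not a claim about glibc's internal resolver code, and
lookup_grow is an invented helper name:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <netdb.h>
    #include <stdlib.h>

    /* Sketch: retry a lookup with a growing buffer until it no longer
     * fails with ERANGE.  On success the hostent's pointers refer into
     * *buf, which the caller eventually frees. */
    static int lookup_grow(const char *name, struct hostent *he,
                           struct hostent **res, char **buf)
    {
        size_t buflen = 512;
        int h_err, rc;
        *buf = 0;
        for (;;) {
            char *tmp = realloc(*buf, buflen);
            if (!tmp) { free(*buf); *buf = 0; return ENOMEM; }
            *buf = tmp;
            rc = gethostbyname_r(name, he, *buf, buflen, res, &h_err);
            if (rc != ERANGE) return rc;  /* success or a real error */
            buflen *= 2;                  /* buffer too small: grow and retry */
        }
    }

)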

Your report here is missing the motivation for why you might care to
have more than 768 bytes of response, which, as I understand it, is
because of CNAME chains. Otherwise, the buffer size is chosen to hold
the number of answer records the stub resolver is willing to accept,
and there is no problem.
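
Back-of-envelope, and purely illustrative rather than lifted from
lookup_name.c: in the usual case every owner name in the answer section
is a 2-byte compression pointer back to the question, so the per-record
cost is small and 768 bytes already covers a lot of addresses.

    /* Illustrative sizing, assuming compressed owner names in answers. */
    enum {
        HDR      = 12,                 /* fixed DNS header */
        QUESTION = 255 + 2 + 2,        /* worst-case QNAME + QTYPE + QCLASS */
        RR_FIXED = 2 + 2 + 2 + 4 + 2,  /* name ptr, TYPE, CLASS, TTL, RDLENGTH */
        A_RR     = RR_FIXED + 4,       /* 16 bytes per A answer */
        AAAA_RR  = RR_FIXED + 16,      /* 28 bytes per AAAA answer */
    };
    /* (768 - HDR - QUESTION) / AAAA_RR is about 17 AAAA records, or
     * about 31 A records, even with a worst-case question name. */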

Long CNAME chains are rather hostile and are not guaranteed to be well
supported. AIUI, recursive nameservers already impose their own limits
on the number of redirections in a chain, though I cannot find any
specification of this behavior in the RFCs, or any suggested value for
the limit, so if you can dig up what they actually do, that would be
useful to know. But it seems there are chains out in the wild longer
than what we currently support, and that most other software does
support. So the next step is nailing down exactly what the
"requirement" here is, so we can figure out the most reasonable and
least costly way to meet it.

If there's some moderately small limit on the number of redirections
that recursive software supports, it may make sense to just increase
the buffer size to match that. If there really can be very large
chains, this is a mess.
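
In terms of sizing, the per-redirection cost is at least bounded: an
uncompressed CNAME record is at most 255 bytes of owner name plus 10
bytes of fixed fields plus 255 bytes of target, so a limit of N
redirections adds on the order of N*520 bytes to the worst case.
Illustrative arithmetic only, not a patch:

    enum {
        NAME_MAX_WIRE = 255,            /* max encoded domain name */
        CNAME_RR_MAX  = NAME_MAX_WIRE   /* uncompressed owner name */
                      + 2 + 2 + 4 + 2   /* TYPE, CLASS, TTL, RDLENGTH */
                      + NAME_MAX_WIRE,  /* uncompressed target name */
    };  /* = 520 bytes per tolerated redirection, before any compression */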

Rich
