Message-ID: <20240308025204.GK4163@brightrain.aerifal.cx>
Date: Thu, 7 Mar 2024 21:52:04 -0500
From: Rich Felker <dalias@...c.org>
To: David Schinazi <dschinazi.ietf@...il.com>
Cc: musl@...ts.openwall.com
Subject: Re: mDNS in musl

On Thu, Mar 07, 2024 at 05:30:06PM -0800, David Schinazi wrote:
> On Thu, Mar 7, 2024 at 4:08 PM Rich Felker <dalias@...c.org> wrote:
> 
> > On Thu, Mar 07, 2024 at 02:50:53PM -0800, David Schinazi wrote:
> > > On Wed, Mar 6, 2024 at 6:42 PM Rich Felker <dalias@...c.org> wrote:
> > >
> > > > On Wed, Mar 06, 2024 at 04:17:44PM -0800, David Schinazi wrote:
> > > > > As Jeffrey points out, when the IETF decided to standardize mDNS,
> > > > > they published it (RFC 6762) at the same time as the Special-Use
> > > > > Domain Registry (RFC 6761), which created a process for reserving
> > > > > domain names for custom purposes, and ".local" was one of the
> > > > > initial entries into that registry.
> > > > > The UTF-8 vs punycode issue when it comes to mDNS and DNS is
> > > > > somewhat of a mess. It was discussed in Section 16 of RFC 6762,
> > > > > but at the end of the day punycode won. Even Apple's
> > > > > implementation of getaddrinfo will perform punycode conversion for
> > > > > .local instead of sending the UTF-8. So in practice you wouldn't
> > > > > need to special-case anything here.
> > > >
> > > > OK, these are both really good news!
> > > >
> > > > > > There's also very much a policy matter of what "locally over
> > > > > > multicast" means (what the user wants it to mean). Which
> > > > > > interfaces should be queried? Wired and wireless ethernet? VPN
> > > > > > links or other sorts of tunnels? Just one local interface (which
> > > > > > one to prioritize) or all of them? Only if the network is
> > > > > > "trusted"? Etc.
> > > > > >
> > > > >
> > > > > You're absolutely right. Most mDNS systems try all non-loopback
> > > > > non-p2p multicast-supporting interfaces, but sending to the
> > > > > default route interface would be a good start; more on that below.
> > > >
> > > > This is really one thing that suggests a need for configurability
> > > > outside of what libc might be able to offer. With normal DNS lookups,
> > > > they're something you can block off and prevent from going to the
> > > > network at all by policy (and in fact they don't go past the loopback
> > > > by default, in the absence of a resolv.conf file). Adding mDNS that's
> > > > on-by-default and not configurable would make a vector for network
> > > > traffic being generated that's probably not expected and that could be
> > > > a privacy leak.
> > > >
> > >
> > > Totally agree. I was thinking through this both in terms of RFCs and
> > > in terms of minimal code changes, and had a potential idea.
> > > Conceptually, sending DNS to localhost is musl's IPC mechanism to a
> > > more feature-rich resolver running in user space. So when that's
> > > happening, we don't want to mess with it, because that could cause a
> > > privacy leak. Conversely, when there's a non-loopback IP configured
> > > in resolv.conf, musl acts as a DNS stub resolver and the server in
> > > resolv.conf acts as a DNS recursive resolver. In that scenario,
> > > sending the .local query over DNS to that other host violates the
> > > RFCs. This lets us treat the configured resolver address as an
> > > implicit configuration mechanism for selectively enabling this
> > > without impacting anyone doing their own DNS locally.
> >
> > This sounds like an odd overloading of one thing to have a very
> > different meaning, and would break builtin mDNS for anyone doing
> > DNSSEC right (which requires validating nameserver on localhost).
> > Inventing a knob that's an overload of an existing knob is still
> > inventing a knob, just worse.
> >
> 
> Sorry, I was suggesting the other way around: to only enable the mDNS mode
> if resolver != 127.0.0.1.

I understood that. It's still overloading a knob.

> But on the topic of DNSSEC, that doesn't really
> make sense in the context of mDNS because the names aren't globally unique
> and signed. In theory you could exchange DNSSEC keys out of band and use
> DNSSEC with mDNS, but I've never heard of anyone doing that. At that point
> people exchange TLS certificates out of band and use mTLS. But overall I
> can't argue that overloading configs to mean multiple things is janky :-)

I'm not talking about validating DNSSEC for .local (which seems
nonsensical, but I guess is actually something you could do with your
own trust anchor for .local that would prevent rogue devices on your
network from being able to answer mDNS).

I'm talking about how anyone properly validating DNSSEC for the global
DNS space has to have a nameserver on localhost, and how this would
prevent them from using mDNS (or, in effect, disincentivize setting up
DNSSEC validation since it would break your mDNS) via the builtin libc
stub resolver support and would require wiring up the validating
nameserver to it.

> > > > > > When you do that, how do you control which interface(s) it goes
> > > > > > over? I think that's an important missing ingredient.
> > > > >
> > > > > You're absolutely right. In IPv4, sending to a link-local
> > > > > multicast address like this will send it over the IPv4 default
> > > > > route interface. In IPv6, the interface needs to be specified in
> > > > > the scope_id. So we'd need to pull that out of the kernel with
> > > > > rtnetlink.
> > > >
> > > > There's already code to enumerate interfaces, but it's a decent bit of
> > > > additional machinery to pull in as a dep for the stub resolver,
> > >
> > >
> > > Yeah we'd need lookup_name.c to include netlink.h - it's not huge though,
> > > netlink.c is 50 lines long and statically linked anyway right?
> >
> > I was thinking in terms of using if_nameindex or something, but indeed
> > that's not desirable because it's allocating. So it looks like it
> > wouldn't share code but use netlink.c directly if it were done this
> > way.
> >
> > BTW if there's a legacy ioctl that tells you the number of interfaces
> > (scope_ids), it seems like you could just iterate over the whole
> > numeric range without actually doing netlink enumeration.
> 
> That would also work. The main limitation I was working around was that
> you can only pass around MAXNS (3) name servers without making more
> changes.

Yes, there's no reason to assume the first 3 interfaces found would
suffice though. I think the way to handle this would be to treat the
multicast address as just one nameserver in the list, and have
__res_msend do whatever sort of iteration it needs to do when the
resolvconf structure indicates that it's for mDNS.

> > > > it's not clear how to do it properly for IPv4 (do scope ids work
> > > > with v4-mapped addresses by any chance?)
> > > >
> > >
> > > Scope IDs unfortunately don't work for IPv4. There's the
> > > SO_BINDTODEVICE socket option, but that requires elevated privileges.
> > > For IPv4 I'd just use the default route interface.
> >
> > But the default route interface is almost surely *not* the LAN where
> > you expect .local things to live except in the case where there is
> > only one interface. If you have a network that's segmented into
> > separate LAN and outgoing interfaces, the LAN, not the route to the
> > public internet, is where you would want mDNS going.
> 
> In the case of a router, definitely. In the case of most end hosts or VMs
> though, they often have only one or two routable interfaces, and the
> default route is also the LAN.

Not necessarily a router; could just be a client device with multiple
routes. For example a default route that goes over a VPN and a LAN
route for accessing local machines.

> > With that said, SO_BINDTODEVICE is not the standard way to do this,
> > and the correct/standard way doesn't need root. What it does need is
> > binding to the local address on each device, which is still rather
> > undesirable because it means you need N sockets for N interfaces,
> > rather than one socket that can send/receive all addresses.
> 
> Oh you're absolutely right, I knew there was a non-privileged way to do
> this but couldn't remember it earlier.
> 
> This is giving me an idea, though: we could use the "connect a UDP
> socket to get a route lookup" trick. Say we're configured with a
> nameserver that's not 127.0.0.1 (the case where I'd like to enable
> this), e.g. 192.0.2.33. Today, foobar.local would be sent to
> 192.0.2.33 over whichever interface has a route to it (in most cases
> the default interface, but not always). We could open an
> AF_INET/SOCK_DGRAM socket, connect it to 192.0.2.33:53, use
> getsockname to get the local address, and then close that socket. We
> can then create a new socket and bind it to that local address. That
> would ensure we send the mDNS traffic on the same interface where we
> would have sent the unicast query. The downside is that since all
> queries share the same socket, we'd bind everything to the interface
> of the first resolver, or need multiple sockets.

This again sounds like a bad "overloaded knob". Unless you're letting
DHCP overwrite your resolv.conf, there's no reason for the route to
the nameserver to match the LAN you want mDNS on.

> > > the amount of added code would be quite small. Limiting things to
> > > the default interface isn't a full multi-network solution, but for
> > > those I think it makes more sense to recommend running your own
> > > resolver on loopback (you'd need elevated privileges to make this
> > > work fully anyway). Coding-wise, I think this would be pretty robust.
> > > The only breakage I foresee is cases where someone built a custom
> > > resolver that runs on a different machine and somehow handles .local
> > > differently than what the RFCs say. That config sounds like a bad
> > > idea, and a violation of the RFCs, but that doesn't mean there isn't
> > > someone somewhere who's doing it. So there's a non-zero risk there.
> > > But to me that's manageable risk.
> > >
> > > What do you think?
> >
> > I think a more reasonable approach might be requiring an explicit knob
> > to enable mDNS, in the form of an options field like ndots, timeout,
> > retries, etc. in resolv.conf. This ensures that it doesn't become
> > attack surface/change-of-behavior in network environments where peers
> > are not supposed to be able to define network names.
> >
> 
> That would work. I'm not sure who maintains the list of options though.
> From a quick search it looks like they came out of 4.3BSD like many
> networking features, but it's unclear if POSIX owns it or just no one does
> (which would be the same, POSIX is not around as a standard body any more).

No one does. Basically anyone can freely add extensions, with the
caveat that if you have mutually incompatible extensions in
/etc/resolv.conf, someone using the same /etc with more than one stub
resolver implementation (like both musl and glibc) is going to have a
bad time.

If for example glibc were up for adding configuration for which
interfaces to do mdns on here, we could coordinate. But I suspect they
would put it in a config file specific to the nss module that does
mdns.
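For illustration, such an extension might look like this in
/etc/resolv.conf (the "mdns" option name and its syntax are purely
hypothetical here; nothing parses this today):

```
nameserver 192.0.2.33
options ndots:1 timeout:5
# hypothetical extension: enable mDNS, restricted to these interfaces
options mdns:eth0,wlan0
```

Any stub resolver that doesn't know the option would ignore it, which
is what makes this kind of extension relatively safe to add.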

> > One further advantage of such an approach is that it could also solve
> > the "which interface(s)" problem by letting the answer just be
> > "whichever one(s) the user configured" (with the default list being
> > empty). That way we wouldn't even need netlink, just if_nametoindex to
> > convert interface name strings to scope ids, or alternatively (does
> > this work for v6 in the absence of an explicit scope_id?) one or more
> > local addresses to bind and send from.
> >
> 
> I definitely would avoid putting local addresses in the config, because it
> would break for any non-static addresses like DHCP or v6 RAs. The interface
> name would require walking the getifaddrs list to map it to a corresponding
> source address but it would work if the interface name is stable.

I could see it being desirable to support both. For some folks, IP
addresses would be more stable; for others, interface names would.

Another variant would be letting you identify the network(s) to send
mDNS to via destination addresses, a la the "connect a UDP socket to
get a local address" technique you described above.

For IPv6 these are kinda equivalent. An address of fe80::%ifname (or
maybe fe80::1%ifname?) should let you specify "interface ifname" just
as a destination address.
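As a quick sanity check that the fe80::1%ifname form round-trips into a
scope id, getaddrinfo with AI_NUMERICHOST already parses it ("lo" here
is just a stand-in for whatever LAN interface a user would actually
name):

```c
#include <stdio.h>
#include <net/if.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints = { .ai_family = AF_INET6,
                              .ai_socktype = SOCK_DGRAM,
                              .ai_flags = AI_NUMERICHOST };
    struct addrinfo *res;
    int err = getaddrinfo("fe80::1%lo", "5353", &hints, &res);
    if (err) {
        printf("getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }
    struct sockaddr_in6 *sa = (struct sockaddr_in6 *)res->ai_addr;
    /* The %lo suffix should have become the interface index. */
    printf("scope_id matches if_nametoindex: %d\n",
           sa->sin6_scope_id == if_nametoindex("lo"));
    freeaddrinfo(res);
    return 0;
}
```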

I'm not necessarily saying we should do any or all of these, just that
they're possibilities to consider.

> I guess we're looking at two ways to go about this:
> 
> (1) the simpler but less clean option - where we key off of "resolver !=
> 127.0.0.1" - very limited code size change, but only handles a small subset
> of scenarios

I _really_ don't like this option. It violates least-surprise,
overloads one setting to mean something else based on a presumed usage
pattern, and, worst of all, it creates a situation where enabling
DNSSEC validation (by running your own validating nameserver on
localhost) breaks something else (mDNS), which would disincentivize
DNSSEC validation.

This is really the hidden cost of overloaded knobs in general: they
make situations where users are encouraged to change one setting in a
way they don't want because that's how they get the other thing they
do want.

> (2) the cleaner option that involves more work - new config option, need
> multiple sockets - would be cleaner design-wise, but would change quite a
> bit more code
> 
> Another aspect to consider is the fact that in a lot of cases resolv.conf
> is overwritten by various components like NetworkManager, so we'd need to
> modify them to also understand the option.

I think they already have support for pulling options from somewhere.
But that's a good question worth looking into: how you'd get the
necessary config in place in the presence of
nm/dhcpcd/resolvconf/whatever overwriting resolv.conf.

> I'm always in favor of doing the right thing, unless the right thing ends
> up being so much effort that it doesn't happen. Then I'm a fan of doing the
> easy thing ;-)

Around here the fallback usually is "not doing it at all". :-p

Rich
