Date: Sat, 17 Feb 2024 13:45:34 -0500
From: Rich Felker <dalias@...c.org>
To: g1pi@...ero.it
Cc: musl@...ts.openwall.com
Subject: Re: dns resolution failure in virtio-net guest

On Sat, Feb 17, 2024 at 12:08:12PM +0100, g1pi@...ero.it wrote:
> 
> Hi all.
> 
> I stumbled on a weird instance of domain resolution failure in a
> virtualization scenario involving a musl-based guest.  A little
> investigation turned up results that are puzzling, at least to me.
> 
> This is the scenario:
> 
> Host:
> - debian 12 x86_64
> - kernel 6.1.0-18-amd64, qemu 7.2
> - caching nameserver listening on 127.0.0.1
> 
> Guest:
> - void linux x86_64
> - kvm acceleration
> - virtio netdev, configured in (default) user-mode
> - kernel 6.1.71_1, musl-1.1.24_20
> - /etc/resolv.conf:
>     nameserver 10.0.2.2         the caching dns in the host
>     nameserver 192.168.1.123    non-existent
> 
> In this scenario, "getent hosts example.com" consistently fails.
> 
> The problem vanishes when I do any of these:
> - strace the command (!)
> - replace 10.0.2.2 with another working dns across a physical cable/wifi
>   (e.g. 192.168.1.1)
> - remove the non-existent dns
> - swap the nameservers in /etc/resolv.conf
> 
> I wrote a short test program (see below) to perform the same system calls
> done by the musl resolver, and it turns out that
> 
> - when all sendto() calls are performed in short order, the (unique)
>   response packet is never received
> 
>     $ ./a.out 10.0.2.2 192.168.1.123
>     poll: 0 1 0
>     recvfrom() -1
>     recvfrom() -1
> 
> - if a short delay (16 msec) is inserted between the calls, all is fine
> 
>     $ ./a.out 10.0.2.2 delay 192.168.1.123
>     poll: 1 1 1
>     recvfrom() 45
>     <response packet>
>     recvfrom() -1
> 
> The program's output is the same in several guests with different
> kernel/libc combinations (linux/glibc, linux/musl, freebsd, openbsd).
> Only when the emulated netdev was switched from virtio to pcnet did
> the problem go away.
> 
> I guess that, when there is no delay between the sendto() calls, the
> second one happens exactly while the kernel is receiving the response
> packet, and the latter is silently dropped.  A short delay before
> the second sendto(), or a random delay in the response (because the
> working dns is "far away"), apparently solves the issue.
> 
> I don't know what the UDP standard mandates, and especially what should
> happen when a packet is received on a socket at the exact time another
> packet is sent out on the same socket.
> 
> If the kernel is allowed to drop the packet, then the musl resolver
> could be modified to introduce some minimal delay between calls, at
> least when retrying.
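
[The test program itself does not appear on this archive page. As a purely
hypothetical stand-in (the query encoding, the "delay" keyword handling,
and all names below are illustrative, not the poster's code), the described
pattern of one unconnected UDP socket, back-to-back sendto() calls, then
poll() and recvfrom(), could be sketched like this:]

```python
# Hypothetical sketch of the reported syscall pattern; NOT the
# poster's actual test program (which is not reproduced here).
import select
import socket
import struct
import sys
import time

def build_query(name, qid=0x1234):
    """Encode a minimal DNS query: 12-byte header, QNAME, QTYPE=A, QCLASS=IN."""
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)

def probe(args, timeout=2.0):
    """sendto() one query per server argument; a literal "delay"
    argument sleeps 16 ms, mirroring the workaround described above."""
    q = build_query("example.com")
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setblocking(False)
    nserv = 0
    for a in args:
        if a == "delay":
            time.sleep(0.016)          # the 16 ms gap from the report
        else:
            s.sendto(q, (a, 53))       # back-to-back sends otherwise
            nserv += 1
    p = select.poll()
    p.register(s, select.POLLIN)
    print("poll:", p.poll(int(timeout * 1000)))
    for _ in range(nserv):
        try:
            print("recvfrom()", len(s.recvfrom(512)[0]))
        except (BlockingIOError, ConnectionRefusedError):
            print("recvfrom() -1")

if __name__ == "__main__" and len(sys.argv) > 1:
    probe(sys.argv[1:])
```

[Invoked as e.g. `python3 probe.py 10.0.2.2 delay 192.168.1.123`; whether
the reply survives depends on the network in between, which is the point
of the report.]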

UDP is "allowed" to drop packets any time for any reason, but that
doesn't mean it's okay to do so in the absence of a good reason, or
that musl should work around bugs where that happens, especially when
they're not a fundamental part of Linux but of a particular
virtualization configuration.

I suggest you run tcpdump on the host and watch what's happening, and
I suspect you'll find this is qemu's virtio network being... qemu. It
probably does not do any real NAT, but directly rewrites source and
destination addresses so that your local caching DNS sees *two
identical queries* (same source/dest host/port combination, same query
id) and treats the second as a duplicated packet and ignores it. Or it
may be something different, but at least inspecting the actual network
traffic coming out of the qemu process will tell you what's going on.
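
[The duplicate-query hypothesis above can be illustrated with a toy dedup
check; this is an assumption for illustration only and does not claim to
match how any particular caching resolver, qemu, or slirp is implemented.
Once address rewriting collapses both sockets onto one source
address/port, the two queries become indistinguishable:]

```python
# Toy illustration only: a resolver that keys in-flight queries on
# (source ip, source port, dns id) will see the rewritten pair of
# packets as one query plus a duplicate to be ignored.
def is_duplicate(seen, src_ip, src_port, dns_id):
    """Return True if an identical in-flight query was already seen."""
    key = (src_ip, src_port, dns_id)
    if key in seen:
        return True
    seen.add(key)
    return False

seen = set()
print(is_duplicate(seen, "127.0.0.1", 40000, 0x1234))  # False: first query
print(is_duplicate(seen, "127.0.0.1", 40000, 0x1234))  # True: dropped as a dup
```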

Rich
