Message-ID: <875xbtlf4z.fsf@hope.eyrie.org>
Date: Sat, 01 Nov 2025 12:35:56 -0700
From: Russ Allbery <eagle@...ie.org>
To: oss-security@...ts.openwall.com
Subject: Re: Questionable CVE's reported against dnsmasq

Solar Designer <solar@...nwall.com> writes:

> I don't think a "check that the config file is root-owned and not
> user-writable" would be relevant since a maybe-relevant threat model
> involves config files intentionally created by other software such as a
> web UI, which would set permissions such that the file is processed, and
> since such checks are uncommon and the lack of them does not mean the
> software supports untrusted config files.

> Other than that, I see that this gets tricky for a CNA to evaluate
> without input from the maintainers, so I may have been unnecessarily
> harsh on VulDB.

This is a bit of an "ask the Lazyweb" question since I have done only
minimal research, but is there any way for me to declare, as the software
maintainer, what I consider to be the security boundaries of the software
in a way that can be at least partially machine-readable? I know there are
tons of modeling languages for *building* software, imposing or checking
access control, etc., but is there a way for me to *label* a free software
project to communicate information such as "edit access to the
configuration file is arbitrary code execution by design"?
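
To make the question concrete, here is the sort of label I have in
mind, sketched as a plain data structure (Python only for readability).
Every name in it, channels and stances alike, is invented for
illustration; I don't know of any existing standard that looks like
this:

    # Hypothetical per-project declaration of security boundaries.
    # Keys name input channels; values record the maintainer's stance.
    DECLARED_BOUNDARIES = {
        "config-file-write": "arbitrary-code-execution-by-design",
        "command-line-arguments": "arbitrary-code-execution-by-design",
        "network-input": "security-boundary",  # parsing bugs here are security bugs
    }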

It feels like this problem arises regularly with automated and
semi-automated security testing and fuzzing. There are recurring
complaints about security "bugs" that the maintainer considers
meaningless because they don't cross a privilege boundary in the
maintainer's model, followed by endless disputes about edge-case usage
where no, actually, that is a security boundary.

I don't think the argument over what the security boundary should be in
the abstract is winnable; there will always be someone who disagrees. But
documentation of the *maintainer's* intended security boundary is an
objective fact about the software maintenance practices. If the maintainer
says "if you can write to the configuration file / inject arbitrary
command line parameters / control the input to the program, the program
will execute arbitrary code and this is by design and I'm not going to
change it," this feels like useful information for both users and security
researchers. It would be equally useful to know that weird behavior in
that scenario will be considered a bug but not a security issue, and
therefore won't be treated with much urgency, won't result in a new
software release when fixed, won't be backported, and so on.

One can disagree with the maintainer and try to change the maintainer's
mind, but failing that, if you want a different security model than what
the software declares it supports, the answer is to use a different piece
of software (such as a fork) or enforce the security boundary yourself
somehow, not to file a CVE.

It would be really nice if the maintainer could somehow declare this in
such a way that CVE issuers could retrieve that declaration and check the
CVE report against it, ideally in a semi-automated fashion. I say "semi"
because I think a human will have to be involved to some extent;
otherwise, the expression language problem becomes too hard. But it
would be nice if automation could take a reliable first cut at filtering
things down to the bits a human has to look at.
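
As a sketch of what that first cut might look like, assuming, purely
hypothetically, that a report carried a machine-readable claim about
which input channel it crosses and that a declaration like the one
above could be fetched for the project:

    # Hypothetical triage pass: compare the channel a report claims to
    # cross against the maintainer's declared stances. The vocabulary
    # and report format are invented for illustration.
    def triage(claimed_channel: str, declared: dict) -> str:
        stance = declared.get(claimed_channel)
        if stance is None:
            # Channel not declared either way: a human has to look.
            return "human-review"
        if stance == "security-boundary":
            # Maintainer agrees this is a boundary, so plausibly real.
            return "human-review"
        # Maintainer declared crossing this channel to be by design.
        return "out-of-scope-per-maintainer"

    declared = {
        "config-file-write": "arbitrary-code-execution-by-design",
        "network-input": "security-boundary",
    }
    print(triage("config-file-write", declared))  # out-of-scope-per-maintainer
    print(triage("network-input", declared))      # human-review
    print(triage("environment", declared))        # human-review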

Beyond the perpetually-discussed case of configuration files (there are
indeed some programs that consider configuration file parsing to be a
security boundary and treat failure to safely parse an
attacker-controlled configuration file as a security bug), this would
also provide a way to represent the difference between (to exaggerate
for clarity) a command to do malware scanning, which should be runnable
on arbitrary untrusted input, and, say, "bash" or "python", which by
design will never be safe to run on arbitrary untrusted input.
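
In the same invented vocabulary, that exaggerated contrast is simple to
state:

    # The malware scanner is expected to survive arbitrary hostile
    # input; the interpreter, by design, is not. Both declarations are
    # hypothetical.
    SCANNER = {"untrusted-file-input": "security-boundary"}
    INTERPRETER = {"untrusted-file-input": "arbitrary-code-execution-by-design"}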

I can of course put such a statement in the documentation, the security
bug reporting instructions, and so forth. That would be a good start,
and I'm not yet doing it in all the places that I should. But if we
could agree on a language for representing this, that would feel a bit
more satisfying. Part of the ongoing problem is a constant fight over
the definition of terms, so having some pre-defined terms feels useful.
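
For instance, the stances above plus the "bug but not a security issue"
case could be pinned down as a small closed vocabulary. This is only a
sketch of the shape such pre-defined terms might take, not a proposal
from any existing standard:

    from enum import Enum

    class Stance(Enum):
        # Crossing this channel is a security bug; expect advisories.
        SECURITY_BOUNDARY = "security-boundary"
        # Misbehavior is a bug, fixed without urgency, new releases,
        # or backports.
        BUG_NOT_SECURITY = "bug-not-security"
        # Crossing this channel runs arbitrary code by design; this
        # will not change.
        BY_DESIGN = "arbitrary-code-execution-by-design"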

-- 
Russ Allbery (eagle@...ie.org)             <https://www.eyrie.org/~eagle/>
