Message-ID: <20170705215814.4wyzvq2deid4ln7q@perpetual.pseudorandom.co.uk>
Date: Wed, 5 Jul 2017 22:58:14 +0100
From: Simon McVittie <smcv@...ian.org>
To: oss-security@...ts.openwall.com
Subject: Re: systemd fails to parse user that should run service

On Wed, 05 Jul 2017 at 22:03:45 +0200, Pali Rohár wrote:
> The worst is that fact that discussion about this problem was locked in
> upstream bugtracker. Therefore there is no other option as continue
> discussion about this, which I think security issue, here at
> oss-security list.

systemd does have a (public, and publicly archived) mailing list, which
has a current thread on the subject of this issue. In particular, the
mail in that thread from Felipe Sateler, and some of the discussion on
the upstream bug, touch on reasons why neither "if anything is not as
expected, reject the whole unit" nor the current behaviour is right. I
suspect the resolution is likely to be something in between.

I agree that it's a bug that a "syntactically invalid" User is handled
the way it is, because the result does not follow the principle of
least astonishment, and it would be easy for it to have bad
consequences. (Please don't try to convince me that the current
behaviour is a bug. I already think that, and I have no more influence
over systemd's behaviour than you do.)

However, (the relevant part of) systemd is pid 1, executing commands
defined by system-wide-installed files, with the highest possible
privileges. It makes no claim to be designed to process untrusted units
safely, and it would be foolish for a component in its position to make
that claim. In a sense it's a specialized interpreter, for a language
that happens to be partly declarative rather than entirely imperative.
If someone you don't trust gives you a systemd system unit, it needs to
be checked just as carefully as a traditional (e.g. LSB) init script,
because it can do all the same powerful and dangerous operations that
the init script can (dangerous is just another word for powerful, and
vice versa).

Not every bug is a security vulnerability (not even the really bad
ones). At the moment, there is a strong correlation between security
vulnerabilities with CVE IDs and issues for which there is consensus
among relevant upstream and downstream developers that the issue is in
fact a vulnerability for which a prompt security update is necessary.
I'm becoming concerned that if the working definition of a
vulnerability gets stretched too far towards things that are "just a
bug", it will reduce the perceived importance of fixing CVEs promptly,
harming the overall level of security in software.

On Wed, 05 Jul 2017 at 13:27:17 -0700, Alan Coopersmith wrote:
> Honestly, given the level of flaming and trolling that happens on issues
> like this, locking the report is the only sane option I can see once
> everyone started piling on. Forcing FOSS maintainers to accept infinite
> amounts of shitposting is a horrible way to reduce security by burning
> out all FOSS maintainers quickly and leaving software abandoned.

I have little to add to this, but I couldn't resist a "me too" here,
because I think Alan's point is very important. Maintainers can't be
expected to behave in a professional and effective way if their working
environment is consistently hostile.
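Coming back for a moment to the comparison with init scripts: here is a
minimal sketch of a hypothetical system unit (the file name and command
are invented for illustration, and the commented-out User= value is
only my recollection of the sort of name the parser rejects). Anything
installed under /etc/systemd/system gets this much trust, and unless a
directive such as User= says otherwise, ExecStart= runs as root:

    # /etc/systemd/system/example.service  (hypothetical, illustration only)
    [Unit]
    Description=Illustration of how much a system unit is trusted to do

    [Service]
    # With no User= line, or with a User= line that systemd decides it
    # cannot parse and therefore ignores, this command runs as root,
    # with the same privileges a traditional init script would have.
    ExecStart=/usr/bin/id
    # The upstream report is, as I understand it, about values like this
    # one, which the parser treats as invalid (leading digit) and skips
    # rather than refusing to start the unit:
    #User=0day

So reviewing a unit from an untrusted source before enabling it is no
different in kind from reading through an init script: the interesting
questions are what ExecStart= (and related directives) run, and as
which user.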
Returning to the question of locked bug reports: using something with a
user base as large as GitHub's for bug tracking makes it very easy for
people to contribute their comments to bugs, which is great as long as
those comments are helpful (remembering that a bug tracker is there to
make the tracked software better, not to make its users feel better).
When the comments become unconstructive, maintainers need to have the
tools to manage them, and locking bug reports is one of those tools.

    S