Date: Fri, 10 Oct 2014 10:02:13 +0200
From: Źmicier Januszkiewicz <>
Subject: Re: liability (was: Re: Thoughts on Shellshock and beyond)

Just my 2 cents here on marketing software known to be vulnerable.
This is not exactly open source, but something spotted today in [1]:

"Revision 1.25
Listed the newly released Cisco Unified Communications Domain Manager
version 10.1(1) as vulnerable."

Makes you think... either this is horrible wording or...


2014-10-09 22:12 GMT+02:00 Solar Designer <>:
> I ended up writing a lengthy message (this one), but I am unsure if it's
> a good idea to have this topic discussed once again (such discussions
> had already occurred on other mailing lists years ago).  In fact, that's
> the main point I am making - while I've just spent/wasted some time on
> writing the below, maybe we should stop right here?  So if anyone has
> something new or some important historical references to add, please feel
> free to post, but I'd rather not see us digress into (I think) mostly
> irrelevant analogies in financial markets, with even more irrelevant
> detail on the French trader (referring to Sven's other posting here).
>
> I mention some paywalled articles below.  If anyone has URLs to free
> copies of those, please post.
>
> On Thu, Oct 09, 2014 at 11:11:34AM +0200, Sven Kieske wrote:
>> so at least when you're making money of software you should
>> be responsible for this software.
>
> That's tricky.  Is an Open Source project that accepts donations, sells
> CDs/DVDs, or/and runs ads on the website "making money"?  What if they
> also offer related paid services or even occasionally sell commercial
> licenses to the same software?  Would they be liable e.g. for up to all
> payments they ever received (or more?), even if 99.9% of the users never
> paid anything?  That may easily put them "out of business", or
> discourage them from starting the project in the first place.
>
> Of course, you can hope to reduce undesired effects of a new law by
> careful wording, listing categories of software it does or/and does not
> apply to, etc.  However, getting the legal system involved at all is a
> huge risk... yet you'd like to use it to reduce risk elsewhere?  The
> legal system is already akin to an over-engineered software program, and
> you're proposing to make it even more complex (more buggy, and requiring
> more resources to run).  What's worse, you don't get to write that
> "program", and you can't replace it on your "computer" with some
> alternative (short of moving to another jurisdiction, and even that
> option might disappear if the law becomes universally accepted).  You
> can request a "feature", and if the powers that be listen, they'll
> implement that "feature" in some arbitrary way that you might not like,
> yet all of us would be stuck with it.  In my opinion, this is extreme
> danger, possibly way beyond the risk from software vulnerabilities (to
> the extent that risk could be reduced by such measures).  Indeed, these
> are different types of risks, so a direct comparison of this sort may
> only make sense in specific contexts (e.g., effect on a country's
> economy or on people's quality of life analyzed in some specific way).
>
> I am not saying I am strictly against this approach, although that is my
> current stance given the (frankly) rather limited impact that software
> vulnerabilities actually have on us so far despite being widespread.
> (I think the negative impact of introducing liability for software
> vulnerabilities might well be broader.)  What I am saying is that it's a
> really tough tradeoff, and that in my opinion anyone who feels confident
> about it is either wrong in being so confident or has values different
> from mine.
>
>> that's also not just my opinion (and I didn't invent these
>> thoughts), some credit has to go out to mr Schneier who
>> you might happen to know ;)
>> see:
>
> Oh, this is definitely an old topic, much older than 2003, although that
> was a year when it was discussed more.
>
> There were two related articles published in IEEE Security & Privacy,
> 2003 vol.1, Issue 1 - January-February:
> Two Views on Security Software Liability: Let the Legal System Decide
> Daniel J. Ryan, Law Offices of Daniel J. Ryan
> pp. 70-72
>
> Two Views on Security Software Liability: Using the Right Legal Tools
> Carey Heckman, Dartmouth College
> pp. 73-75
>
> Here's the abstract for the first one of these:
> "Rather than use the product liability screwdriver as a chisel, why not
> consider a package of more effective tools. Corporations and individuals
> that market software despite knowledge of software security flaws should
> face criminal prosecution as well as civil lawsuits with punitive
> damages. Perhaps bounties should be available for the first to discover
> and establish the existence of a security flaw. Publishers should be
> required to post to the Web and otherwise publicize promptly patch
> availability. The software equivalent of an Underwriters Laboratories
> should establish and constantly improve security-related standards and
> testing protocols. It should be made readily apparent whether a program
> has passed and at what level. Prospective customers should be educated
> and encouraged to insist on software that has passed. Stronger software
> security is important. Software developers and publishers must do
> better. But product liability is not the right legal tool for the job."
>
> To me, this suggests a much more limited use of the legal system: with
> liability only for continued marketing of software with security flaws
> already known to the vendors.  Even that, though, is incompatible
> with current best practices, where vendors of any large piece of
> software almost constantly have some vulnerabilities (and fixes for
> them) in the pipeline, yet they don't stop marketing the software during
> those periods (which would be almost all the time, and would ruin their
> businesses).
>
> I haven't read the actual articles.  Anyone found them non-paywalled?
>
> This presentation appears to argue in favor of strict liability and
> backs this up with some theory:
>
> Open Source is briefly mentioned on slide 10.  I think it fails to
> consider (at least) the negative impact extra regulation will have on
> innovation.  To limit liability risks, a software product would need to
> be compliant with current best practices, not with potentially better
> practices that are not yet widely accepted and would not necessarily
> be considered even good by courts.
>
> [ While we're at it, I think the same problem - negative impact on
> innovation - applies to another old idea, insurance for IT-related risks
> including security compromises:
> Worse, besides innovation this would also discourage diverse setups -
> e.g., use of less popular operating systems, which insurers have no
> statistics on and don't want to bother with - even though these setups
> might actually be safer from actually occurring attacks (at least the
> non-targeted ones) precisely due to the diversity. ]
>
> Some maybe-relevant historical publications:
> Liability and computer security: Nine principles
> Ross J. Anderson
> Proceedings of Third European Symposium on Research in Computer Security
> Brighton, United Kingdom
> November 7-9, 1994
> pp. 231-245
>
> Here's the abstract:
> "The conventional wisdom is that security priorities should be set by
> risk analysis. However, reality is subtly different: many computer
> security systems are at least as much about shedding liability as about
> minimising risk. Banks use computer security mechanisms to transfer
> liability to their customers; companies use them to transfer liability
> to their insurers, or (via the public prosecutor) to the taxpayer; and
> they are also used to shift the blame to other departments (we did
> everything that GCHQ/the internal auditors told us to). We derive nine
> principles which might help designers avoid the most common pitfalls."
>
> A related thought is that if vendors were liable for software
> vulnerabilities, they'd want to proactively turn those into
> non-vulnerabilities as far as the law is concerned, by transferring
> liability onto other parties (their customers, etc.)
> Right now, they can do it in the EULA.  If they were legally barred
> from that, they might find other ways.  For example, rather than say "We
> disclaim all liability" in the EULA, the program could have e.g.
> execution of arbitrary code from e-mail messages described as intended
> behavior that the user authorizes.  OK, perhaps that would impact the
> vendor's sales, if their competitors don't do the same.  But possibly
> they'll find smarter ways, including those that would use "computer
> security mechanisms" (as the above abstract says) - just not those
> protecting the users, but rather those helping transfer liability away
> from the vendor.
>
> Some older publications I happened to find, apparently by lawyers:
> Product Liability and Software
> Michael C. Gemignani
> Rutgers Computer & Tech. L.J.
> 1980-1981
> p. 173
>
> The Business Lawyer
> Vol. 26, No. 4 (April 1971)
> pp. 1081-1094
>
> Unfortunately, these are also paywalled.
>
> Most discussion on infosec mailing lists that I can find now is from the
> period 2000 to 2005, although I vaguely recall seeing this topic brought
> up in the 1990s as well.
>
> Alexander
