Message-Id: <200206250011.UAA09487@linus.mitre.org>
Date: Mon, 24 Jun 2002 20:11:56 -0400 (EDT)
From: "Steven M. Christey" <coley@...us.mitre.org>
To: solar@...nwall.com
CC: owl-users@...ts.openwall.com, cve@...re.org,
    security-audit@...ret.lmh.ox.ac.uk
Subject: Re: Classifying Vulnerabilities for Risk Assessment

Solar Designer said:

>I've been trying to provide a vulnerability Severity rating in Owl
>change logs whenever there's a security fix applied to Owl.
>
>[and lots of other stuff]
>
>Ideally, I would like to define a classification scheme that would be
>usable by more than just Owl and more than just OS distributions, but
>also by various vulnerability databases (CVE, SecurityFocus', others).

Just to clarify, we have no plans for CVE to include a risk assessment
value, mainly because (a) it's subjective and (b) it's the role of
vulnerability databases.  (While CVE is sometimes used as a database,
that's not its intended function, and we also try to minimize
"competition" with existing databases.)

However, I've also been interested in building a common means of risk
assessment that's "judgment free," since your high risk is my medium
risk, etc.  Chris Wysopal and I hope to make some proposal as part of
our guidelines for writing security advisories (still in progress, but
moving along).  So, your post is rather timely.

Below are the main elements that I've been playing around with.  When
combined, they seem to do a fairly good job of describing issues in a
way that allows the "consumer" to interpret risk.  This has been
"vetted" (heavy quotes there) against a few hundred vulnerabilities,
and it seems able to describe 90% or more of them.  Interactions
between multiple products, or "weird" security models, don't lend
themselves so cleanly to this type of measurement.  And configuration,
which can vary widely, is a headache.
I like the notion of discriminating between default installs, typical
installs, and "best practice" configurations, but that adds another
layer of complexity and potential confusion.

As a vendor using "standard" criteria for measuring risk, you could
then explain how you assign your own risk values based on these
characteristics, e.g.: "I always interpret remote unauthenticated root
as 'high' risk."

#EXTENT: [r]emote unauthenticated, [l]ocal, [a] remote authenticated,
#        [p]hysical access, [m]ixed, [u]nknown/unspecified,
#        [v]endor or maintainer only
# - could consider "arbitrary local" versus "restricted local" (e.g.
#   if something requires at least kmem privs to exploit)

One question is whether to record a notion of "scope" - is the
access/damage limited to a specific application ("you can get bulletin
board admin privileges"), a system, or the network?

Another issue is how to define local vs. remote; there are varying
definitions.  Say you have an FTP server vulnerability that's only
exploitable by authenticated users (say, in a LIST command).  Is that
remote (because you're using a network protocol) or local (because
you're an authenticated user)?  What if "anonymous/guest" users can
exploit it?

#SEVERITY: [r]oot/admin exec, [u]ser exec, [rf] root file, [uf] user file,
#          [aa] app admin, [au] app user, [d]enial of service, [i]nfo leak,
#          [b]ypass ACLs or tracking, [s]poof, [p]rivacy, [m]isc/unknown

One abstraction to note here: I don't distinguish between being able
to read files vs. write or corrupt them.  The general result is "being
able to perform some unauthorized action on a file."  Since "severity"
is itself a value judgment, calling it "result" may be better.

Information/privacy leaks are fairly broad.  What is it that's leaked -
application, system, or network data?  Is it configuration information,
financial information?  There are also varieties of DoS that might be
nice to discriminate - e.g.
is the DoS limited to a specific application/system, and how long does
it last - just during the attack, or does it require a "hard reset"?
Etc.

#ATTACK: [p]assive, [a]ctive, [m]ixed (e.g. passive race), [u]nknown
# (might want to consider timing: is the result instant, or
#  dependent on another person's actions, and if so, what is
#  the likelihood of those actions being performed?)

An example of a "mixed" attack would be a script injection issue on a
bulletin board.  Technically it's passive because it relies on the
actions of others, but the bulletin board is expected to be visited,
so the likelihood of success may be higher than a link in an email.
Or say you've got a symlink issue where the filename is fixed, and the
vulnerable product is normally expected to run on a regular basis.

Other factors I'm looking into, although these change with time:

#EXPLOIT: [f]ull, [p]artial, [m]inimal, [n]one
# **** consider [c]ut-and-paste and/or [s]ufficient

This is: "what exploit details are known?"  [c]/[s] are basically
"there's enough information for someone to build an exploit if they
want to, without tracking down the specific issue" - e.g. a CGI shell
metacharacter problem where you know the name of the parameter that's
exploited.

It may also be useful to categorize what sorts of workarounds or
resolutions are available:

#WTYPE: (workaround types) [P]atch (unofficial), [D]isable feature,
#       [F]ilter/restrict access, [N]o workaround besides complete
#       removal/disabling of entire product, [S]ufficient workaround
#       to prevent problem without seriously impacting product

#RTYPE: (resolution type) [op] official patch, [up] unofficial
#       patch, [fw] workaround (filter), [dw] workaround (disable),
#       [sw] sufficient workaround to prevent problem without
#       impacting operation, [na] no action, [e] initial report
#       was erroneous, [du] initial report was duplicate, [i]
#       inconclusive replication, [d] vendor disputes issue,
#       [v]ersion upgrade

These two should probably be combined.
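Not part of the original proposal, but the scheme above lends itself to
a machine-readable record.  Below is a minimal sketch of one way to
encode it; the field names, the compact "extent/severity/attack/exploit"
string format, and the `vendor_risk` policy (based on the "remote
unauthenticated root is always 'high'" example earlier) are all my own
illustrative assumptions.

```python
# Hypothetical encoding of the classification scheme sketched above.
# Field names and the compact slash-separated format are assumptions,
# not part of the original proposal.
from dataclasses import dataclass

EXTENT = {"r": "remote unauthenticated", "l": "local",
          "a": "remote authenticated", "p": "physical access",
          "m": "mixed", "u": "unknown/unspecified",
          "v": "vendor or maintainer only"}
SEVERITY = {"r": "root/admin exec", "u": "user exec", "rf": "root file",
            "uf": "user file", "aa": "app admin", "au": "app user",
            "d": "denial of service", "i": "info leak",
            "b": "bypass ACLs or tracking", "s": "spoof",
            "p": "privacy", "m": "misc/unknown"}
ATTACK = {"p": "passive", "a": "active", "m": "mixed", "u": "unknown"}
EXPLOIT = {"f": "full", "p": "partial", "m": "minimal", "n": "none"}

@dataclass
class Classification:
    extent: str    # key into EXTENT
    severity: str  # key into SEVERITY
    attack: str    # key into ATTACK
    exploit: str   # key into EXPLOIT

def parse(compact):
    """Parse a compact record like 'r/r/a/f' into a Classification,
    rejecting codes that aren't defined in the tables above."""
    ext, sev, att, exp = compact.split("/")
    for code, table in ((ext, EXTENT), (sev, SEVERITY),
                        (att, ATTACK), (exp, EXPLOIT)):
        if code not in table:
            raise ValueError("unknown code: %r" % code)
    return Classification(ext, sev, att, exp)

def vendor_risk(c):
    """One hypothetical vendor policy, following the example in the
    text: remote unauthenticated root exec always rates 'high'."""
    if c.extent == "r" and c.severity == "r":
        return "high"
    if c.severity in ("d", "i", "p"):
        return "low"
    return "medium"

# A remote unauthenticated root-exec issue with a full public exploit:
record = parse("r/r/a/f")
print(vendor_risk(record))  # -> high
```

The point of separating `parse` from `vendor_risk` is exactly the
"judgment free" split discussed above: the record itself stays factual,
and each vendor applies their own stated policy on top of it.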
I'm not really using them yet.  Just some rough ideas...

- Steve