[Note: this blog post is a commentary and does not necessarily reflect the opinion of the Communication Systems Group.]
I have just returned from MetriSec 2012, which was a complete success in my opinion. Peter Gutmann delivered an excellent keynote, the participants had a great roundtable discussion, and the refereed papers and the invited talk were of high quality.
I did, however, have a point of (passionate but polite) disagreement with Riccardo Scandariato. As far as I could tell, Ric advocated the use of the axiomatic method in security metrics, by which he meant defining the desired properties of a security metric up front, as axioms, and then looking for actual metrics that satisfy those axioms. I am assuming that this is not mere requirements engineering, which is of course a sensible thing to do, but an honest-to-god axiomatic approach. The reasoning behind this is that usually, you poke around in the dark, take whatever metrics you find useful, and only then figure out the properties of the metrics you found. This will usually not lead to metrics with desirable properties, so much energy is wasted because you have to throw many metrics away. Another plus is that you can build a theory of metrics and derive their properties simply by taking the axioms, combining them, and seeing where logic takes you.
All of this is true, but the axiomatic method simply cannot be used for security metrics. Axiomatic methods work for mathematical objects, and while what we have here looks like a mathematical object, it doesn’t quack like one. The reason is that security is not an abstract property, but a property that only holds in the real world, because it is intimately connected with actual machines, an actual environment, and actual humans. If you axiomatise, you might (and, I would argue, will) discard metrics that tell you something useful about the system under consideration, simply because they lack some nice theoretical, axiomatic property. In fact, I would argue that because details matter in security, a metric that is good for one system might be complete rubbish for another, similar system. Axiomatisations, being necessarily rather short, would be unable to capture the minute differences between systems that decide the usefulness of a particular metric.
I will go even further and bet that for every (short and general) axiomatisation that Ric finds, I can come up with (a) a metric that satisfies the axioms, and (b) a plausible system for which this metric is rubbish.
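To make that bet concrete, here is a hypothetical sketch (the axioms and the metric are mine, not Ric’s): suppose an axiomatisation demands that a metric be normalised to [0, 1] and monotone in the number of vulnerable services. A naive “fraction of services with a known vulnerability” metric satisfies both axioms, yet it can rank a system with one catastrophic, remotely reachable hole as worse than a system whose many flaws are all unreachable.

```python
# Hypothetical illustration: a metric that satisfies two plausible axioms
# (normalisation to [0, 1]; monotone in the count of vulnerable services)
# yet misjudges real systems because it ignores reachability.

def naive_metric(services):
    """Fraction of services with at least one known vulnerability."""
    if not services:
        return 0.0
    vulnerable = sum(1 for s in services if s["vulns"] > 0)
    return vulnerable / len(services)

# System A: a single service with one remotely exploitable hole.
system_a = [{"name": "auth", "vulns": 1, "reachable": True}]

# System B: nine services with minor, firewalled-off issues, plus one
# clean internet-facing gateway.
system_b = [{"name": f"svc{i}", "vulns": 1, "reachable": False} for i in range(9)]
system_b.append({"name": "gateway", "vulns": 0, "reachable": True})

# The axioms hold, but the metric scores A (1.0) as worse than B (0.9),
# even though only A is actually attackable from outside.
print(naive_metric(system_a))  # 1.0
print(naive_metric(system_b))  # 0.9
```

The point of the sketch is not that these particular axioms are bad, but that any axiomatisation short enough to be general will leave room for metrics that are formally impeccable and practically useless.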
But even if I could not find counterexamples to an axiomatisation: no discipline I know of in which measurement plays a role uses an a priori axiomatisation to find metrics. Physics, Chemistry, Mechanical Engineering, and even Psychology and other softer sciences use the “poke around in the dark” approach until they find something that gives useful results. I believe the reason is that you need a good theory of the thing you’re studying before you can come up with a reasonable axiomatisation. For example, Newton’s laws can be used as axioms, but they were empirically observed facts first. In security, we simply do not have a body of knowledge that would allow us to formulate a theory of secure systems. Trying to find metrics this way seems to me to put the cart before the horse.
Case in point: Riccardo mentioned that axiomatic approaches could be used to find out composition rules for metrics. That means that if I have a metric for system A and one for system B, and if I know how the two systems are to be put together, I can then find a metric for this combination of A and B. Now, security is a property that is at best brittle with respect to composition. (At the conference, I actually said that “security is not composable”, but this is of course wrong; what I meant, and what I should have said, is that security is not necessarily composable.) The composition of A and B might well have security properties that are neither a function of A, nor of B, nor of the particular manner in which they are connected. Details matter, and the environment matters. I find it impossible to believe that a reasonably general axiomatisation of such a composition could be small enough to lead the search for metrics in a useful direction.
A perfect example of this can be found in a paper that takes thirty security protocols and combines them. From the abstract: “Formal modeling and verification of security protocols typically assumes that a protocol is executed in isolation, without other protocols sharing the network. We investigate the existence of multi-protocol attacks on protocols described in literature. Given two or more protocols, that share key structures and are executed in the same environment, are new attacks possible? Out of 30 protocols from literature, we find that 23 are vulnerable to multi-protocol attacks.” (Cas Cremers, Feasibility of Multi-Protocol Attacks, First International Conference on Availability, Reliability and Security (ARES’06), April 2006, p.287. Hat tip: Peter Gutmann.)
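A minimal sketch of the kind of failure the Cremers paper describes (the two “protocols” below are invented for illustration, not taken from the paper): each protocol is harmless in isolation, but deploying them on a shared key turns one into a decryption oracle for the other.

```python
# Toy illustration of a multi-protocol attack through key reuse. Each
# protocol is fine on its own; sharing one key between them destroys
# the first protocol's confidentiality.

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a repeating key (insecure by design)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Protocol A: sends confidential messages encrypted under a shared key.
def protocol_a_send(key: bytes, message: bytes) -> bytes:
    return keystream_xor(key, message)

# Protocol B: a diagnostic service that decrypts and echoes whatever
# ciphertext it is handed -- harmless on its *own* key, but here it is
# deployed on the same key as protocol A.
def protocol_b_echo(key: bytes, ciphertext: bytes) -> bytes:
    return keystream_xor(key, ciphertext)

shared_key = b"supersecret"
ciphertext = protocol_a_send(shared_key, b"attack at dawn")

# The attacker never learns the key: they simply replay A's traffic
# to B, which obligingly decrypts it.
recovered = protocol_b_echo(shared_key, ciphertext)
assert recovered == b"attack at dawn"
```

Neither component, analysed in isolation, violates its own specification; the vulnerability exists only in the composition, which is exactly why a metric for A plus a metric for B plus a description of the wiring need not yield a metric for the whole.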
The only way to find good security metrics is to study actual systems in great detail, not to invent axioms. That is an activity best left to mathematicians.
[Edit: Changed “Riccardo wants axiomatically to find metrics that can be composed” to “Riccardo mentioned that axiomatic approaches could be used to find out composition rules for metrics” Thanks for the clarification, Ric!]