Come to MetriSec 2012 (Part 1)!

This post is not a technical article, but in-house advertising.  I am a proud co-chair of MetriSec 2012, an international workshop on security metrics and related topics.  This year’s programme is a bit unusual. Sure, we have papers, but we also have two more big attractions: the keynote speech and a panel discussion. This post is about the keynote speech; the panel discussion will be the subject of a future post.

This year’s keynote will be given by Peter Gutmann, whose book Cryptographic Security Architecture: Design and Verification should be on every security practitioner’s bookshelf. I know Peter from way back, when we were members of the PGP 2.0 development team.  That must have been in 1992 or so. Peter is one of those rare people who have deep knowledge at magnifications spanning several orders of magnitude. You can talk to him about X.509 minutiae or about human factors in security engineering (a topic about which, in his own words, he occasionally grumbles).

For most of us mere mortals, even a single paper at Usenix Security is a lifetime dream that often goes unfulfilled. Peter has seven papers published at Usenix Security, the first in 1996 and then six in a row from 1998 to 2003. (One wonders what happened in 1997.) His most-cited paper is also his most misunderstood: Secure Deletion of Data from Magnetic and Solid-State Memory (Usenix Security 1996) showed, first, how overwriting data on MFM- or RLL-encoded hard drives leaves enough traces of the original data to allow its eventual recovery, and, second, how to choose bit patterns that make the magnetic fields generated by the write head fluctuate in just the right way, so that deletion is “deep” and recovery of the original data all but impossible. The paper gave 35 such bit patterns, derived from an analysis of the specific data encodings, but the article was later badly misunderstood. It’s worth quoting from Peter’s epilogue to the paper:

In the time since this paper was published, some people have treated the 35-pass overwrite technique described in it more as a kind of voodoo incantation to banish evil spirits than the result of a technical analysis of drive encoding techniques. As a result, they advocate applying the voodoo to PRML and EPRML drives even though it will have no more effect than a simple scrubbing with random data. In fact performing the full 35-pass overwrite is pointless for any drive since it targets a blend of scenarios involving all types of (normally-used) encoding technology, which covers everything back to 30+-year-old MFM methods (if you don’t understand that statement, re-read the paper). If you’re using a drive which uses encoding technology X, you only need to perform the passes specific to X, and you never need to perform all 35 passes. For any modern PRML/EPRML drive, a few passes of random scrubbing is the best you can do. As the paper says, “A good scrubbing with random data will do about as well as can be expected”. This was true in 1996, and is still true now.

Indeed, when you use Apple’s Disk Utility to securely delete data on a drive, you will find three options: the first uses a single pass (probably zeroes or pseudorandom data), the second uses seven passes (probably a variant of DoD 5220.22-M), and the third, called “Most Secure”, uses, you guessed it, thirty-five passes. This despite the fact that “you never need to perform all 35 passes”.
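To make “a few passes of random scrubbing” concrete, here is a minimal sketch in Python of a multi-pass random overwrite at the file level. The function scrub_file and its parameters are my own illustration, not anything from Peter’s paper or from Disk Utility; note also that on journaling file systems, copy-on-write storage, or SSDs with wear levelling, overwriting through the file system gives no guarantee that the original blocks are erased, so serious sanitization has to target the whole device.

```python
import os

def scrub_file(path: str, passes: int = 3, chunk_size: int = 1 << 20) -> None:
    """Overwrite a file's contents in place with random data, `passes` times.

    Illustrative sketch only: file-level overwrites are not reliable on
    journaling or copy-on-write file systems, or on SSDs with wear levelling.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk_size, remaining)
                f.write(os.urandom(n))  # random scrubbing, as the epilogue recommends
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # push this pass out to the device

# Usage (hypothetical file name):
#     scrub_file("secret.dat")
#     os.remove("secret.dat")
```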

Peter’s keynote, From Revenue Assurance to Assurance: The Importance of Measurement in Computer Security, will draw lessons from the way telecoms used metrics to bill for mobile phone usage and apply them to the field of security. I am very much looking forward to this talk, and if you are involved with security, measurement, or both, I dare say you could benefit greatly from it.

So, do consider registering for MetriSec; I’m sure it will be worth your while.

[Edited 2012-07-09: Re-worded intro, fixed small language issues.]

About Stephan Neuhaus

Stephan Neuhaus has been working in security since 1992, when he was a member of the PGP 2.0 development team. He was subsequently a successful entrepreneur before returning to academia; he received his PhD in Software Engineering from Saarbrücken University in 2008. He is now a Senior Researcher at ETH Zurich, where he works on empirical software security in Prof. Plattner's Communication Systems Group.