Issue No. 01 - January/February (2007 vol. 5)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MSP.2007.18
Ross Anderson, Cambridge University
"We must first agree that software security is not security software," Gary McGraw writes in the first chapter of his new book, Software Security: Building Security In. Spot on! Things break because software is just about everywhere, and we rely on it for just about everything; we had software before the Internet, but we couldn't have the Internet until we had software. Software has bugs, and some of them cause vulnerabilities. Trying to compensate for bugs by adding a layer of special security software can only get you so far—often not far enough.
McGraw's book starts off with a chapter that introduces bug metrics—which is very welcome, as the study of vulnerability statistics has been one of the new and interesting fields of security research in the past few years.
The second section goes through seven security "touchpoints"—components of an assurance program. McGraw lists these in descending order of importance as code review, architectural risk analysis, penetration testing, risk-based security testing, abuse cases, security requirements, and security operations.
This ranking got me thinking. My first reaction was disagreement: security needs to be engineered into a system from the start, so you have to begin with the abuse cases, derive the security policy, refine that into security requirements, define the architecture, and take it from there. Quite a few times I've worked on an early electronic version of an existing application whose initial attempts at security failed because the designers hadn't stopped to think about what security actually meant for them. Is the main threat to privacy or to safety? Are the likely bad guys insiders or outsiders? There are also practical and political reasons to build security in from the requirements stage—if you wait until the code is almost ready to ship and then point out that it needs extensive rewriting, you'll be unpopular or ignored.
Several things that I'd have put in a security requirements chapter turn up early in his doxology under "architectural risk analysis." Although his book doesn't emphasize the requirements analysis that would be prudent for software security engineering in a completely new application, it's quite workable for engineers working on a fairly well-understood problem such as writing the next version of an operating system or a bank-accounting package.
Software Security provides some practical guidelines on how to change the business culture of a software team that produces insecure code (stop the bleeding, harvest the low-hanging fruit, establish a foundation, and so on). It has extensive lists of abuse cases, and finishes up on a strong note with a massive taxonomy of coding errors.
McGraw's risk-based approach to software life-cycle management also runs nicely parallel to best practice in safety-critical systems, which I expect will become increasingly important; as potentially lethal devices such as automobiles acquire ever more code, security vulnerabilities will become safety hazards. Firms will need a unified approach to managing software safety and security together.
Overall, this was the best new security book I've read this past year. It certainly made me think more than any other security book I've read recently. Setting aside the question of whether you look for bugs first in the specification or in the code, McGraw's book is clearly going to become one of the classics; I expect it will stay on my near shelf and become a well-thumbed reference for the serious practitioner.
Ross Anderson is a professor of security engineering at Cambridge University. His research interests include security economics, filtering systems, and cryptology. Anderson has a PhD in computer science from Cambridge University. Contact him at firstname.lastname@example.org.