Vol. 10, no. 6, Nov.-Dec. 2012, pp. 17-19. Published by the IEEE Computer Society.
Dan Thomsen , SIFT, LLC
Jeremy Epstein , SRI International
Peter G. Neumann , SRI International
Is the computer security field really old enough to have lost treasures? Will a granite punch card with ancient COBOL contain some code fragment that produces a better firewall? Hardly. The computing environment changes so much and so radically that implementation details quickly lose relevance. However, concepts and key insights can serve modern developers just as well today as they served the builders of ancient systems in the 1980s (for instance, see the "Lost Lessons: Election Systems" sidebar). Conference submissions from new security practitioners often reinvent key security concepts, highlighting the need to uncover these lost gems. The future should build on the past, not constantly reinvent it.
Economics and Archaeology
Eras of rapid change foster great economic opportunity as people continually adapt to a shifting environment. All that money focuses the industry on getting the next release out the door, and in the flurry of change it's easy to lose track of key concepts. Contrast this with mathematics, where centuries often elapsed before someone found a useful way to apply a new theorem.
So what can we do for computer security in this era of rapid change? We could have people pore over old government documents, corporate reports, and papers looking for lost gems. The payoff remains comparable to traditional archaeology: you have to move a lot of dirt before you find something worth keeping. For example, the "Rainbow Series"—a set of government computer security guidelines (http://csrc.nist.gov/publications/secpubs/rainbow)—contains a lot of good computer security science but is encased in the strata of multicolored evaluation criteria. And despite our gentle cajoling, no one wrote an article for this issue on the science of the "Orange Book"—the cornerstone of the Rainbow Series that addresses building a secure operating system—possibly because it would require significant effort.1
In traditional archaeology, people study artifacts to determine how people lived in the past. By looking back at security papers, we can determine how people thought about computer security. Paul Karger and Roger Schell pointed out,2 and Ken Thompson made famous in his article "Reflections on Trusting Trust,"3 that a Trojan horse can latch itself onto a compiler and duplicate itself in every application compiled with that compiler, including new versions of the compiler itself. This was far-fetched thinking at the time. In addition, David Elliott Bell and Leonard LaPadula determined that the operating system must enforce properties that prevent a user's agents from copying information to a lower classification: although we can trust cleared humans not to downgrade information, a Trojan horse acting on a user's behalf could.4 When Bell and LaPadula discovered this key property, they didn't even name it; they called it the "*-property," the asterisk standing in as a placeholder for some future name. Yet this unnamed property still drives high-assurance systems.

Both these articles shifted the way we think about computer security, which was much simpler before they were published. Yet today we still have trouble getting new security practitioners to believe just how tricky attackers can be. If you work in computer security long enough, you'll hear people say, "no one would do that" or "where does this paranoia end?" However, if you can imagine a way to compromise a system, attackers can implement it.
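To make the *-property concrete, here is a minimal sketch of the two Bell-LaPadula checks a reference monitor would make: a subject may not read an object above its level (the simple security property) and may not write to an object below its level (the *-property). This is our illustration, not code from the cited papers; the level names and helper functions are hypothetical.

# Minimal Python sketch of Bell-LaPadula mandatory access control checks.
# Level names and the can_read/can_write helpers are illustrative only.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def can_read(subject_level, object_level):
    # Simple security property: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # *-property: no write down.
    return LEVELS[subject_level] <= LEVELS[object_level]

# A Trojan horse running with a SECRET user's privileges can read SECRET data,
# but the *-property keeps it from copying that data into an UNCLASSIFIED file.
assert can_read("SECRET", "SECRET")
assert not can_write("SECRET", "UNCLASSIFIED")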
In This Issue
If we can recreate that "Eureka!" moment in new security practitioners, we can foster a computer security science that builds on the past rather than reinventing it for the latest fad. The articles in this issue attempt to do this.
In "A Contemporary Look at Saltzer and Schroeder's 1975 Design Principles," Richard E. Smith tracks the use and evolution of the concepts from Jerome Saltzer and Michael Schroeder's seminal 1975 article, "The Protection of Information in Computer Systems." 5 As the technology and political climate changed, the way people applied and restated these principles likewise changed.
Computer security science is really a science of engineering, constantly trading off functionality against security to mitigate system risk. In "Lessons from VAX/SVS for High-Assurance VM Systems," Steve Lipner, Trent Jaeger, and Mary Ellen Zurko give us a blow-by-blow account of the engineering insights and decisions from building a high-assurance operating system in the early 1990s.
What if security research got a "do-over"? Howard Shrobe and Daniel Adams ask and answer this question in "Suppose We Got a Do-Over: A Revolution for Secure Computing," looking at DARPA programs designed to examine security without the economic shackles of compatibility requirements that current systems face.
How do we know when we achieve security? Evaluation and certification are supposed to be the yardsticks for measuring it, but they can fail just like any other system component. Steven J. Murdoch, Mike Bond, and Ross Anderson examine the certification standards from the 1970s on and trace their evolution and impact on deployed security in "How Certification Systems Fail: Lessons from the Ware Report."
Finally, Jeffrey T. McDonald and Todd R. Andel look at how key insights can improve information assurance education in "Integrating Historical Security Jewels in Information Assurance Education." Starting from 1883, they examine system design principles that every security practitioner today should know.
To round out this issue, we sent Earl Boebert, a longtime computer security practitioner, out with his trusty shovel to comb the sands of information security for lost gems. He filled his sack in no time. Boebert's The Fox Herders Guide: How to Lead Teams That Motivate and Inform Organizational Change (Bitsmasher, 2011) contains many of these gems. Others come from Rick Proto, who was director of research at the National Security Agency early in Boebert's career. Rather than providing a single list of these gems, we've scattered them throughout the issue. Readers will benefit from considering each of them deeply.
Each of these treasures presents a key shift in a security practitioner's thinking. We need to pinpoint these shifts, not as a matter of historical documentation, but because new security practitioners need to undergo those same shifts in thinking to produce the next wave of computer security engineers.

Happy hunting!

References

Dan Thomsen is a principal researcher at SIFT, LLC. Contact him at d.j.thomsen@ieee.org.
Jeremy Epstein is a senior computer scientist at SRI International. Contact him at jeremy.j.epstein@gmail.com.
Peter G. Neumann is principal scientist at SRI International's Computer Science Lab. Contact him at neumann@csl.sri.com.