Issue No. 1, January/February 2009 (vol. 7)
Published by the IEEE Computer Society
Jaynarayan H. Lala, Raytheon Company
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MSP.2009.11
An IT monoculture occurs when a large fraction of the computers in a computational ecosystem run the same software. Automatically creating diversity reduces the risks of deploying an IT monoculture, making it difficult for a single malware vector to wreak havoc. Two of the articles in this collection discuss techniques for creating this diversity; a third explores the risks of monocultures in the context of threats that are known or can be anticipated today.
Monocultures—systems running substantially the same software—are quite common today in business, government, and the networked information systems that increasingly control our critical infrastructures. The advantages of
embracing an IT monoculture include easier management, fewer configuration errors, more portable user skills, and interoperability. Yet the computers constituting an IT monoculture will, by definition, share vulnerabilities, which puts the entire networked system at risk from a rapidly spreading virus or other malware vector. This special issue of IEEE Security & Privacy provides in-depth coverage of the various facets of IT monocultures and surveys a few promising approaches to mitigating their risks.
The first article, by Fred B. Schneider and Kenneth P. Birman, argues against the conventional wisdom that software monocultures are necessarily bad. The authors base this contention on an analysis of attacker reactions likely to be evoked by successive generations of defenses, and they suggest that deploying a monoculture in conjunction with automated diversity is indeed a very sensible defense in the current threat environment. The authors group attacks into three classes: attacks against configuration, against technology, and against trust. A monoculture defends against some configuration attacks, for example, but opens new vulnerabilities to technology attacks; employing artificial diversity here will defend against some of those technology attacks but could increase vulnerability to trust attacks; and so on.
The next two articles focus on automated diversity. Safety-critical applications, such as control systems for spacecraft, aircraft, railways, and nuclear power plants, have long used design diversity in creating software replicas to guard against single-point failures in redundant systems. This diversity (either recovery block or N-version programming) is typically applied at a macro level—that is, the complete software application suite is replicated, with each replica assigned to an independent software development team. The price of this approach is significant additional development costs. In contrast, automatically produced micro-level diversity attempts to address the problem at a significantly lower cost.
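The macro-level design diversity described above can be illustrated with a minimal sketch of N-version programming: several independently developed replicas compute the same result, and a voter accepts the majority answer so that a fault in any single version is outvoted. The replica functions here (`team_a`, `team_b`, `team_c`) are hypothetical stand-ins for independently developed implementations, not from any of the articles.

```python
from collections import Counter

def n_version_execute(replicas, inputs):
    """Run independently developed replicas on the same inputs and
    return the majority result; a lone faulty version is outvoted."""
    results = [replica(inputs) for replica in replicas]
    winner, votes = Counter(results).most_common(1)[0]
    if votes <= len(replicas) // 2:
        raise RuntimeError("no majority: replicas disagree")
    return winner

# Three hypothetical, independently written implementations of one spec
# (summing a sequence), standing in for separate development teams:
def team_a(x):
    return sum(x)

def team_b(x):
    total = 0
    for v in x:
        total += v
    return total

def team_c(x):
    return x[0] + sum(x[1:]) if x else 0

print(n_version_execute([team_a, team_b, team_c], (1, 2, 3)))  # → 6
```

The cost driver the paragraph mentions is visible even in this toy: each replica is a separate implementation of the full specification, which is exactly what micro-level automated diversity tries to avoid.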
The article by Angelos D. Keromytis describes one such approach, instruction set randomization (ISR), a general technique for protecting against code-injection attacks. Such attacks represent a significant fraction of reported attacks in bug- and incident-tracking repositories over the past decade. ISR separates code from data by transforming legitimate code under a secret key shared with the execution environment, so injected code that lacks the correct transformation fails to execute as intended. The article describes ISR's use in two different domains—binary code-injection and SQL-injection attacks—its limitations, and future directions for improvements and applications.
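The key-based separation of code from data that ISR relies on can be sketched in the SQL-injection domain, in the style of keyword randomization: trusted query templates have their SQL keywords tagged with a secret key, and a proxy in front of the database strips the tag, rejecting any keyword that arrives untagged. This is an illustrative toy, not the article's implementation; the word-splitting, the keyword list, and the proxy functions are simplifying assumptions.

```python
import secrets

# Secret key shared only between the application build step and the
# database proxy (the "execution environment" in ISR terms).
KEY = secrets.token_hex(4)
KEYWORDS = {"SELECT", "FROM", "WHERE", "OR", "AND", "UNION", "DROP"}

def randomize(query):
    """Applied to developer-written query templates: tag each keyword."""
    return " ".join(w + KEY if w.upper() in KEYWORDS else w
                    for w in query.split())

def derandomize(query):
    """Applied by the proxy: strip tags; untagged keywords are injected."""
    out = []
    for w in query.split():
        if w.endswith(KEY):
            out.append(w[:-len(KEY)])       # legitimate, de-randomize
        elif w.upper() in KEYWORDS:
            raise ValueError(f"unkeyed keyword {w!r}: likely injection")
        else:
            out.append(w)                   # ordinary data, pass through
    return " ".join(out)

template = randomize("SELECT name FROM users WHERE id =") + " {}"
derandomize(template.format("42"))          # benign input passes
try:
    derandomize(template.format("42 OR 1=1"))
except ValueError as e:
    print("blocked:", e)                    # injected OR lacks the key
```

The attacker's injected `OR` never carries the key, so the proxy refuses the query; the same idea, applied to machine instructions rather than SQL keywords, is what defeats binary code injection.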
The second article, by Daniel Williams, Wei Hu, Jack W. Davidson, Jason D. Hiser, John C. Knight, and Anh Nguyen-Tuong, describes the Genesis software development tool chain. An innovative aspect of Genesis is the use of application-level virtual machine technology, which enables the application of diversity transforms at any point in the software tool chain. The authors conclude that Genesis helps provide a practical and effective defense against two widely used types of attacks—return-to-libc and code injection.
Research in using artificial diversity has most recently been funded by the US Defense Advanced Research Projects Agency's (DARPA's) Organically Assured and Survivable Information Systems and Self-Regenerative Systems programs, by the US Air Force Office of Scientific Research, and by the US National Science Foundation's (NSF's) Trustworthy Computing initiative. The articles in this special issue show both the promise of this new feature on the security landscape and its limitations. Much basic research remains to be done before the techniques will be sufficiently mature for transition and deployment in everyday IT systems, because diversity invariably complicates configuration management. Another area ripe for exploration is extending system architectures and hardware to utilize and support diversity techniques for intrusion tolerance—an ability of the system not only to prevent malware execution (by crashing the system or stopping execution, as current techniques do) but to continue providing critical functionality by invoking alternate versions of application software. This is particularly challenging for stateful systems. Diversity can also be applied to IT components other than software, such as data, protocols, firmware, and hardware, as has been done in some cases for safety-critical systems at a macro level. New challenges here include exploring applications of diversity to these entities at a finer grain, while minimizing the impact on cost and configuration management.
We thank all those who submitted articles to this special issue and the external reviewers for their multiple rounds of thorough reviews, comments, and suggestions. We're also grateful to editor-in-chief Carl Landwehr for suggesting the original idea for this special issue.
Jaynarayan H. Lala is a senior engineering fellow at Raytheon Company. He was a program manager at DARPA from 1999 to 2003, where he initiated and executed multiple programs in information assurance and survivability. His research interests include mission- and safety-critical systems and the practice of systems engineering applied to large, complex systems. Lala has an ScD in an interdisciplinary field of instrumentation from MIT. He is a fellow of the IEEE and an associate fellow of the AIAA. Contact him at email@example.com.
Fred B. Schneider is a professor at Cornell University's computer science department and chief scientist for TRUST, a multiuniversity, NSF-funded Science and Technology Center. His research focuses on understanding and supporting system trustworthiness. He was named Professor-at-Large at the University of Tromsø (Norway) in 1996 and has a DSc honoris causa from the University of Newcastle upon Tyne. Schneider has a PhD from Stony Brook University. He is a fellow of the ACM, the AAAS, and the IEEE. Contact him at firstname.lastname@example.org.