Issue No. 6 - November/December 1999 (vol. 3), pp. 52-54
Published by the IEEE Computer Society
If history is any indication, the information technology community is incapable of constructing networked information systems that prevent unauthorized intrusions and abuse. Even organizations that take computer security seriously remain susceptible to attacks of many forms, as several significant penetrations of U.S. Department of Defense (DoD) computers have demonstrated. Denning relays an account of hackers from the Netherlands who penetrated 34 American military sites on the Internet, including military supply systems supporting Operation Desert Storm [1]. Between April 1990 and May 1991, the attackers gained information about the exact locations of U.S. troops, their weapons, and the movements of U.S. warships. They even gained the ability to manipulate military supply systems and to change which supplies were shipped to which locations. More recently, DoD computers have this year been the target of a focused attack, dubbed "Operation Moonlight Maze," originating from Russia. Publicly confirmed damage due to the attack has been significant [2].
Vulnerability of Networked Systems
Numerous factors contribute to the vulnerability of our networked information systems. First, the dramatic increase in network connectivity during this decade has enabled attacks to be conducted from a distance, across many administrative domains, and often anonymously.
Second, the state of software engineering practice, particularly for Internet applications, largely ignores issues of assurance. Most software today is "assured" by the penetrate-and-patch approach: when someone finds a vulnerability, the software manufacturer issues a patch. This approach has proved inadequate, but it is economically attractive for the manufacturer. And when such software executes on the same system as other software similarly "assured," the combination can introduce further vulnerabilities.
Third, poor administration practices and tools can leave a system susceptible to vulnerabilities even after appropriate patches have been issued, and can introduce additional vulnerabilities into an otherwise relatively secure system.
Lastly, even a system that has been well hardened against outside attack can be undermined by a legitimate user, whether through operational error or intentional manipulation. Since there is little reason to believe that any of these factors will be eliminated in the near future, we need to rethink how we build networked information systems.
Protecting Critical Services
Survivability has emerged as a new requirement for these systems. The term "survivable" refers to distributed architectures that can continue to meet application requirements despite successful penetrations into component computers. Survivability is an especially important requirement given that increasingly critical applications are being migrated to interconnected networks. Examples include the computers that run the U.S. electric power grid and some 911 telephone systems, both of which a 1997 National Security Agency exercise found to be reachable from the Internet and vulnerable to attacks that could result in severe degradation [3,4].
Survivability is an issue at every technological level of a networked information system. At the network level, where the service to be preserved is delivery of packets to their intended destinations, survivability may involve redundantly routing packets to the destination via multiple disjoint routes [5]. In this way, an attacker who compromises a network router cannot prevent packets from reaching their destination. Above the network level, a distributed storage facility might employ redundant storage to detect or mask the manipulation of individual data stores [6-9]. Cryptographic techniques can further be employed to prevent the disclosure of sensitive data when a component is penetrated [10].
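To make the storage ideas concrete, the sketch below illustrates two of these building blocks in miniature: masking the manipulation of individual data stores by voting over replicas, and splitting data cryptographically so that a single penetrated store learns nothing. All names and interfaces here are hypothetical illustrations rather than any particular system's API; real survivable stores, such as those built on Byzantine quorum protocols, must also handle concurrency, partial failures, and authentication.

```python
# Minimal sketch of two survivable-storage building blocks.
# Interfaces are hypothetical; this is not any deployed system's API.

import os
from collections import Counter

F = 1              # number of compromised stores to tolerate
N = 2 * F + 1      # with synchronous access to all N replicas,
                   # a majority (f+1) of them is guaranteed honest

# --- Integrity: mask manipulation of individual stores by voting ------

def replicated_write(stores, key, value):
    # Write the value to every replica; honest replicas keep it intact.
    for s in stores:
        s[key] = value

def voted_read(stores, key):
    # Any value vouched for by f+1 stores comes from at least one
    # honest store, so it must be the value that was last written.
    votes = Counter(s.get(key) for s in stores)
    value, count = votes.most_common(1)[0]
    if count < F + 1:
        raise RuntimeError("too many corrupted stores")
    return value

# --- Confidentiality: no single store learns the data -----------------

def split_secret(data: bytes, n: int):
    # XOR-based n-of-n secret splitting: reconstruction requires every
    # share, so penetrating one store reveals nothing about the data.
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    last = data
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    return shares + [last]

def join_secret(shares):
    out = shares[0]
    for share in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, share))
    return out

if __name__ == "__main__":
    stores = [{} for _ in range(N)]
    replicated_write(stores, "supply-route", "depot A to port B")
    stores[0]["supply-route"] = "forged"      # one compromised replica
    assert voted_read(stores, "supply-route") == "depot A to port B"

    shares = split_secret(b"warship positions", N)
    assert join_secret(shares) == b"warship positions"
```

The voting scheme works because, with n = 2f+1 replicas all consulted, any value reported by f+1 stores is backed by at least one honest store. The XOR splitting shown is the simplest n-of-n scheme; threshold schemes such as Shamir's secret sharing trade its all-or-nothing reconstruction for better availability when some stores are unreachable.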
Such techniques generally must be coupled with intrusion detection capabilities to monitor for attacks, response capabilities to stem the attack once detected, and a recovery strategy to repair damage. Throughout these operations, the critical services provided by the system must remain available and correct.
Framing the Issues
This special section contains two articles that shed greater light on the issue of survivability. The first, "Survivability: Protecting Your Critical Systems," by Robert J. Ellison et al. at CERT, provides background on the concept of survivability and a glossary of terms, and outlines work under way at CERT and elsewhere to address survivability in networked information systems. The second, "Building Trustworthy Systems: Lessons from the PTN and Internet," by Fred B. Schneider et al., is based on a National Research Council report that identifies vulnerabilities in the United States' critical infrastructures and proposes a research agenda for addressing them [11]. It analyzes the vulnerabilities inherent in the existing Internet, using the telephone network as a point of comparison. The full report is recommended reading for technologists interested in this issue.
These articles focus on framing the issues that motivate research in survivable systems rather than on deployed solutions to specific subproblems. This slant is warranted: deployed survivable networked information systems are largely nonexistent outside the research community today. However, survivability has rapidly come to the forefront of the national research agenda. For example, the Defense Advanced Research Projects Agency is now funding a broad range of research on the construction of survivable distributed systems. I am optimistic that this research will yield more robust networked information systems to support the range of critical Internet applications that we are beginning to see now and will see in the future.

References

Michael K. Reiter heads the Secure Systems Research Department at Bell Laboratories, Lucent Technologies. He is currently a principal investigator on a DARPA contract to build Fleet, a highly scalable and survivable distributed data-storage architecture. His research interests include all areas of computer and communications security, electronic commerce, and distributed computing. His work on system survivability includes the Rampart system, which served as the foundation for AT&T's Omega cryptographic key management service, and the Phalanx system. Reiter received a BS in mathematical sciences from the University of North Carolina in 1989, and MS and PhD degrees in computer science from Cornell University in 1991 and 1993, respectively. During 1998-2000 he will serve as program chair of the flagship computer security conferences of both the ACM and the IEEE. He is a member of the Infosec Science and Technology Study Group, chartered by the Infosec Research Council to advise government agencies on funding priorities for computer security research.