University of Maryland
IBM T.J. Watson Research Center
Pages: 40-41
Abstract—The proliferation of embedded devices is bringing security and privacy issues to the fore. We must ensure that we have learned from past problems and proactively attempt to prevent them in the future.
Embedded systems are quickly becoming ubiquitous in our daily lives. They come in many different shapes, ranging from personal digital assistants and disk controllers to home thermostats and microwave regulators. The key trend, however, is that all such devices are becoming more powerful, autonomous, and highly connected, following essentially the same growth curve as the Internet. In short, embedded systems could very likely have the same economic and social impact as the Internet itself.
Because the dynamics and market forces are similar, we might suppose that the problems with security and privacy are similar as well: Ideally, we can learn from our past successes and failures and apply those lessons within the embedded space. Unfortunately, implementing security in embedded systems differs dramatically from implementing it in full-featured, general-purpose computers.
Even with today's advanced technology, embedded systems typically have severely limited resources: processing power, memory, storage, and available energy are all in short supply.
In essence, the capabilities of embedded systems are approximately 10 to 15 years behind the general-purpose market. Yet we still expect these systems to provide today's security levels—not those of a decade ago.
Given the size and computational requirements of popular security protocols (such as SSL and SSH) and encryption algorithms (such as RSA and triple-DES), it is unclear how to accomplish this task.
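To make that cost concrete, consider RSA, whose private-key operation is dominated by a single modular exponentiation over a modulus of 1,024 bits or more. The sketch below (the modulus, exponent, and message are illustrative stand-ins, not real key material) times one such exponentiation. On a desktop this takes milliseconds; on an embedded processor running at a few megahertz with no hardware multiplier, the same operation can stretch into seconds.

```python
import time

# Illustrative RSA-sized values only -- NOT real key material.
bits = 1024
n = (1 << bits) - 159          # arbitrary odd 1024-bit modulus
d = (1 << (bits - 1)) + 12345  # stand-in private exponent
m = 0xDEADBEEF                 # stand-in message

start = time.perf_counter()
c = pow(m, d, n)               # one RSA-sized modular exponentiation
elapsed = time.perf_counter() - start
print(f"1024-bit modular exponentiation took {elapsed * 1000:.2f} ms")
```

The gap between this cost and the cycle budget of a thermostat-class microcontroller is exactly why lightweight algorithm design matters in the embedded space.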
Efforts such as the Advanced Encryption Standard (AES) from the National Institute of Standards and Technology move in the right direction: the selection criteria for the AES algorithm included several requirements helpful to embedded systems, such as efficiency in hardware and in memory-constrained environments. However, AES alone does not solve the entire set of problems.
As a result, many people decide to "roll their own" protocols and algorithms—a dangerous proposition because the approach lacks significant peer review. While it can be done, doing it right is difficult. We still must do a great deal of research to provide robust security protocols supporting embedded systems.
A major lesson from the Internet could be called, with apologies to Sir Isaac Newton, the third law of Internet security: Every security technology has an equal and opposite use. For instance, digital rights management (DRM) gives content owners the ability to protect and control their investment, but when done improperly (as it usually is), DRM can erode First Amendment rights, as seen in Felten v. RIAA (http://www.eff.org/sc/felten). In many cases, DRM eliminates the long-standing "fair use" doctrine that lets individuals make backup copies and use content as they please.
In fact, Senator Fritz Hollings plans to introduce new federal legislation entitled the Security Systems Standards and Certification Act that, on the basis of its name alone, appears promising. Examining the legislation's actual content paints an entirely different picture, however: The proposed act would require all products that process digital media to implement "certified" security mechanisms.
Including security in information technology is usually a good thing, but do we really want to impose civil penalties for creating technology that "does not include and utilize certified security technologies"? Do we want to see security used as a fulcrum to institute bad laws and potentially further erode individual privacy?
The feature articles in this special issue explore two big issues in the security and privacy space as they relate to both consumers and providers of digital content and financial services: their privacy and their pocketbooks.
Can we accomplish digital rights management without eroding long-standing individual rights? To highlight this aspect of the DRM problem, we offer the first public presentation of proposed technology providing DRM capabilities for recordable media. C. Brendan S. Traw's "Protecting Digital Content within the Home" presents a technical description of the mechanisms designed as part of the Content Protection for Recordable Media (CPRM) system. As a counterpoint, we also present Dan S. Wallach's short article, "Copy Protection Technology Is Doomed."
Another major Internet lesson is that ease of use and security are usually at odds with each other. To be successful, embedded systems, by virtue of their ubiquity, must remain transparent to users. Peter Bergstrom and colleagues' "Making Home Automation Communications Secure" describes how Honeywell is connecting home automation systems to the Internet so that users can control their homes while away. The article also illustrates the complex design trade-offs that developers must make to provide encryption in the resource-poor embedded space.
In "Building the IBM 4758 Secure Coprocessor," Joan G. Dyer and colleagues present a design retrospective of IBM's 4758 physically secure coprocessor for protecting both data and computation in potentially hostile environments. In addition to providing physical protection, their design goals encompassed the equally challenging problems of securely downloading applications into the secure environment and remotely identifying and authenticating the embedded device. The IBM 4758 was the first device to obtain a FIPS 140-1 Level 4 validation, the highest level of commercial cryptographic certification currently available.
Many people incorrectly view security in isolation. A single security mechanism or a single certification cannot provide adequate security. Instead, we must view security holistically, taking the overall composition of the security mechanisms and processes into consideration. This is what makes providing security an extremely difficult task.
Bringing this point home, Mike Bond and Ross Anderson's "API-Level Attacks on Embedded Systems" describes protocol flaws in the IBM 4758 secure coprocessor. These flaws make it possible to extract application secrets without actually opening the tightly sealed, FIPS-certified device—demonstrating that a certified, physically secure device is not a security panacea. (In all fairness to the FIPS validation process, the application that revealed its keys was not certified.) The article also points to an interesting avenue of attacks that exploit fundamental design flaws—the mathematical properties of protocol operations—instead of the protocol implementation flaws that code-injection attacks exploit (for example, buffer overflows).
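To illustrate the flavor of this class of attack (the toy model below is our own invention, not IBM's actual CCA interface or the specific flaw Bond and Anderson found), consider a hypothetical hardware token whose wrap and decrypt operations fail to enforce key-type separation. Every call the attacker makes is individually permitted by the API, yet their composition leaks a key that should never leave the device:

```python
import hashlib
import os

def stream_xor(key: bytes, data: bytes) -> bytes:
    """Toy keystream cipher (illustrative only, NOT cryptographically sound)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class ToyHSM:
    """Hypothetical tamper-proof token: keys stay inside, callers get handles."""

    def __init__(self):
        self._keys = {}  # handle -> (key bytes, key type)
        self._next_handle = 0

    def create_key(self, key_type: str) -> int:
        handle = self._next_handle
        self._next_handle += 1
        self._keys[handle] = (os.urandom(16), key_type)
        return handle

    def wrap(self, wrapping_handle: int, target_handle: int) -> bytes:
        # Design flaw: any key may serve as a wrapping key -- the API never
        # checks that it is a dedicated key-encrypting key.
        wrapping_key, _ = self._keys[wrapping_handle]
        target_key, _ = self._keys[target_handle]
        return stream_xor(wrapping_key, target_key)

    def decrypt_data(self, handle: int, ciphertext: bytes) -> bytes:
        key, key_type = self._keys[handle]
        if key_type != "DATA":
            raise PermissionError("decrypt_data requires a DATA key")
        return stream_xor(key, ciphertext)

hsm = ToyHSM()
pin_key = hsm.create_key("PIN")    # secret that must never leave the token
data_key = hsm.create_key("DATA")  # ordinary key the attacker controls

# The attack: wrap the PIN key under a DATA key, then ask the token to
# "decrypt" the resulting blob. The tamper-proof casing is never touched.
blob = hsm.wrap(data_key, pin_key)
leaked = hsm.decrypt_data(data_key, blob)
print("extracted key:", leaked.hex())
```

The fix is key separation: wrap should accept only keys typed as key-encrypting keys, and decrypt_data should refuse anything produced by wrap. The attacks Bond and Anderson describe are more subtle, but they share this character: composing individually legitimate operations to defeat the overall design.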
Most people interact with an embedded system at some point during their day without knowing it. The embedded system may be in their automobile, refrigerator, or cellular phone. These devices' continuing proliferation and interconnectivity will bring security and privacy issues to the fore. Although they won't be identical to those raised in the Internet space, these issues will have the same fundamental basis. As such, we must ensure that we have learned from past problems and proactively attempt to prevent them in the future.