In 1970, the late Per Brinch Hansen wrote a seminal article ("The Nucleus of a Multiprogramming System") that articulated and justified what today we call policy/mechanism separation. He introduced the concept in the context of an operating system's design at a time when experts felt we lacked a clear understanding of what the ultimate shape of operating systems would be. The concept, like other powerful memes, was so compelling that it took on a life of its own and is now an article of faith in CS education—taught without reference to the original context.
The idea isn't original to computer science—it has existed for thousands of years. In martial terms, it's reflected in the popular paraphrase of Alfred, Lord Tennyson's poem "The Charge of the Light Brigade": "ours is not to reason why; ours is but to do and die." Separation of policy and mechanism has become an article of faith in our field because it's so powerful at helping us distinguish between the things that we can decide now and the things that we might need to change later. The key is identifying the knobs and dials of potential policy choices during system design and implementing a flexible way to allow those choices to be bound late.
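To make the "knobs and dials" concrete: the sketch below (all names are hypothetical, not drawn from any system discussed here) separates a mechanism, a bounded cache that knows *how* to store and evict entries, from a policy, a replaceable function that decides *which* entry to evict. The policy is bound late, when the cache is constructed, so it can be changed without touching the mechanism.

```python
# A minimal sketch of policy/mechanism separation (illustrative names only).
from collections import OrderedDict
from typing import Callable

def evict_oldest(entries: OrderedDict) -> str:
    """Policy: choose the least recently inserted key (FIFO)."""
    return next(iter(entries))

class Cache:
    """Mechanism: a bounded key/value store; the eviction *choice* is delegated."""
    def __init__(self, capacity: int, policy: Callable[[OrderedDict], str]):
        self.capacity = capacity
        self.policy = policy          # the "knob," bound late at construction
        self.entries: OrderedDict = OrderedDict()

    def put(self, key: str, value: object) -> None:
        if key not in self.entries and len(self.entries) >= self.capacity:
            victim = self.policy(self.entries)  # policy decides which entry
            del self.entries[victim]            # mechanism performs the eviction
        self.entries[key] = value

cache = Cache(capacity=2, policy=evict_oldest)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)           # capacity reached: policy selects "a" for eviction
print(list(cache.entries))  # ['b', 'c']
```

Swapping in a different policy function, say one that evicts the largest value, requires no change to `Cache` itself; that decoupling is exactly what lets a policy choice be revisited later. The precondition, as the discussion below notes, is that the mechanism can actually honor whatever the policy asks of it.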
In this issue of IEEE Security & Privacy, the article "Risking Communications Security: Potential Hazards of the Protect America Act" (p. 24) explores some of the hazards associated with a blind application of this principle to large infrastructures such as the Internet and the telephone system. Although this analysis is conducted in terms of a specific US law, it raises universal questions faced by free societies when considering the tension between individual privacy rights and the collective "right" to security.
What we see at play here are three large policy objectives in conflict: first, allowing the security establishment to scrutinize communications they legitimately believe would-be terrorists could use; second, protecting the privacy of the innocent people who use communications networks; and third, safeguarding the communications systems themselves so that their continued operation isn't jeopardized. The article cites several recent examples from around the world in which the introduction of ill-considered monitoring systems has led to disastrous unintended consequences.
To date, the public debate on the Protect America Act has focused on the first two issues: the trade-off between security and privacy. Whether a piece of legislation drafted in haste by one of the most divided US Congresses in history could have found a wise balance will be known only in retrospect. What the authors of "Risking Communications Security" clearly demonstrate is that the third policy objective certainly isn't addressed in the law.
But that's okay, you might be tempted to say: policy should be set independently of how it is implemented. As Tennyson illustrates, it's a good theory—but in practice, the unintended consequences can be shocking. One of the preconditions to doing a good job of separating policy from mechanism is that the knobs and dials offered to policy writers can actually be implemented. As the article's authors observe, the law assumes that communications networks can deliver high-quality information about the geographic location of end points. This isn't easy and might not be possible for many modes of communication, particularly voice over IP (VoIP), cell phones, and WiFi connections.
Leaving issues to be addressed later, as this law does, is a time-honored tradition in legislation. The Protect America Act expires in February 2008, so there will be at least one chance to renegotiate the terms. This means that this specific act won't affect capital spending by infrastructure builders in the US until after the rewrite. It gives interested people in the US and elsewhere a chance to mitigate the systemic risk by influencing the next rewrite, assuming that a more rational political conversation emerges in the US Congress sometime in the next few years. This article should be a key input to that conversation.