Issue No. 3, May/June 2006 (vol. 4), pp. 15-19
Published by the IEEE Computer Society
James X. Dempsey , Center for Democracy and Technology
Ira Rubinstein , Microsoft
ABSTRACT
It is now widely recognized that technical design decisions about the Internet can have lasting impacts on public policy and individual rights. Stanford law professor Lawrence Lessig offered perhaps the most popular explanation of the "code is law" concept in Code and Other Laws of Cyberspace (Basic Books, 2000), but a rich literature exists on technology's impact on society and policy.
It is now widely recognized that technical design decisions about the Internet can have lasting impacts on public policy and individual rights. Stanford law professor Lawrence Lessig offered perhaps the most popular explanation of the "code is law" concept in Code and Other Laws of Cyberspace (Basic Books, 2000), but a rich literature exists on technology's impact on society and policy. The Internet and other information and communications technologies (ICT) are shaped by technical decisions by a range of bodies—from entities with broad perspectives such as the IETF and the W3C to traditional telecom standards organizations and entities focused on single applications. Although sometimes unintended and unforeseen, the policy implications of technical choices that arise from these and other organizations can be even more far-reaching than traditional public policy processes.
But if code is law, the opposite is equally true: legislative and regulatory decisions can impact technology design. Sometimes, technology companies and their public interest allies succeed in explaining to policymakers that government-dictated specifications will stifle innovation without serving intended public interest goals. The encryption key-recovery debate of a decade ago is one example: technologists and civil liberties advocates persuasively argued that legally mandated key recovery schemes were bad for both security and privacy. In other cases, however, the "no technology mandates" principle has fallen on deaf ears. In the case of the broadcast flag, for example, some industry sectors pushed for, and others were willing to accept, a design mandate that would have required every computer and Internet-connected device capable of displaying video to follow copying restrictions embedded in the content. (A federal court struck that one down, but the debate over the relationship between copyright law and digital rights management technology is far from resolved.) And the US Federal Communications Commission (FCC) last year extended to the Internet a law requiring that communications networks be "wiretap friendly."
As Albert Gidari puts it in his article on surveillance design mandates, lawyers and engineers are joined at the hip. This isn't necessarily a bad thing, but the growing interdependence of policy and ICT means that engineers and lawyers must engage in dialogue. To do so, they must learn each other's languages.
The Internet and the technologies it supports are at a crucial juncture. On the one hand, spam, spyware, and identity theft threaten the trust that's crucial to both e-commerce and democratic participation online. The Internet's centrality to critical infrastructures makes its vulnerability a national security issue as well as a corporate and personal concern. Yet, responding to attacks and other harmful activity can itself raise liability questions. At the same time, major trends in location awareness, search, and storage are leaving personally identifiable information outside the protection of traditional privacy rules.
In his book, Active Liberty (Knopf, 2005), US Supreme Court Justice Stephen Breyer comments on how technology has outpaced legal privacy protections, and hence on the need for a "national conversation" to occur in multiple settings—publications, meetings, legislative hearings, court cases—among the many constituencies—engineers, business people, elected officials, and ordinary citizens—whose lives the new technology is affecting.
In this issue
It's in that spirit that we conceived this special edition. It's time for a broad-based dialogue about the ways in which technology and policy interrelate. Engineers should be aware of legal principles, so that they can design products and services in ways that promote security, privacy, and user control. Legislators and regulators, meanwhile, should understand how the Internet's uniquely open and decentralized nature supports innovation in ways quite unlike traditional broadcast and telephony technologies.
We asked five tech-savvy lawyers to address different aspects of the relationship between law and technology. We hope their articles dispel some legal myths in the tech community, for if there is a risk in technologists' being unaware of their work's legal implications, there is the opposite risk of overinterpreting the law. On various occasions, we've heard systems administrators or engineers say "CALEA (Communications Assistance for Law Enforcement Act) requires X" or "the Patriot Act requires Y," when no such mandate actually exists. (The USA Patriot Act actually contains a section expressly stating that it doesn't impose any design obligations.)
Although the articles here focus on US law, the issues they examine, and the need for dialogue between policymakers and technologists, are global. In Europe, governments are requiring Internet service providers and others to retain traffic data for extended periods of time, even if retention of such data serves no technical or business purpose. China is going to great lengths to control network operations and technical standards under its view of its national interests. Technical decisions about location awareness and identity authentication services and other issues have global impacts. In one of the many signs of the globalization of standards processes, for example, the W3C recently opened an office in China.
The digital Fourth Amendment
Patricia Bellia, a former US Justice Department lawyer and coauthor of a leading electronic surveillance treatise, sets the stage by explaining how the Constitution's Fourth Amendment, which protects the right to privacy against unreasonable government searches and seizures, has been interpreted so far in the digital age. She describes in layperson's terms the basic legal framework developed by the courts and legislators. Although the Constitution's framers had no notion of the evolution of communications and information technology, they used some pretty broad terms: the Fourth Amendment protects "persons, houses, papers, and effects." Nearly 40 years ago, the US Supreme Court declared that the Fourth Amendment protects "people, not places," and even the current conservative Supreme Court has made it clear that it will not allow technological developments to erode core privacy principles.
Today, no one doubts that the Fourth Amendment's protections encompass digital materials. However, Bellia points out that the law governing the privacy of communications technologies is anything but straightforward. Congress last addressed the issue in 1986, in the landmark Electronic Communications Privacy Act (ECPA). Since then, further technological developments have strained the statutory categories that Congress established, and courts have struggled to apply the provisions in circumstances Congress simply couldn't have foreseen. As Bellia explains, these laws distinguish among three different types of government conduct: first, the real-time acquisition of electronic communications as they're transmitted; second, the acquisition of electronic communications from a service provider storing them on a subscriber's behalf; and third, the acquisition of "transactional" data such as information on a communication's source or destination. Of course, these distinctions may be blurring. As Bellia points out, the current laws are a mixture of technology-neutral principles and technology-specific distinctions that probably no longer correspond to user expectations.
Bellia's article clearly distinguishes between what is settled and what is uncertain. (Too often, law discussions take one of two extreme positions: everything is determined or everything is contingent. In fact, some clear answers exist, although not as many as we might wish. The trick is figuring out which is which.) Bellia points out that some of the difficulties in applying the surveillance law framework to electronic communications are overstated, but she concludes by highlighting areas urgently in need of congressional attention—not only because the statutory framework is unworkable at present but also because emerging technological developments threaten to put further pressure on that framework.
Designing wiretap-friendly networks
Arguably the most direct legal intervention to date into communications technology's design has been CALEA. Albert Gidari, a Seattle lawyer who represents many communications companies, explains that CALEA was passed in 1994 by a Congress concerned that developments in telephony were making it harder to carry out wiretaps and other forms of electronic surveillance. Rather than decipher all the law's provisions, however, Gidari focuses on the important lessons CALEA offers about standards processes. CALEA requires all telecommunications carriers to ensure that their networks have certain interception capabilities baked in. However, as Gidari notes, the statute doesn't tell manufacturers or service providers how to meet the capability requirements; rather, individual entities can decide how to comply with the law, either ad hoc or through standards-setting organizations. Although the absence of standards is no excuse to avoid CALEA compliance, the act created a "safe harbor" for manufacturers or service providers whose equipment, facilities, or services are in compliance with publicly available technical requirements or standards adopted by an industry association or standards-setting organization.
Gidari argues that CALEA's deference to standards processes was in fact a major defect in the act. Law enforcement agencies—particularly the FBI—came into the standards process with a long list of desired capabilities. After industry acceded to some and rejected others, the US Justice Department petitioned the FCC to amend the standard to provide all of the additional capabilities. After lengthy proceedings and a trip to federal court, the FCC granted most of law enforcement's requested enhancements, including capabilities that were never available to the government in plain old telephone networks.
That same process is now beginning with respect to voice over IP (VoIP) and broadband Internet access. The courts will soon rule (if they haven't already by the time you read this) on whether CALEA covers these Internet services, as the government contends that it does. Regardless of what the courts decide in this case, current law states that law enforcement agencies may obtain court orders requiring any service provider to provide technical assistance to wiretap any kind of electronic communications, including those based on TCP/IP.
From CALEA's complex history, Gidari draws some clear lessons. First, he urges Congress to be crystal clear about what is or isn't required when it passes technology mandates. He further concludes that standards-setting bodies are ill-suited to resolving policy disputes. CALEA, for example, doesn't define certain key terms. Leaving engineers to guess what Congress meant by "call-identifying information" is unlikely to yield a standard that actually meets either the law or law enforcement needs. Finally, Gidari calls it a myth, albeit one now enshrined in law, that standards ever really precede the development and deployment of new services or capabilities. More often than not, standards follow innovation and market acceptance, so there's no real "industry" standard until a critical mass of industry participants is willing to share information to create one. In CALEA's case, surveillance standards development was divorced from the standardization of the underlying service itself. Gidari ends on a pessimistic note, saying that "the standards setting process for surveillance under CALEA is permanently broken."
Legal specs for filtering software
Erin Egan, a leader in technology law, offers a more optimistic view of the relationship between law and technology. She starts by outlining a daunting list of legal claims that the makers of antispyware and filtering technologies could face: libel and defamation, tortious interference, false advertising, unfair competition, deceptive trade practice, criminal laws, invasion of privacy, and antitrust. The list could easily put the fear of litigation into any engineer's heart. But, fortunately, Egan provides a set of precautions for developers and users to take to protect themselves from litigation. Moreover, these precautions are relatively simple. In general, they're common-sense steps in promotion, design, functionality, and user interface:

    • Don't hype your product.

    • Objectively describe malware using functional terms rather than conclusory labels, such as "spyware."

    • Provide clear notice about the scope and operation of the product.

    • Encourage user control.

    • Afford redress.

    • Foster interoperability.

    • Act in good faith.

Taking these small measures up front not only protects consumers without degrading the quality of Internet safety products but also helps vendors avoid legal pitfalls years down the road.
One striking aspect of Egan's recommendations is that many of them are consistent with consumer interests. Her article is thus a welcome antidote to the kind of lawyering practiced by the attorneys who drafted the unintelligible "privacy notices" we receive from banks, which seem intended to confuse and confound consumers. Taking a very different approach, she advises security and safe-surfing tool designers and marketers that the best way to avoid legal liability is to work with consumers by enhancing user control, notice, transparency, and choice. For example, because the ECPA provides a consent-based exception to its prohibition on email interception, an entity that develops or implements a spam filter can help protect itself against lawsuits brought by email senders by obtaining authorization from end users to scan and block email based on articulated criteria. Another pro-consumer recommendation that has widespread applicability is to minimize the collection and use of personally identifiable information. If you don't collect it in the first place, you can't be accused of misusing it, and you aren't responsible for ensuring its security.
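The consent-based approach can be made concrete in code. The following sketch (not legal advice; the consent store and function names are illustrative assumptions, not any real product's API) gates scanning on recorded, affirmative end-user consent, reflecting the ECPA consent exception described above: mail for users who haven't opted in passes through unscanned.

```python
# Illustrative sketch: scan and block mail only for users who have
# affirmatively consented to filtering based on articulated criteria.

consent_records = {"alice@example.com": True, "bob@example.com": False}


def may_scan(mailbox):
    """Scan only when the user has affirmatively opted in."""
    return consent_records.get(mailbox, False)  # default: no consent


def filter_inbound(mailbox, message, is_spam):
    """Deliver, block, or pass through untouched, depending on consent."""
    if not may_scan(mailbox):
        return "delivered-unscanned"  # no consent: don't inspect content
    return "blocked" if is_spam(message) else "delivered"


print(filter_inbound("alice@example.com", "WIN NOW", lambda m: "WIN" in m))
print(filter_inbound("bob@example.com", "WIN NOW", lambda m: "WIN" in m))
```

Defaulting to "no consent" for unknown users is the conservative choice here: the filter never inspects content it hasn't been authorized to inspect.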
Egan points out that the risk of successful libel claims should diminish if a reasonable basis exists for using particular classification criteria in a filtering decision (such as the fact that a sending IP address doesn't match its purported domain). The Anti-Spyware Coalition's work, coordinated by the Center for Democracy and Technology, provides one source of objective definitions and criteria. In addition, Egan notes, because Internet safety products will inevitably yield false positives, developers and vendors should design processes that let entities that believe they've been adversely impacted by such products report and resolve complaints. Thus, she recommends, spam filters should report back to the sender of a captured email not only the reason for the capture but also the means by which the sender can complain about any message it believes to have been wrongly classified.
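Egan's two recommendations (use an objective classification criterion, and give senders a path to redress) can be sketched together. The example below is a hypothetical illustration, assuming the resolved addresses for the purported domain are supplied to the function so the sketch needs no live DNS; the names, verdicts, and message formats are assumptions, not any vendor's implementation.

```python
# Sketch of one objective criterion Egan mentions: flag a message when
# the sending IP address is not among the addresses associated with its
# purported sender domain, and report the reason (plus a complaint
# address) back to the sender.

def classify_message(sender_ip, claimed_domain, domain_ips):
    """Return (verdict, reason) based on a single functional criterion.

    domain_ips: addresses the filter has resolved for claimed_domain
    (injected here so the example is self-contained).
    """
    if sender_ip not in domain_ips:
        return ("quarantine",
                f"sending IP {sender_ip} does not match purported "
                f"domain {claimed_domain}")
    return ("deliver", "sender IP matches purported domain")


def rejection_notice(reason, complaint_addr="abuse@example.net"):
    # Per Egan's recommendation: tell the sender why the message was
    # captured and how to dispute the classification.
    return (f"Message captured: {reason}. To dispute this "
            f"classification, contact {complaint_addr}.")


verdict, reason = classify_message("203.0.113.7", "example.com",
                                   {"198.51.100.1", "198.51.100.2"})
print(verdict)  # quarantine
print(rejection_notice(reason))
```

Note the reason string uses functional language ("does not match purported domain") rather than a conclusory label like "spammer", in keeping with the precautions listed earlier.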
Defensive rights of Internet service providers
Charles Curran has spent many years in the trenches protecting consumers and other Internet users from unwanted online behavior—most recently as an assistant counsel at America Online. His article reviews the legal bases and practical justifications for antispam and antispyware technologies as used by ISPs and email service providers, as well as possible challenges to such technologies by mail senders and software makers excluded from consumers' computers. To begin with, as Curran points out, judicial decisions (and the ECPA) give great weight to network operators' proprietary rights, allowing them broad (although not unlimited) discretion in preventing damage to their properties—specifically, the networks of servers used to furnish email delivery or other Internet services. A corollary of the right to defend its proprietary network is an operator's right to exclude unlawful activity from its network. However, not all unwanted online activity is illegal. Authorized use policies, directed primarily to actors outside the email network, could bridge the potential legal gap by notifying senders of the network operator's specific antispam policy. Consumer consent, obtained through terms-of-service policies, might also establish legal authority to operate a spam-filtering or blocking system.
In terms of spam, spyware, and other online conduct falling into a "questionable" category, Curran echoes Egan's advice—filtering-system operators can look to independent mechanisms, including end-user feedback and third-party assessments, to establish that their filtering policies are neither capricious nor driven by the operator's own commercial interests. Although Egan's recommendations were primarily aimed at makers of filtering software, Curran's tips from the ISP perspective are strikingly consistent: be transparent by prominently disclosing information about the underlying classification system and policies used to administer it; apply the technology even-handedly; involve the end user in the decision-making process; where possible, supplement the decision-making process with reliable third-party information; build on broad-based frameworks like that of the Anti-Spyware Coalition; and provide parties affected by the technology a meaningful redress mechanism.
Offensive self-help not advisable
Rounding out our discussion of legal responses to harmful or unwanted online conduct, Gregory Schaffer provides an excellent counterpart to Curran's piece, as he explores the limits to service provider discretion. Schaffer focuses on botnets, the well-known networks of compromised computers controlled by "herders" that engage in nefarious conduct ranging from installing unwanted adware to launching debilitating distributed denial-of-service attacks. When faced with an anonymous attacker with seemingly unlimited resources and a relentless desire to cause harm, Schaffer notes, systems administrators have a natural tendency to want to fight fire with fire. However, despite their potential effectiveness in stopping immediate harm to attacked systems, many self-help solutions are ineffective in terms of actually impacting the malicious attacker. Moreover, the vast majority of proposed responses involve violating the same laws (and Internet norms) as the botnet herders. Although attempts at direct counterattacks have a certain visceral appeal, Schaffer concludes that there are better (and far less risky) responses.
To fight back, Schaffer recommends that enterprises be prepared in advance by adopting appropriate policies and vigilantly applying standard practices for implementing and maintaining a defense-in-depth computer security strategy. He also advocates building strong relationships with key partners and service providers. Contracts with service providers and major customers should require all parties to explicitly identify security points of contact and establish service-level expectations for response times and various types of assistance when either party experiences an emergency event. Building trust-based relationships with law enforcement authorities is also critical so that you'll have the courage to call them when things really start to go wrong.
Conclusion
Over the years, several leading technologists, such as Vint Cerf, John Gage, Craig Mundie, and Whit Diffie, have had the wisdom and patience to engage with policy makers to explain the implications of the digital revolution and the open, decentralized, user-controlled, and innovative technologies it has produced. As lawyers, we owe it to the tech community to explain the legal framework that in turn shapes technology. We hope these articles are the beginning of an ongoing dialogue.
James X. Dempsey is the policy director at the Center for Democracy and Technology. His areas of focus include government surveillance, information sharing for national security, and the international legal framework for Internet development. Dempsey has a JD from Harvard Law School. He has testified on numerous occasions before Congressional committees on issues at the intersection of privacy, electronic surveillance, and national security. Contact him at jdempsey@cdt.org.
Ira Rubinstein is an associate general counsel at Microsoft and heads Microsoft's Regulatory Affairs and Public Policy Group in Legal and Corporate Affairs, where his responsibilities include privacy, security, Internet safety, export controls, and telecommunications policy. He lectures frequently, and has testified many times before Congress on these topics. Rubinstein has a JD from the Yale Law School. Contact him at irar@microsoft.com.