Issue No. 6, November-December 2003 (vol. 1), pp. 5-7
Published by the IEEE Computer Society
Two hundred years ago, people could keep few, if any, secrets in their communities but were essentially anonymous a few towns away. Today, we scarcely know our neighbors, let alone their private affairs, but marketers halfway around the world can pull up credit reports and lifestyle preferences with a few keystrokes.
Moreover, we cannot see what marketers know about us, much less correct erroneous or outdated information or challenge derogatory information of dubious provenance. This breeds distrust and cynicism, attitudes that are corrosive and, ultimately, destructive.
Our expectations of privacy have changed over time, as have the technologies to support or compromise that privacy. It is important for us to steer technology and policy in directions that we prefer for ourselves and our children while recognizing that the landscape will continue to change. Those directions must be evaluated in terms of extremely elusive and time-varying costs (such as data collection and aggregation effort, loss of personal privacy and anonymity) and benefits (to national security, health care, and commerce, for example).
Ideally, the right technology should empower us as individuals to determine, implement, and enforce the privacy policies that we decide are best for ourselves. While the "no call," "no junk mail," and "no spam" approaches are steps in this direction, they do not prevent marketers from accessing and aggregating personal data in the first place—data that they can use in other undesirable ways, such as demographic redlining.
A true technical solution would let us as individuals control the whole data food chain: the collection, sharing, and use of our personal data. But what levels of effectiveness are in fact technologically possible, and what corresponding regulatory measures are needed to achieve those levels?
A government's acquisition, aggregation, and use of its citizens' data make the situation even harder to evaluate. While the costs to citizens, in terms of both dollars and personal freedoms, have already been widely discussed and documented, the benefits are harder to characterize. Specifically, to what extent will mining and analysis of citizens' data improve a society's functioning and security? What degrees of privacy and anonymity must be sacrificed to achieve what real, increased levels of social performance and national security?
Such questions cannot have yes-no, true-false answers; instead, they admit a range of technological possibilities and social choices, possibilities and choices we do not presently understand. The worst response to such questions, in my opinion, is to ignore them outright and fail to pursue the means by which we could eventually provide informed answers.
Over 50 years ago, at the dawn of the atomic era, a somewhat similar situation existed. Nuclear technology held enormous potential for both great good and great destruction. The response then was to separate technology from policy, pursuing both aggressively but making decisions about deployment in defense, energy, medical, and industrial applications contingent on the thoughtful synthesis of as many scientific and policy facts as possible.
Although we might not all agree on the decisions that have been made, our 50-year investment in the science and policy of nuclear technology has created an invaluable infrastructure of facts, people, and institutions that allowed the debates to be relatively open and informed.
Today, it is wholly appropriate to learn from that example and improve on it. We must invest significantly in both the science of information mining and the policy of personal privacy. If we do, within a decade or two we will have a solid infrastructure of people and scientific facts for reconciling technological means with social requirements, letting us meet our individual expectations and national needs appropriately.
In particular, research on privacy technology and policy should be accelerated, not diminished. The recent creation of a US National Science Foundation Science and Technology Center on these subjects is appropriate, timely, and encouraging. It's heartening that the NSF has embraced the research challenge, and sad that other agencies have been forced to avoid these research areas; diversity is good. The genie is out of the bottle. It's folly to think it can be put back in, and irresponsible to pretend it does not exist.
Conclusion
The articles in this issue showcase a range of topics concerning privacy. "Measuring Anonymity: The Disclosure Attack," by Dakshi Agrawal and Dogan Kesdogan, demonstrates how to estimate fundamental limits on the anonymity that an anonymity technique can provide; in future work, the authors plan to use these limits to build an intelligent assistant that helps users control and protect their privacy. Lorrie Faith Cranor's "P3P: Making Privacy Policies More Useful" looks at how the P3P standard can make Web site privacy policies more accessible to users. Simson Garfinkel's "Email-Based Identification and Authentication: A Public Key Infrastructure Alternative?" explores whether email addresses could serve as an alternative to a public key infrastructure. Abdelmounaam Rezgui, Athman Bouguettaya, and Mohamed Y. Eltoweissy examine various Web privacy issues in "Privacy on the Web: Facts, Challenges, and Solutions." Finally, Jean-Marc Seigneur and Christian Damsgaard Jensen, in "Privacy Recovery with Disposable Email Addresses," discuss disposable email address services and how a new variation, known as rolling email address protocols, may be the key to privacy recovery.

George Cybenko is the editor in chief of IEEE Security & Privacy. He is the Dorothy and Walter Gramm Professor of Engineering at Dartmouth College. He has a PhD in electrical engineering and computer science from Princeton University. Contact him at gvc@dartmouth.edu.