About 540 million years ago, a vast multitude of species suddenly appeared in the fossil record, along with a major diversification of organisms. Paleontologists call this event the “Cambrian explosion.” We are currently witnessing the computing world’s version of the Cambrian explosion. In the past decade, advances in sensing capabilities, screen technologies, solid-state storage, and wireless networking have enabled the cheap manufacture of computers in all kinds of form factors.
Rather than just large boxes sitting underneath our desks, today’s computers come as smartphones, tablets, glasses, cars, watches, fitness trackers, tables, physical games, and so on. Designers, artists, scientists, and engineers are all looking at new ways of weaving computation, communication, and sensing into our everyday lives.
Challenges and Risks
On the one hand, these pervasive computing technologies offer tremendous potential benefits, for both individuals and society, in terms of healthcare, sustainability, transportation, and more. Yet these same technologies pose significant new risks, such as accidental disclosure of personal information, overly intrusive advertising, undesired social obligations, potential embarrassment, and a general loss of freedom and control. This range of concerns is typically expressed under the umbrella of privacy (Daniel Solove’s “Taxonomy of Privacy” offers a good introduction to the subject’s many facets).
As we move toward truly pervasive computing, designing for privacy becomes increasingly important (see, for example, Marc’s 2001 paper on “Privacy by Design: Principles of Privacy-Aware Ubiquitous Systems,” which was awarded the Ten-Year Impact Award at Ubicomp 2011). From an ethical standpoint, people should be treated with respect and as individuals with autonomy and choice (see the December 2007 special issue on Ethics in Ubicomp of the International Review of Ethics for a good discussion). From a legal perspective, systems might need to be designed to comply with existing rules and regulations — a particularly challenging requirement in light of current legal developments, such as the ongoing reform of the European Privacy Directive. From a pragmatic and business perspective, people may choose to reject systems that they feel are too intrusive, as exemplified by the highly publicized exodus of users that WhatsApp experienced after Facebook acquired it.
However, designing for privacy can be quite difficult in practice. For one thing, pervasive computing technologies break our everyday conceptions of space and time, making it easy to intentionally, accidentally, or even maliciously share things that were done in one context with people in completely different contexts (see, for example, Leysia Palen and Paul Dourish’s “Unpacking Privacy for a Networked World”).
A recent example of this phenomenon is the area of ambient assisted living, which aims to help the elderly continue living independently while still receiving emergency care should the “smart home” detect a life-threatening situation (such as a heart attack or a fall). Clearly, criminals might want access to such data to plan break-ins, and insurance companies might want this information to catch fraudulent claims. But even trusted family members might accidentally receive access to recordings of private moments that were never intended to be shared.
Pervasive computing technologies can also clash with social norms, thus causing friction. A prime example is Google Glass, the wearable device that not only accepts voice commands to present its owners with information in a convenient heads-up display but also supports the hands-free taking of pictures and videos (see Jason’s 2013 article on “Considering Privacy Issues in the Context of Google Glass”). Although there is potential value in using Google Glass, the vast majority of news stories surrounding the product to date have focused on the associated privacy concerns. Whereas Google hails people wearing Glass as “Explorers,” public discourse increasingly calls them “glassholes.” Indeed, some bars and restaurants have even started to ban patrons from wearing Glass, and several Explorers have reported being physically assaulted for wearing the device.
A third challenge for privacy is sustainable revenue models. Many developers offer smartphone apps for free, for example, and use advertising to generate money. However, going down this route provides a strong incentive for a company to collect more and more information about its users to better target advertisements. As we’re already seeing with online behavioral advertising, this approach can yield higher revenues, but it also surprises many people and makes them feel they’re being tracked. Online social networks such as Facebook and Google Plus are frequently under public scrutiny for their use of user data (see, for example, “Inside Facebook’s Brilliant Plan to Hog Your Data,” Computerworld, May 2014). Note that this goes well beyond having a feeling of unease, as Cory Doctorow succinctly puts it in his 2008 article in the Guardian on why “Personal Data is as Hot as Nuclear Waste.” Even if you trust the original data collector, once such data is disclosed to others — perhaps through a corporate takeover or an involuntary data spill — it’s impossible to “get it back.” Although a recent ruling by the European Court of Justice may have asserted a “right to be forgotten,” implementing this will be far from trivial.
The research community is actively investigating three major themes in this arena. One such theme is location privacy, which includes obscuring one’s trajectory, developing system architectures for anonymizing individuals or minimizing information leakage for location-based services, and evaluating algorithms for blurring or deleting location data so that it can be safely shared with others (see John Krumm’s 2009 Survey of Computational Location Privacy for a good overview).
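To make the idea of blurring location data concrete, here is a minimal sketch of spatial cloaking in Python: a GPS fix is snapped to the center of a coarse grid cell, so every user inside the same cell reports an identical location. The function name and default cell size are hypothetical choices for illustration, not taken from any specific system in the surveyed literature.

```python
import math

def snap_to_grid(lat, lon, cell_deg=0.01):
    """Blur a GPS fix by snapping it to the center of a coarse grid cell.

    cell_deg is the cell size in degrees (0.01 degrees is roughly 1 km
    at the equator). Every fix inside a cell maps to the same point,
    making individual users within the cell indistinguishable.
    """
    blurred_lat = (math.floor(lat / cell_deg) + 0.5) * cell_deg
    blurred_lon = (math.floor(lon / cell_deg) + 0.5) * cell_deg
    return round(blurred_lat, 6), round(blurred_lon, 6)
```

Enlarging the cell trades location utility for stronger blurring; real systems typically combine such cloaking with the anonymization architectures and deletion policies mentioned above.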
This month’s CN theme begins with Reza Shokri and his colleagues’ recent IEEE Transactions on Dependable and Secure Computing article, “Hiding in the Mobile Crowd: Location Privacy through Collaboration,” which looks at how collaboration among mobile nodes can prevent user tracking. Next, in “Privacy-Preserving and Content-Protecting Location Based Queries,” from IEEE Transactions on Knowledge and Data Engineering, Russell Paulet and his colleagues propose a set of cryptographic protocols that hide a user’s interest in a particular region (which would otherwise give away their current location).
A second major research theme is smartphone privacy, particularly looking at what kinds of personal information smartphone apps are using. Landon P. Cox offers a nice introduction to the challenge of trusting such apps in “Usefulness Is Not Trustworthiness,” from IEEE Internet Computing. We follow on with “Privacy Management for Mobile Platforms — A Review of Concepts and Approaches,” by Christoph Stach and Bernhard Mitschang. The authors offer a review of current concepts and approaches to securing personal data on a smartphone, as well as suggest their own approach, which they call the privacy management platform (PMP).
A third theme lies at the intersection of the first two, specifically in pervasive sensing applications (see Delphine Christin and colleagues’ survey on privacy in mobile participatory sensing applications for an overview). Recent work by Ioannis Krontiris and Tassos Dimitriou on “Privacy-Respecting Discovery of Data Providers in Crowd-Sensing Applications” suggests that cloud-based agents could shield their owners’ location data from others while still supporting efficient area queries. Finally, Qinghua Li, Guohong Cao, and Thomas La Porta propose using homomorphic encryption techniques to support both Sum and Min queries over aggregated data in “Efficient and Privacy-Aware Data Aggregation in Mobile Sensing,” from IEEE Transactions on Dependable and Secure Computing.
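Li, Cao, and La Porta’s aggregation scheme is more involved, but the core idea behind a privacy-preserving Sum query can be illustrated with a textbook additively homomorphic scheme. The sketch below is a toy Paillier cryptosystem in Python (with deliberately tiny, hard-coded primes; real deployments use moduli of 2,048 bits or more) and is not the paper’s actual protocol: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so an untrusted aggregator can total sensor readings without ever seeing an individual value.

```python
import math
import random

def paillier_keygen(p=65537, q=65539):
    # Toy parameters for illustration only; p and q must be distinct primes.
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)  # valid because we fix the generator g = n + 1
    return n, (n, lam, mu)  # (public key, private key)

def encrypt(n, m):
    # c = (n+1)^m * r^n mod n^2, with a fresh random r coprime to n
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    # L(x) = (x - 1) // n, then m = L(c^lam mod n^2) * mu mod n
    return (pow(c, lam, n2) - 1) // n * mu % n
```

Because `encrypt(n, a) * encrypt(n, b) % (n * n)` decrypts to `a + b`, each phone can submit a single ciphertext and the aggregator forwards only the product; only the holder of the private key can recover the total. (The modular inverse via `pow(lam, -1, n)` requires Python 3.8 or later.)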
The articles in this month’s theme offer a sampling of major topics in privacy for pervasive computing environments. However, privacy encompasses a wide spectrum of topics, including ethics, law, social norms, system architectures, algorithms, and user interfaces. We encourage you to take a broad perspective on this topic and join the ongoing debates about how we can design for this thorny issue.
We leave you with a parting thought: How can we create a connected world that we would all want to live in?
J Hong and M Langheinrich, “Privacy Challenges in Pervasive Computing,” Computing Now, vol. 7, no. 6, June 2014, IEEE Computer Society [online]; http://www.computer.org/publications/tech-news/computing-now/privacy-challenges-in-pervasive-computing.
Jason Hong is an associate professor in the Human-Computer Interaction Institute at Carnegie Mellon University. His research is in mobile and ubiquitous computing, human-computer interaction, privacy, and security. Hong is an Alfred P. Sloan Foundation Fellow, a Kavli Fellow, and a PopTech Science fellow. Contact him at firstname.lastname@example.org.
Marc Langheinrich is an associate professor in the faculty of Informatics at the Università della Svizzera Italiana (USI) in Lugano, Switzerland. His research is in public displays, community informatics, and usable privacy and security. He is an associate editor in chief for IEEE Pervasive Computing and a member of the steering committee of the UbiComp conference series. Contact him at email@example.com.