Issue No. 4, October–December 2007 (vol. 6), pp. 2–4
Published by the IEEE Computer Society
Roy Want, Intel
ABSTRACT
Ensuring both pervasive security and ease of use is a challenge for our research community. Pervasive privacy will be even more difficult to achieve.
Security and privacy are hot topics to consider when designing pervasive computing systems. Hot is the operative word, because if you compromise security or privacy, you'll likely upset a lot of people, and a heated discussion will ensue. I doubt many people would disagree with this observation. However, in practice, I've found that attention to security as well as interpretations of privacy vary a great deal.
Security as an Afterthought
When designing any kind of computer system, it's common sense to consider security. However, many pervasive computing systems built under the guise of research don't start with a good security story. I've helped design and implement several such systems, so let me shed some light on the mindset.
A research project usually sets out to enable something that wasn't possible before. Designing such a system can be difficult, but the project's vision motivates the people involved and they're excited about actually using the resulting technology.
However, when it comes to security, we start thinking about ways to stop people from using this new capability. We create barriers that will stop the "bad guys" and, unfortunately, the "good guys" too when they forget their credentials. Even when a legitimate user has the correct credentials, security slows them down, requiring a password or similar authentication process. Security thus creates a negative mindset, and, for many of us, it's not why we joined the pervasive computing business.
Clearly, some researchers have staked their careers on pervasive computing security, and their important design contributions will help protect complex distributed systems. I don't wish to do anybody a disservice here; however, their motivation usually differs from that of the accompanying system builders. Consequently, we often don't architect secure systems, so we find ourselves adding security features only after we've accomplished the system's main goal.
Privacy and Context
The work I'm best known for from the '90s is the Active Badge project, which set out to find a way to automatically route telephone calls to the correct place in a building. To a new generation of researchers, this probably seems like a no-brainer: just buy everybody a cell phone!
However, at the time, there were no cell phones, and business phones were almost exclusively based on a private branch exchange (PBX) service (which many organizations still use). I wanted to automate the process of call forwarding from an employee's default extension to the extension closest to the person's location. The solution I came up with was to have everybody wear an electronic badge that periodically beaconed a unique infrared signal. A network of low-cost infrared receivers distributed throughout the building would then record the signal, and a central server could collect all the data. A simple network service would let clients enter a name and look up the corresponding badge ID to determine the station where it was last sighted, along with the corresponding room and nearest extension.
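To make that lookup path concrete, here's a minimal sketch in Python. All the names and sample data (Sighting, badge_by_name, the badge IDs) are hypothetical, since the actual schema and protocol aren't described here:

    # A minimal sketch of the Active Badge lookup path. All names and
    # sample data (Sighting, badge_by_name, the IDs) are hypothetical;
    # the real system's schema and protocol aren't described above.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Sighting:
        station_id: int   # infrared receiver that last heard the badge
        room: str         # room where that receiver is installed
        extension: str    # nearest telephone extension

    # In the real system, these tables would be fed continuously by the
    # central server as beacon reports arrive from the receiver network.
    badge_by_name = {"Roy Want": 42}                      # name -> badge ID
    last_sighting = {42: Sighting(7, "Lab 2", "x3141")}   # badge ID -> sighting

    def locate(name: str) -> Optional[Sighting]:
        """Resolve a name to where that person's badge was last sighted,
        as a client of the lookup service would."""
        badge_id = badge_by_name.get(name)
        return last_sighting.get(badge_id) if badge_id is not None else None

    hit = locate("Roy Want")
    if hit:
        print(f"Forward call to {hit.extension} ({hit.room})")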
As soon as we had built the system, we realized it was part of a far bigger pervasive computing story; thus the notion of context-aware computing was born. As you might expect, when we showed the system publicly, privacy was the main discussion point, inspiring a host of press articles with sensational titles such as "The Boss That Never Blinks" (San Jose Mercury News, West Magazine, 8 Mar. 1992) and "Orwellian Dream Come True: A Badge That Pinpoints You" (The New York Times, 12 Sept. 1992). Furthermore, reporters inevitably asked if we had sensors in the bathrooms and almost seemed disappointed when we told them we didn't.
Despite the external jibes at this location capability, the majority of my colleagues weren't deterred from wanting—and proudly wearing—the badges. On the whole, they viewed the project as breaking new ground and embracing the ubicomp vision. Displaying a badge meant you were "in" because ubicomp was "in." The system was certainly useful, but I'm not sure it would have been as successful without the implication that you were also helping to build the ubicomp vision. After all, it contributed to a loss of personal privacy in the office, and individuals might not have considered the value-to-cost trade-off to be worth it. It's hard to know without a control experiment.
The lesson I learned is that our interpretation of the right to privacy in the context of a new technology varies considerably. Whether a technology is seen as a good or bad thing is dramatically affected by the social setting in which it's used. In other words, there's no absolute standard for privacy that we can record in a rule book and follow when designing something new.
Issues for Pervasive Computing
In a world that has embraced pervasive computing, everyone will carry mobile devices, and the surroundings will contain a rich network of computer systems that communicate with each other and share resources to support user tasks. While this vision is still some way off, we're getting closer each day. We've already moved from a static computation environment centered on work and home to a more mobile one.
Ensuring both pervasive security and ease of use is a challenge for our research community. One mitigating factor is that smaller mobile devices now have the potential to let us carry a personal, trusted computing device at all times, and we can use it to overcome the lack of trust we might have in our surroundings. The assumption, of course, is that our mobile computers haven't been compromised.
Fortunately, considerable effort is going into the design of mobile systems based on Trusted Platform Modules (TPMs). These devices ensure a computer boots from a valid code image and can create a chain of trust from a root encryption key. TPMs can also authenticate remote parties and validate and decrypt content, all while maintaining the secrecy of their keys. Only a complex physical attack on such a device is likely to reveal its contents, so maintaining possession of the device offers a reasonable guarantee of security.
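The chain-of-trust idea can be illustrated with a short sketch of a measured boot. This isn't a real TPM interface; the extend function simply mirrors the general way a TPM platform configuration register (PCR) accumulates measurements, and the stage images and starting value are placeholders:

    # A sketch of the chain-of-trust idea behind a measured boot, not a
    # real TPM interface. Each stage's code image is hashed into a running
    # measurement before control is handed over, mirroring how a TPM
    # platform configuration register (PCR) is extended.
    import hashlib

    def extend(measurement: bytes, code_image: bytes) -> bytes:
        """PCR-style extend: new = H(old || H(image))."""
        return hashlib.sha256(
            measurement + hashlib.sha256(code_image).digest()
        ).digest()

    # Placeholder boot stages, measured in order from the root of trust.
    stages = [b"boot loader", b"kernel", b"os services"]

    measurement = b"\x00" * 32   # PCRs start from a known initial value
    for image in stages:
        measurement = extend(measurement, image)

    # A remote party that knows the expected value for a known-good stack
    # can compare against this digest to decide whether to trust the device.
    print(measurement.hex())

Because each step hashes in everything that came before it, a tampered boot loader or kernel changes the final digest, and a remote verifier can refuse to trust the device.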
Despite progress in this regard, physical security and secure protocols are only one part of the story. Social attacks that trick people into revealing credentials when they shouldn't are more difficult to guard against and present an ongoing challenge.
Pervasive privacy will be even more difficult to achieve. The more I learn about the subject, the harder it is for me to believe we can make effective progress. The main problem is that everything in the real world is unique, so a skillful observation can reveal a signature that you can trace back to the owner. For example, consider a mobile device with cellular communication and the ability to use strong encryption with rotating media access control (MAC) addresses to hide its identity. Despite considerable effort to ensure that the protocol protects privacy, a radio frequency expert could carefully analyze the RF signature and find characteristics in the baseband signal that are unique to the device, probably traceable to imperfections in the transistors or the crystal oscillator at the heart of the system. Because the device's owner will probably go home at some point, you could trace the signal to a house address and use a simple Web search to determine a name. You could then link the owner to sightings at other locations, note the times they were there, and infer the transactions that occurred.
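A short sketch shows why such a stable physical-layer signature defeats MAC rotation: sightings made under different addresses can still be clustered by the fingerprint. All the fingerprint strings and sightings below are invented for illustration:

    # A sketch of why a stable physical-layer signature defeats rotating
    # MAC addresses: sightings made under different addresses can still be
    # clustered by the fingerprint. All fingerprints and sightings are
    # invented for illustration.
    from collections import defaultdict

    # (rotating MAC address, RF fingerprint, where/when observed)
    sightings = [
        ("aa:01", "osc-drift-0.93ppm", "cafe, 09:12"),
        ("bb:02", "osc-drift-0.93ppm", "office, 10:05"),
        ("cc:03", "osc-drift-1.70ppm", "cafe, 09:30"),
        ("dd:04", "osc-drift-0.93ppm", "home address, 19:44"),
    ]

    tracks = defaultdict(list)
    for mac, fingerprint, where in sightings:
        tracks[fingerprint].append(where)   # the rotating MAC is irrelevant

    for fingerprint, places in tracks.items():
        print(fingerprint, "->", places)
    # The track that ends at a home address links every earlier sighting,
    # under any MAC, back to one identifiable person.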
Conclusion
Both security and privacy will thus continue to pose special challenges for computer systems, challenges that are amplified when those systems are used to support the goals of pervasive computing. It will be interesting to see what guarantees, if any, we can make to the future users of these systems.