Toward a Science of Security

Guest Editor's Introduction • Munindar P. Singh • January 2013


"We have met the enemy and he is us."
—Walt Kelly, Pogo

Over the past few decades, security research has garnered increasing attention and funding. Despite much effort, however, current security practice conveys an ad hoc flavor: find a bug; patch it; find the next bug; and so on. This methodology is sometimes termed engineering, though only in the narrow sense of developing solutions to specific problems.

In contrast to this approach, the past few years have seen a growing push within the research community to develop a science of security. Leading funding agencies, such as the US National Science Foundation and the US Department of Defense, have initiated research programs specifically promoting the study of security as a science. The motivation behind these programs is to develop a systematic body of knowledge with strong theoretical and empirical underpinnings to inform the engineering of secure information systems that can resist not only known but also unanticipated attacks. A compelling vision is to seek metrics — for example, describing how secure a system is in what kinds of situations under what kinds of threat.

Part of the challenge lies in the fact that computing is not a natural science — a point that seems to lead to much angst and soul searching among computer scientists. Years ago, Herb Simon made the key observation that computing is a science of the artificial. As such, it needs not only principles but also an approach to systematizing knowledge through empirical investigation, however much they might differ from those in, say, physics or biology. Rather than making predictions about the natural world, we would be making claims about IT representations and architectures, and the organizations in which they were realized.

Open Systems

Security differs from computing at large in two key ways. First, it is inherently a human endeavor: it concerns humans, and we are its active players. The recognition that humans are active in security is leading to approaches that apply insights from psychology to understand how people conceptualize private information, why they are susceptible to certain kinds of attacks, and how we might help them deal with threats given the limitations of attention and cognition.

Security also fundamentally presupposes an open system. If it were possible to perfectly circumscribe a system, no security challenges would exist beyond ensuring its correctness or integrity. After all, every intrusion involves the violation of some assumption. The system's open nature means that the participants and their actions are not known ahead of time. However, computing as a discipline carries a strong prejudice toward dealing with closed systems. Indeed, the idea of a well-circumscribed "system" is entrenched in our language, and all too often we talk of "the system" as a box we can kick. We imagine users as sitting outside such a system and interacting with it.

Norms: A Vision

The foregoing leads me to advocate the idea of a normative account of systems and security as a basis for the science we seek. Specifically, when we think about systems in the broader sense, we should think of users and malefactors alike as being part of the system. That is, a system's security lies not at its perimeter but in its very core. A system thus corresponds to a society, whether the entire human society or, more often, a suitable microcosm. A security property is a norm in this system-as-a-society, and a security violation is a violation of some norm.

It is important representationally, and even more so from a security standpoint, that these norms not be general conditions indicating that something good happens (liveness) or that nothing bad happens (safety). Such traditional construals make sense when talking about a unitary system owned by one party and operated from that party's perspective, but when we shift attention to open systems, general constraints make less sense: what is good or bad depends on whom you ask. Moreover, we must ground the norms in a notion of accountability so that when they are violated, we know who did so.

Understanding such norms will prove crucial to articulating a science of security. The properties of interest should be proven via assumptions and guarantees regarding such norms. We might further quantify these norms' prospective success and failure to produce metrics of interest.
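
To make the idea concrete, here is a deliberately minimal sketch of what a norm with built-in accountability might look like; the names (Norm, debtor, creditor, and so on) and the structure are illustrative assumptions of mine, not drawn from any particular formalism or from the articles in this issue.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Norm:
    """A directed expectation: the debtor is accountable to the creditor."""
    debtor: str                          # whom to hold accountable on violation
    creditor: str                        # who may hold the debtor to account
    antecedent: Callable[[Dict], bool]   # when the norm is in force
    consequent: Callable[[Dict], bool]   # what the debtor must bring about
    label: str = ""

def violations(norms: List[Norm], state: Dict) -> List[str]:
    """List the accountable party for each norm that is in force but not satisfied."""
    return [
        f"{n.debtor} is accountable to {n.creditor} for violating: {n.label}"
        for n in norms
        if n.antecedent(state) and not n.consequent(state)
    ]

# Example: a provider that holds a user's data must not share it without consent.
no_unconsented_sharing = Norm(
    debtor="provider",
    creditor="user",
    antecedent=lambda s: s.get("holds_user_data", False),
    consequent=lambda s: not s.get("shared_data", False) or s.get("consent", False),
    label="do not share the user's data without consent",
)

state = {"holds_user_data": True, "shared_data": True, "consent": False}
print(violations([no_unconsented_sharing], state))
```

The only point of the sketch is that a violation directly yields an accountable party, which is what distinguishes a norm in this sense from a bare safety or liveness condition.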

A Word of Caution

Well-defined concepts are a necessary element of science, albeit a necessarily slow part to develop. Experiments and observations all rely on the presence of the appropriate concepts. For example, today it makes sense for us to measure mass and momentum. Yet, these concepts were far from clear to medieval scholars and even to early scientists such as Galileo. Galileo's predecessors talked about impetus, which, though its reflection can be seen in the modern concepts of mass, momentum, and kinetic energy, no longer exists as a technical concept. I can't help but think that the science of security is still at its pre-Galilean stage. We should state and refine our hypotheses by all means and conduct measurements where we can, but we should remember that what we're measuring might prove as crucial to the science of security as impetus has been to modern physics.

This Month's Theme

Security is a huge area, and the science of it has many components. Reflecting my own preference for human and normative concerns, I've chosen the following works as representative of the directions I expect the field to take.

In "On Adversary Models and Compositional Security," Anupam Datta and his colleagues address the challenge of determining whether a system satisfies certain kinds of security properties, namely, safety properties, which guarantee that nothing bad happens on any computation. They model a system as having two kinds of components: those that are (correctly) trusted and those that are adversaries. The adversaries may invoke resource interfaces in arbitrary orders, whereas trusted components do so only appropriately.

Recall that a captcha is a small challenge problem placed on a website for users to solve in order to proceed to some step such as creating an account. The idea is that the problem (identifying distorted letters, for example) would be easy for a human but difficult for a computer, thereby providing a defense against attackers who would use a machine to, for example, generate false accounts. In "How Good Are Humans at Solving Captchas?" Elie Bursztein and his colleagues evaluate the effectiveness of several commonly used captchas, finding that they're often harder for humans than their designers might have intended.
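
For readers who haven't built one, the round trip behind a captcha is simple. The following toy sketch is my own simplification (a real deployment renders a distorted image and tracks the challenge in a server-side session rather than handing back a hash); it shows only the challenge-and-verify shape:

```python
import hashlib
import random
import string

def make_challenge(length=6):
    # The answer would be rendered as a distorted image; the token is what the server keeps.
    answer = "".join(random.choices(string.ascii_lowercase, k=length))
    token = hashlib.sha256(answer.encode()).hexdigest()
    return answer, token

def verify(user_input, token):
    return hashlib.sha256(user_input.strip().lower().encode()).hexdigest() == token

answer, token = make_challenge()
print(verify(answer, token))    # True: the human read the image correctly
print(verify("wrong", token))   # False: reject and issue a fresh challenge
```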

Central to the idea of dealing with an open system is the idea of an identity. Elisa Bertino's "Trusted Identities in Cyberspace" provides a short survey of current views on digital identity and how they're created and shared.

"Obligations in Risk-Aware Access Control" takes a normative approach as Liang Chen and his colleagues introduce an approach that incorporates risk assessments in decision-making. In particular, their approach supports policy violations when necessary as long as a responsible party takes on an obligation to clean up after the fact. For example, a nurse might be authorized to release a drug in an emergency when a physician was unavailable to decide as long as the nurse provided the rationale for doing so within a certain time.

In "A User-Activity-Centric Framework for Access Control in Online Social Networks," Jaehong Park and his colleagues introduce an approach that separates users' main activities from administrative activities (performed by users or on their behalf). A key motivation for this approach is to help express and enforce policies that capture users' preferences in terms of how they interact with others and how they wish to modulate others' interactions — for example, when a parent controls the policies by which a child interacts with others.

Finally, in this Industry Perspective video extra, Steve Lipner, Director of Program Management for Trustworthy Computing Security at Microsoft, discusses some of the real-world challenges and approaches he has seen in his 43 years in the field of security.


Acknowledgments

Thanks to Amit Chopra and Laurie Williams for comments. Thanks to the US Army Research Office for support under the Science of Security Lablet grant.

Munindar P. Singh is a professor in the Department of Computer Science at North Carolina State University. His research interests include multiagent systems and service-oriented computing with a special emphasis on the study of trust and privacy from the perspective of norms. Singh was editor in chief of IEEE Internet Computing from 1999 to 2002, and serves on several editorial boards. He served on the founding board of directors of IFAAMAS, the International Foundation for Autonomous Agents and MultiAgent Systems. His research has been recognized with awards and sponsorship by the US Army Research Laboratory, US Army Research Office, Cisco, DARPA, Ericsson, IBM, Intel, Joint Oceanographic Institutions, US National Science Foundation, and Xerox. Seventeen students have received PhD degrees under his direction. Singh is an IEEE Fellow. Contact him at singh@ncsu.edu.
