Yahoo! Research
Palo Alto Research Center
Palo Alto Research Center
Pages: 10-12
Although we increasingly hear that the Internet is a risky place, we might not yet see the threat as personal, let alone social or organizational. Threats are directly embodied in the form of malicious users — as phishing attacks and other forms of social manipulation (the classic "con") — but they increasingly use socially generated information. We generate data and metadata by our actions on the Web. Those we trust generate information about us or that can be linked to us. For example, your social network as embodied in MySpace or Facebook becomes fodder for more effective automated phishing attacks (see www.betanews.com/article/CrossSite_Scripting_Worm_Hits_MySpace/1129232391) or a highway across which specialized malware can spread. 1 Burying the problem in the infrastructure hasn't proven effective (for example, 68 percent of deployed Web server security certificates are currently invalid 2). We must find ways to address this growing challenge. One approach has been to increase people's general knowledge about existing dangers and the steps they can take to protect themselves. Information sharing of this kind has met with some — but not a lot of — success. Another approach is to design better applications and interfaces that offer instruction and reflection as well as protection.
Recent research on the usability of security technologies — often termed HCISEC (joining human-computer interaction with security) — tries to put humans in the loop and views usability as a key component for both accepting security technologies and using them correctly. 3 Unfortunately, many studies are unrealistic — that is, they lack ecological validity. Put simply, there are study-design problems and ethical issues involved in sufficiently simulating a security violation so that people act as they would if they really were victims of such an attack. 4,5
Further, most of this work focuses only on the interface between the user and the computer system, aiming to improve the basic usability of security mechanisms as experienced at the user interface. Little work considers the broader task context — the social and organizational settings in which users' tasks take place. What makes this rather myopic approach to computer security problematic is that one person alone can't make a system secure.
From a social and human-activity-centered perspective, security is systemic, so we must assess usability — along with usefulness (Is this solving a felt problem? Is it serving a purpose?) and practicality (Is the cost greater than the benefit?) — as part of people's everyday activities. 1 Effective security policy must bow to social and organizational realities: if you make a system secure at the cost of preventing the organizational interactions it was designed to support, the system is a failure. Negotiating, instituting, and maintaining real-world security procedures and practices is a social activity, and the resulting social protocols often form a key component in enforcing security policy. To offer an analogy: if you want to get fit, you can exercise until you drop, but until you address your lifestyle habits as a whole, you won't reach peak condition — thinking systemically is important.
Work in this space sits at the intersection of computer science, psychology, sociology, anthropology, and system design, and requires collaboration between researchers in these different areas. Some thinking has begun to emerge along these lines, with important figures in the computer security field becoming more actively involved in creating contexts for conversations between these key research areas (see the " Related Resources for Useful Security" sidebar for upcoming workshops on this topic).
In this vein, we sought to create an issue of IC around thinking more critically about the sociotechnical problems that computer and data security impose. We sought articles that would go beyond the interface to those at the center of Internet, data, and computer security issues and would look at how social and organizational forces determine both security requirements and how to address them. We called for case studies and field investigations of everyday data, computer, and network security practices; analyses of individual, organizational, and cultural models of risk assessment for practical Internet and computer security protocols; articles that addressed trust and risk models that system designers and developers could exploit in the design process; articles on novel interfaces, applications, and infrastructures designed to address security needs and concerns; broader new paradigms and approaches for networked computer and data security; reflections on evaluation methods that study technology situated in its expected social and organizational context; and articles that addressed the design of adaptive and reflective systems.
Despite wide distribution of the call for papers and conversation about the issue — and considerable interest shown — we received very few submissions that actually offered a higher-level, socially grounded perspective on issues that could help practitioners learn about the area and about approaches beyond quick fixes and technocentric, limited-utility solutions. We were somewhat surprised. Perhaps because the topic sits at the intersection of so many areas of thought, no one feels it is tractable or interesting. More likely, the conversation needs to begin in earnest with open workshops and conference tracks dedicated to the difficult work of problem-centered, multidisciplinary perspectives. Many submissions offered excellent analyses but didn't address the broader issue of how to have an effective multidisciplinary conversation. The two articles we introduce here confronted the issues head-on, acknowledging the topic's complexity and the likelihood that an easy solution isn't possible. These articles also offer citations for further delving into this emerging area of investigation.
In "A Brief Introduction to Usable Security," Bryan D. Payne and W. Keith Edwards give a historical account of design research into the tension between making information secure and keeping our computer systems useful for the tasks they support. Using examples from technologies for user authentication and email encryption, they illustrate how usable security isn't solely a matter of making interfaces to security measures usable but might also involve deeper structural considerations and the understandings that people bring to security. Their overview concludes with a reprise of current design guidelines for usable security, which they use to critique case studies of research exemplifying successes and failures in the area.
One key challenge in making security usable is determining exactly who the "users" are — all too often, we consider only end users and ignore the (sometimes convergent, sometimes conflicting) usability needs of developers and systems administrators. "Searching for the Right Fit: Balancing IT Security Management Model Trade-Offs," by Kirstie Hawkey, Kasia Muldner, and Konstantin Beznosov, presents a new case study that examines organizational structure's role in the effectiveness of IT security administrators. Through extensive in situ interviews of IT professionals in an organization that moved from a centralized to a distributed security administration structure, the authors were able to examine issues of authority and changing organizational goals for security management within these different frameworks.
We look forward to more extensive discussions on useful security that go to the center of Internet, data, and computer security issues, and that look at how social and organizational forces determine both security requirements and how to address them — systemically.
We thank all those who submitted their work, as well as the hard-working reviewers who made available their time and expertise on the wide range of issues in this difficult problem area.
Here are a few resources, as well as venues in which the discussion on useful computer security might move forward.