Improving Cybersecurity: User Accountability and Sociotechnical Systems

Guest Editors’ Introduction • Özgür Kafalı and Munindar Singh • April 2017

Spanish and Chinese translations by Osvaldo Perez and Tiejun Huang.

Audio narration by Steve Woods (English) and Martin Omana (Spanish).


With the increasing amount and importance of data that modern information systems maintain, security and privacy breaches pose an increasing threat to users, as well as system designers and operators. A key cybersecurity challenge is human error. According to a 2015 US Department of Defense report, accidental misuse caused by poor user and data management practices accounts for a majority of its cybersecurity incidents. The 2016 Healthcare Information and Management Systems Society cybersecurity survey discusses the same problem in hospitals and physician offices. For example, in 2010, a failure to erase patient data on photocopier hard drives led to a hospital accidentally disclosing 350,000 patient files — and facing a $1.2 million fine from the US Department of Health and Human Services.

No purely technical solution would adequately control user actions and prevent these data breaches. Consequently, researchers are increasingly considering human aspects of cybersecurity, with a focus on user accountability. Sociotechnical systems (STSs) are composed of both social (people and organizations) and technical (computers and networks) elements. The idea is that we might not be able to prevent certain user actions or inactions (and, to promote flexibility and utility, we should not try to), but we should capture a precise standard of interaction to which each user is held.

This April 2017 Computing Now theme discusses STSs and computational solutions to support accountability. We’ve selected six articles and a video that present an overview of accountability approaches, from implementing secure systems to investigating breaches in healthcare and online social networks.

Computational Models for Accountability

In the context of security and privacy, accountability is the property that ensures that the actions of an entity can be traced solely to that entity. The EU’s Data Protection Working Party describes accountability as "showing how responsibility is exercised and making this verifiable."

From a computational perspective, these definitions pose a great challenge. The following security and privacy technologies have limited effectiveness on their own when it comes to accountability:

  • Role-based access control mechanisms simply protect access to sensitive assets; they control only which users have the ability to take certain actions — not when, how, or whether those users actually take those actions. Access control might interfere with functionality when it prevents a user from performing a legitimate task, and yet might not prevent illegitimate actions.
  • Logging monitors events in the software system and enables digital forensics operations, or audits, when a data breach occurs. However, existing logging approaches cannot guarantee logging of all events relevant to identifying the actual cause of a breach. Moreover, they cannot guarantee logging of only those events that are relevant to a breach, creating privacy violations through the log data itself.
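The gap between these two mechanisms can be pictured with a minimal sketch (the roles, actions, and record IDs here are hypothetical): an access-control check decides only whether a user *may* act, while a log records what actually happened. Neither, on its own, ties an outcome back to a responsible party.

```python
# Minimal sketch: role-based access control grants capability,
# but only the audit log records what a user actually did.
ROLE_PERMISSIONS = {
    "physician": {"read_record", "update_record"},
    "clerk": {"read_record"},
}

audit_log = []  # append-only record of attempted actions

def attempt(user, role, action, record_id):
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    # Log both permitted and denied attempts; without this,
    # access control alone leaves no trace for forensics.
    audit_log.append((user, role, action, record_id, allowed))
    return allowed

attempt("alice", "physician", "update_record", "patient-42")  # permitted
attempt("bob", "clerk", "update_record", "patient-42")        # denied
```

Even with both mechanisms in place, the log cannot say whether a *permitted* action was legitimate in context, which is exactly the gap that sociotechnical approaches target.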

STSs combine these technical solutions with social models for improved accountability. New approaches seek to model STSs, and recent research formally represents social norms — understood as regulatory constructs, such as commitments, authorizations, and prohibitions.
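As a rough illustration (a stand-in, not the formalism of any of the articles that follow), a regulatory norm such as a prohibition can be represented as data and checked against an event trace, so that violators can be identified and held accountable:

```python
from dataclasses import dataclass

@dataclass
class Prohibition:
    """A norm forbidding `action` by anyone acting in `role`;
    a matching event counts as a violation."""
    role: str
    action: str

def violations(norms, events):
    # Each event is (user, role, action); flag events that
    # match a prohibition, naming the violating user.
    return [(user, n) for n in norms
            for (user, role, action) in events
            if role == n.role and action == n.action]

norms = [Prohibition(role="clerk", action="export_records")]
events = [("bob", "clerk", "export_records"),
          ("alice", "physician", "read_record")]
found = violations(norms, events)  # flags bob's export, not alice's read
```

Commitments and authorizations can be modeled in the same spirit, with the violation condition inverted (an obligated action that never occurs, rather than a forbidden one that does).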

The Articles

In "Program Actions as Actual Causes: A Building Block for Accountability," Anupam Datta and his colleagues investigate causation in connection with security protocols and violations to understand accountability, providing a case study of compromised notaries. Causal analysis helps identify why a violation occurred, which party is to blame, and how the protocol could be refined to prevent another such violation. The authors propose an approach that first determines the minimal set of actions from a log trace that led to the violation, and then removes irrelevant actions to identify the actual cause. The authors argue that their approach correctly assigns blame and provides supporting explanations to ensure accountability in security protocols.
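The minimization step can be pictured with a toy sketch (a simplification, not the authors' formal definition of actual causation): given a predicate that replays a trace and reports whether the violation still occurs, repeatedly drop actions that are not needed to reproduce it.

```python
def minimal_cause(trace, violates):
    """Greedily remove actions whose absence still reproduces
    the violation; what remains is a candidate actual cause.
    (Greedy removal is order-dependent; it is a sketch, not a
    complete causal analysis.)"""
    cause = list(trace)
    for action in list(trace):
        candidate = [a for a in cause if a != action]
        if violates(candidate):
            cause = candidate
    return cause

# Toy violation: a record is exported without a prior consent check.
def violates(trace):
    return "export" in trace and "consent_check" not in trace

trace = ["login", "read", "export", "logout"]
# minimal_cause(trace, violates) isolates ["export"] as the cause
```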

"RahasNym: Pseudonymous Identity Management System for Protecting against Linkability" describes an approach that prevents linkability to users’ real identities in online systems, yet preserves accountability for their transactions. Hasini Gunasinghe and Elisa Bertino propose pseudonymous identities for users, which cryptographically store their real identities. The solution aims to ensure that whenever a user has misbehaved in a transaction, there is a means to de-anonymize the user's pseudonymous identity and take appropriate action.

Özgür Kafalı and his colleagues present social mechanisms to tackle user accountability in "Nane: Identifying Misuse Cases Using Temporal Norm Enactments." They formalize regulatory norms to represent stakeholder requirements and automatically generate potential misuse cases by identifying norm-violation conditions. Norm violations can be used to determine what needs to be logged for making users accountable. The authors argue that automating logging mechanisms would not only improve efficiency in forensics operations but also prevent new privacy violations through the logs.

Focusing on intentional misuses from insider threats, "AccountableMR: Toward Accountable MapReduce Systems" proposes a MapReduce model for distributed file management systems that detects misuses by annotating all data flow with usage restrictions (based on the reasons for access to a data collection). Such usage reasons are given as a taxonomy and can be verified for compliance with an intended purpose. Huseyin Ulusoy and his colleagues implement AccountableMR on top of Apache Hadoop and evaluate it over datasets from Twitter, Google Images, and a simulated hospital database. The authors argue that the implementation enforces fine-grained access and usage control at the price of a small overhead relative to generic MapReduce computation.
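The flavor of purpose-aware processing can be sketched as follows (the tags and purposes are hypothetical, not AccountableMR's actual API): each record carries the purposes for which it may be used, and a task declaring a non-compliant purpose is filtered away from those records.

```python
# Each record is tagged with the purposes for which it may be used.
records = [
    {"id": 1, "data": "vitals", "allowed": {"treatment", "billing"}},
    {"id": 2, "data": "genome", "allowed": {"research"}},
]

def run_task(purpose, records):
    """Process only records whose usage restrictions permit this
    task's declared purpose; denied records are reported so the
    attempted access remains auditable."""
    processed, denied = [], []
    for r in records:
        (processed if purpose in r["allowed"] else denied).append(r["id"])
    return processed, denied

processed, denied = run_task("billing", records)  # → ([1], [2])
```

In a real MapReduce setting the same check would run inside each map task, so restrictions travel with the data across the distributed computation.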

In "PriGuard: A Semantic Approach to Detect Privacy Violations in Online Social Networks," Nadin Kökciyan and Pınar Yolum take a semantic reasoning approach to understanding which user events in online social networks lead to privacy violations. The detection algorithm uses ontologies and description logic to detect and prevent several categories of violations (identified through a survey of Facebook users), and it performs well in large networks relative to other approaches across multiple violation scenarios. The authors state that PriGuard helps achieve accountability in social networks by initiating on-demand and proactive violation checks.

Denis Butin and Daniel Le Métayer offer guidelines for implementing accountability measures in "A Guide to End-to-End Privacy Accountability." They distinguish among three types of accountability within an organization: that of policy, procedures, and practice. The authors systematically analyze the accountability requirements of data controllers (entities collecting personal data) through the personal data lifecycle, identify the key accountability evidence for auditors, and prepare the evidence in the form of “personal data free” audit logs to prevent new privacy risks.

Video Perspectives

Jeremy Maxwell discusses precision medicine.

The Industry Perspective

This month's video features Jeremy Maxwell, a director of information security at Allscripts and a former US Department of Health and Human Services security advisor. Maxwell discusses privacy challenges related to patient consent and data segmentation in precision medicine, which aims to bring genomics and social determinants of health into the diagnosis and treatment processes to improve care.


Ensuring accountability is crucial for preserving data security and privacy in modern information systems, but efforts should not detract from user flexibility. Software should allow users to carry out their tasks without too much interference, but whenever a user misbehaves, the software should be able to hold that user accountable. To maximize users’ autonomy and capture their accountability, it is essential to model how they interact with the system within their organization.


Thanks to the US Department of Defense for support under the Science of Security Lablet program.


Guest Editors

Özgür Kafalı is a postdoctoral researcher in computer science at North Carolina State University. His research interests include computational logic and security and privacy in sociotechnical systems. Kafalı has a PhD in computer engineering from Boğaziçi University, Turkey. Contact him at

Munindar P. Singh is a computer science professor at North Carolina State University, where he is also a co-director of the Science of Security Lablet. His research interests include sociotechnical system engineering and governance. Singh is an AAAI Fellow, an IEEE Fellow, a former editor-in-chief of IEEE Internet Computing, and the current editor-in-chief of ACM Transactions on Internet Technology. Contact him at

