Call for Papers: Special Issue on Machine Learning Security and Privacy

Submission deadline: 20 December 2021

Publication: July/August 2022

This special issue will explore emerging security and privacy issues in machine learning and artificial intelligence techniques, which are increasingly used to automate decisions in critical applications. As machine learning and deep learning are adopted in health care, finance, autonomous vehicles, personalized recommendations, and cybersecurity, understanding the security and privacy vulnerabilities of these methods and developing resilient defenses become essential. Early work in adversarial machine learning demonstrated the existence of adversarial examples: data samples crafted to evade a machine learning model at deployment time. Other threats include poisoning attacks, in which an adversary controls a subset of the data at training time, and privacy attacks, in which an adversary seeks to learn sensitive information about the training data or model parameters. Defenses proposed against these attacks draw on robust optimization, certified defenses, and formal methods. Consequently, there is a need to understand this wide range of threats, design resilient defenses, and address the open problems in securing machine learning.
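
To make the evasion threat concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the earliest evasion attacks; it assumes a differentiable PyTorch classifier, and the function and parameter names are illustrative rather than drawn from any particular paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarial version of input x that raises the loss on label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss fastest under an
    # L-infinity budget: the sign of the input gradient, scaled by epsilon.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    # Clamp so the result remains a valid image with pixel values in [0, 1].
    return perturbed.clamp(0.0, 1.0).detach()
```

The perturbation is often imperceptible to humans yet can flip the model's prediction, which is what makes evasion attacks a practical concern.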

We seek papers on all topics related to machine learning security and privacy, including:

  • Applications of machine learning and artificial intelligence to security problems, such as spam detection, forensics, malware detection, and user authentication
  • Evasion attacks and defenses against machine learning and deep learning methods
  • Poisoning attacks against machine learning at training time, such as backdoor poisoning and targeted poisoning attacks, and corresponding defenses
  • Privacy attacks against machine learning, such as membership inference, reconstruction attacks, and model extraction, and corresponding defenses
  • Techniques for securing AI and ML algorithms, such as adversarial learning, robust optimization, and formal methods
  • Differential privacy for machine learning and other rigorous notions of privacy (a minimal sketch of the basic mechanism appears after this list)
  • Adversarial machine learning in specific applications, including NLP, autonomous vehicles, healthcare, speech recognition, and cybersecurity
  • Methods for federated learning and their security and privacy
  • Secure multi-party computation techniques for machine learning
  • Side channel attacks on machine learning
  • System security techniques for securing machine learning
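
To ground the differential-privacy topic above, here is a minimal sketch of the Laplace mechanism applied to a counting query, the textbook building block of differential privacy; the function and parameter names are illustrative and not tied to any particular library.

```python
import numpy as np

def private_count(records, predicate, epsilon=1.0):
    """Release an epsilon-differentially-private count of matching records."""
    true_count = sum(1 for r in records if predicate(r))
    # A counting query has sensitivity 1: adding or removing a single record
    # changes the answer by at most 1, so Laplace noise with scale
    # sensitivity / epsilon provides epsilon-differential privacy.
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise
```

For example, private_count(patients, lambda p: p.age > 65, epsilon=0.5) returns a noisy count (with hypothetical record names) whose accuracy degrades as the privacy budget epsilon shrinks.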

Submission Guidelines

For author information and guidelines on submission criteria, please visit the Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts must not have been previously published or be currently under submission elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.

Questions?

Please email the guest editors at sp4-22@computer.org.

Guest Editors

Nathalie Baracaldo Angel, IBM Research, USA
Alina Oprea, Northeastern University, USA