Submission deadline: 20 December 2021
Publication: July/August 2022
This special issue will explore emerging security and privacy issues in machine learning and artificial intelligence techniques, which are increasingly deployed for automated decision making in many critical applications. As machine learning and deep learning advance into health care, finance, autonomous vehicles, personalized recommendations, and cybersecurity, understanding the security and privacy vulnerabilities of these methods and developing resilient defenses become extremely important. Early work in adversarial machine learning demonstrated the existence of adversarial examples: data samples crafted to evade a machine learning model at deployment time. Other threats against machine learning include poisoning attacks, in which an adversary controls a subset of the data at training time, and privacy attacks, in which an adversary seeks to learn sensitive information about the training data or model parameters. Defenses against these attacks draw on several approaches, including robust optimization, certified defenses, and formal methods. Consequently, there is a need to understand this wide range of threats against machine learning, design resilient defenses, and address the open problems in securing machine learning.
We seek papers on all topics related to machine learning security and privacy, including:
For author information and guidelines on submission criteria, please visit the Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts should not have been published previously and should not currently be under submission elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.
For questions, please email the guest editors at sp4-22@computer.org.
Guest Editors
Nathalie Baracaldo Angel, IBM Research, USA
Alina Oprea, Northeastern University, USA