CLOSED Call for Papers: Special Issue on Machine Learning Security and Privacy

Submission deadline: 20 December 2021

Publication: July/August 2022

This special issue will explore emerging security and privacy issues in machine learning and artificial intelligence techniques, which are increasingly used to automate decisions in many critical applications. As machine learning and deep learning are deployed in health care, finance, autonomous vehicles, personalized recommendations, and cybersecurity, understanding the security and privacy vulnerabilities of these methods and developing resilient defenses has become extremely important. Early work in adversarial machine learning demonstrated the existence of adversarial examples: data samples crafted to evade a machine learning model at deployment time. Other threats include poisoning attacks, in which an adversary controls a subset of the data at training time, and privacy attacks, in which an adversary seeks to learn sensitive information about the training data or model parameters. Defenses against these attacks draw on robust optimization, certified defenses, and formal methods. There is thus a need to understand this wide range of threats against machine learning, design resilient defenses, and address the open problems in securing machine learning.
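The evasion threat described above can be illustrated with a minimal sketch in the style of the fast gradient sign method, using a toy logistic-regression "model" with hypothetical weights; this is an illustrative assumption, not a method prescribed by the call:

```python
import numpy as np

# Minimal sketch of an evasion attack (FGSM-style) on a toy
# logistic-regression model. The weights w and bias b are hypothetical,
# chosen only to make the example self-contained.
rng = np.random.default_rng(0)
w = rng.normal(size=5)   # hypothetical model weights
b = 0.1                  # hypothetical bias

def predict(x):
    """Probability of the positive class under the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(x, y, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. x.

    For logistic loss, d(loss)/dx = (p - y) * w, so the attack steps
    in the direction sign((p - y) * w) to increase the loss.
    """
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = rng.normal(size=5)   # a benign input
y = 1.0                  # its true label
x_adv = fgsm_perturb(x, y, eps=0.5)
# The perturbation raises the loss, pushing the model's prediction
# away from the true label y.
print(predict(x), predict(x_adv))
```

With the true label `y = 1`, the crafted input `x_adv` receives a strictly lower positive-class probability than the original `x`, which is exactly the deployment-time evasion behavior the paragraph describes.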

We seek papers on all topics related to machine learning security and privacy, including:

  • Applications of machine learning and artificial intelligence to security problems, such as spam detection, forensics, malware detection, and user authentication
  • Evasion attacks and defenses against machine learning and deep learning methods
  • Poisoning attacks against machine learning at training time, such as backdoor poisoning and targeted poisoning attacks, and corresponding defenses
  • Privacy attacks against machine learning, such as membership inference, reconstruction attacks, and model extraction, and corresponding defenses
  • Techniques for securing AI and ML algorithms, such as adversarial learning, robust optimization, and formal methods
  • Differential privacy for machine learning and other rigorous notions of privacy
  • Adversarial machine learning in specific applications, including NLP, autonomous vehicles, healthcare, speech recognition, and cybersecurity
  • Methods for federated learning and their security and privacy
  • Secure multi-party computation techniques for machine learning
  • Side-channel attacks on machine learning
  • System security techniques for securing machine learning
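For the differential-privacy topic above, the classic building block is the Laplace mechanism, which releases a statistic with calibrated noise. A minimal sketch follows; the data, query, and parameter values are illustrative assumptions, not part of the call:

```python
import numpy as np

# Minimal sketch of the Laplace mechanism for epsilon-differential
# privacy. All names, data, and parameters are hypothetical.
def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with Laplace(sensitivity / epsilon) noise.

    If the query's output changes by at most `sensitivity` when one
    record is added or removed, the noisy release satisfies
    epsilon-differential privacy.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(42)
data = np.array([1, 0, 1, 1, 0, 1])  # hypothetical binary records
true_count = data.sum()              # counting query: sensitivity 1
noisy_count = laplace_mechanism(true_count, sensitivity=1.0,
                                epsilon=0.5, rng=rng)
print(true_count, noisy_count)
```

Smaller `epsilon` means stronger privacy but larger noise (scale `sensitivity / epsilon`), which is the core accuracy-versus-privacy trade-off studied in this line of work.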

Submission Guidelines

For author information and guidelines on submission criteria, please visit the Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts should not have been published previously or be under consideration for publication elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.

Questions?

Please email the guest editors at sp4-22@computer.org.

Guest Editors

Nathalie Baracaldo Angel, IBM Research, USA

Alina Oprea, Northeastern University, USA
