Call for Papers (Closed): Special Issue on Machine Learning Security and Privacy

Submission deadline: 20 December 2021
Publication: July/August 2022

This special issue will explore emerging security and privacy issues related to machine learning and artificial intelligence techniques, which are increasingly deployed for automated decisions in many critical applications today. With the advancement of machine learning and deep learning and their use in health care, finance, autonomous vehicles, personalized recommendations, and cybersecurity, understanding the security and privacy vulnerabilities of these methods and developing resilient defenses have become critically important. Early work in adversarial machine learning showed the existence of adversarial examples: data samples crafted to evade a machine learning model at deployment time (a minimal illustration follows the topic list below). Other threats against machine learning include poisoning attacks, in which an adversary controls a subset of the data at training time, and privacy attacks, in which an adversary seeks to learn sensitive information about the training data or model parameters. Approaches for defending against these attacks include robust optimization, certified defenses, and formal methods. Consequently, there is a need to understand this wide range of threats against machine learning, design resilient defenses, and address the open problems in securing machine learning.

We seek papers on all topics related to machine learning security and privacy, including:
  • Applications of machine learning and artificial intelligence to security problems, such as spam detection, forensics, malware detection, and user authentication
  • Evasion attacks and defenses against machine learning and deep learning methods
  • Poisoning attacks against machine learning at training time, such as backdoor poisoning and targeted poisoning attacks, and corresponding defenses
  • Privacy attacks against machine learning, such as membership inference, reconstruction attacks, and model extraction, and corresponding defenses
  • Techniques for securing AI and ML algorithms, such as adversarial learning, robust optimization, and formal methods
  • Differential privacy for machine learning and other rigorous notions of privacy
  • Adversarial machine learning in specific applications, including NLP, autonomous vehicles, healthcare, speech recognition, and cybersecurity
  • Methods for federated learning and their security and privacy
  • Secure multi-party computation techniques for machine learning
  • Side channel attacks on machine learning
  • System security techniques for securing machine learning
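
To make the evasion threat concrete, here is a minimal, hypothetical sketch in the spirit of the fast gradient sign method (FGSM): it perturbs an input in the direction of the sign of the loss gradient so that a trained classifier's prediction flips. The logistic-regression weights, the sample point, and the perturbation budget eps are illustrative assumptions, not taken from this call; eps is deliberately large so the flip is visible.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained binary classifier: p(y = 1 | x) = sigmoid(w . x + b).
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def predict_proba(x):
    return sigmoid(w @ x + b)

def fgsm_perturb(x, y_true, eps=0.5):
    """Shift x by eps in the sign of the loss gradient (white-box, FGSM-style).

    For logistic regression with cross-entropy loss, dL/dx = (p - y) * w,
    so crafting the perturbation only requires access to the model weights.
    """
    p = predict_proba(x)
    grad_x = (p - y_true) * w          # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)   # bounded L-infinity perturbation

x = np.array([0.8, 0.2, -0.3])         # benign sample, classified as class 1
print("clean prediction:      ", predict_proba(x))      # ~0.78 -> class 1
x_adv = fgsm_perturb(x, y_true=1.0)
print("adversarial prediction:", predict_proba(x_adv))  # ~0.32 -> class 0

Defenses within the scope of this issue, such as adversarial training, robust optimization, and certified defenses, aim to keep predictions stable under exactly this kind of bounded perturbation.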

Submission Guidelines

For author information and guidelines on submission criteria, please visit the Author Information page. Submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts should not have been published previously and should not be under review elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.

Questions?

Please email the guest editors at sp4-22@computer.org.

Guest Editors:
  • Nathalie Baracaldo Angel, IBM Research, USA
  • Alina Oprea, Northeastern University, USA