


Call For Papers: Special Issue on Security and Privacy of Generative AI

IEEE Security & Privacy seeks submissions for this upcoming special issue.

Important Deadlines:

Submission deadline: 06 March 2025

Publication: September/October 2025


Deep learning has made remarkable progress in real-world applications ranging from robotics and image processing to medicine. While many deep learning approaches and algorithms are in use today, few have made as widespread an impact as those belonging to the generative artificial intelligence (AI) domain. Generative AI involves the development of models that learn the underlying distribution of the training data; such models can then generate new data samples with characteristics similar to those of the original dataset. Common examples of generative AI include generative adversarial networks (GANs), variational autoencoders (VAEs), and transformers. In the last few years, generative AI and AI chatbots have made revolutionary progress, both technically and through their societal impact. As a result, generative AI has moved from being purely a research topic to something of equal interest to academia, industry, and general users.
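The "learn the distribution, then sample from it" idea above can be sketched with a deliberately tiny toy: fitting a Gaussian to training data and drawing fresh samples from it. This is a minimal illustration of the generative principle only (real GANs, VAEs, and transformers learn far richer distributions); all values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training data": samples from an unknown distribution
# (here, secretly a 2-D Gaussian).
train = rng.normal(loc=[2.0, -1.0], scale=[0.5, 1.5], size=(10_000, 2))

# "Training": estimate the distribution's parameters from the data.
mu = train.mean(axis=0)
sigma = train.std(axis=0)

# "Generation": draw new samples from the learned distribution.
# These are new data points, not copies of the training set, yet they
# share the statistical characteristics of the original dataset.
generated = rng.normal(loc=mu, scale=sigma, size=(10_000, 2))

print(np.allclose(generated.mean(axis=0), mu, atol=0.1))  # True
```

Modern generative models replace the hand-picked Gaussian with a neural network trained to approximate the data distribution, but the generate-new-samples-with-similar-statistics contract is the same.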

One domain where generative AI is making significant inroads is security, where it enables better, more secure designs as well as more powerful evaluations of system security. Unfortunately, generative AI is itself susceptible to various attacks that undermine its security and privacy. This special issue is dedicated to showcasing the latest technical advances in emerging technologies at the intersection of generative AI and security.

TOPIC SUMMARY:

To provide a comprehensive overview, we solicit papers presenting the latest developments in all aspects of security and generative AI. Within this broad scope, we prioritize the following topics:

  1. Generative AI for security. This special issue is highly interested in the development of new AI-based attacks and defenses that use generative AI as a tool to improve or evaluate the security of systems. Potential topics include generative AI and malware analysis, generative AI and code generation, and generative AI and cryptography.
  2. Security of generative AI. This special issue also welcomes papers that concentrate on the security of generative AI itself. Within this topic, we are interested in all flavors and input data types (images, text, sound, etc.) commonly used in generative AI. Possible topics of interest include adversarial examples, poisoning attacks, and centralized and decentralized settings.
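To ground the adversarial-examples topic mentioned above, here is a minimal numeric sketch of the fast gradient sign method (FGSM) against a toy linear classifier. The weights and inputs are hypothetical, chosen only to show how a small, gradient-guided perturbation can flip a model's prediction; this is an illustration of the general technique, not of any system discussed in this issue.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy logistic classifier; weights are assumed already trained
# (values are purely illustrative).
w = np.array([1.0, 1.0])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) >= 0.5)

x = np.array([0.3, 0.3])  # a benign input, correctly classified as class 1
y = 1
assert predict(x) == 1

# FGSM: perturb the input in the direction that increases the loss
# for the true label: x_adv = x + eps * sign(grad_x loss).
p = sigmoid(w @ x + b)
grad_x = (p - y) * w      # gradient of the logistic loss w.r.t. the input
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print(predict(x_adv))  # 0 -- a small perturbation flips the prediction
```

Against deep generative models the same principle applies, except the gradient is obtained by backpropagation through the network rather than in closed form.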

We invite submissions that extend and challenge current knowledge about the intersection of generative AI and security.

Suggested topics include, but are not limited to: 

  • Implementation attacks and generative AI
  • Malware analysis and generative AI
  • Security benchmarking of generative AI (LLMs)
  • Code generation, code line anomalies, and bug fixes with generative AI
  • Hardware design with generative AI
  • Watermarking and copyright protection of generative AI
  • Adversarial examples
  • Poisoning attacks
  • Privacy of generative AI
  • Jailbreaking attacks
  • Prompt injection and stealing attacks
  • Sponge attacks
  • Federated and decentralized learning
  • Explainable AI (XAI)
  • Safety of AI agents
  • Toxicity and harmfulness of AI-generated content
  • Detection of deepfakes
  • Red-teaming of generative AI (LLMs)
  • Fairness and machine interpretability


Submission Guidelines

For author information and submission criteria for full papers, please visit the Author Information page. As stated there, full papers should be 4,900–7,200 words long. Please submit full papers through the IEEE Author Portal system, and be sure to select the special-issue name. Manuscripts must not have been published elsewhere or be under submission elsewhere. Include no more than 15 references, and place related work in a separate box. Please submit only full papers intended for peer review, not opinion pieces, to the IEEE Author Portal.


Questions?

Contact the guest editors at sp5-25@computer.org.

  • Stjepan Picek, Radboud University, The Netherlands
  • Lorenzo Cavallaro, University College London, UK
  • Jason Xue, CSIRO’s Data61, Australia
